[HN Gopher] What Does It Mean for AI to Understand?
       ___________________________________________________________________
        
       What Does It Mean for AI to Understand?
        
       Author : theafh
       Score  : 83 points
       Date   : 2021-12-16 14:52 UTC (8 hours ago)
        
 (HTM) web link (www.quantamagazine.org)
 (TXT) w3m dump (www.quantamagazine.org)
        
       | machiaweliczny wrote:
        | BTW humans are generally stupid. We are only smart as a
        | collective, with all our trial and error saved in books, so
        | don't underestimate computers. Also, most progress happens
        | through outliers in humanity, so the same could happen once
        | AIs start to collaborate. Distributed MoE architectures are a
        | step in this direction.
       | 
        | My prediction: Codex and programming/operating APIs will give
        | NNs an ultra boost, then programming/math gets automated (hard
        | to master, easy to verify) and the rest of us will be history.
        | John Connor is already alive. Mark my words.
        
       | mcguire wrote:
       | " _A Winograd schema, named for the language researcher Terry
       | Winograd, consists of a pair of sentences, differing by exactly
       | one word, each followed by a question. ... In each sentence pair,
       | the one-word difference can change which thing or person a
       | pronoun refers to. Answering these questions correctly seems to
       | require commonsense understanding._ "
       | 
       | I don't buy it. These Winograd schemas are _significantly_
       | limited and are exactly the sort of thing that  "story
       | understanding" systems from the '70s and '80s were designed for.
        
       | trabant00 wrote:
       | It's incredible how stupidly powerful the rational part of the
       | human brain is. It has this unlimited capacity to get lost in
       | details.
       | 
       | "What does it mean for AI to Understand?" - we keep arguing over
       | definitions and moving the goal posts to make it seem we are an
       | step closer to reaching AI.
       | 
        | When my first AI coworker reads the onboarding docs and starts
        | solving Jira issues, I will have no doubt we have done it.
        | It's that simple.
       | 
        | Does anybody believe that an entity that actually developed AI
        | would start selling it? They would keep it for themselves and
        | literally take over the world! Complete domination of the
        | digital realm is one of the easiest things an AI could do - I
        | believe a lot easier than driving a car. And that alone would
        | make them God.
       | 
        | When the first true AI is born we will simply live the
        | experience. Imagine being there when we learned to control
        | fire. Would you argue over the definition of it? The size,
        | flame color, temperature and so on? Something that great
        | cannot be denied by such small details.
       | 
        | LE: what happened to the Turing test? Did we forget about it,
        | or does ordering things from Amazon when we command smartphone
        | assistants to turn down the lights actually fool us?
        
         | PeterisP wrote:
          | Regarding the Turing test: it turned out that fooling people
          | is easier than it might seem, so it's no longer considered a
          | reasonable qualification for general artificial
          | intelligence. Everybody now presumes that a system can exist
          | that beats the Turing test without even attempting to be
          | intelligent in any reasonable/general way.
        
           | bryan0 wrote:
           | I don't know why this is a common viewpoint. A proper Turing
           | test with trained judges and human subjects who are actually
           | trying to convince the judge they are human still seems like
           | the best test of intelligence IMO.
        
           | [deleted]
        
       | toisanji wrote:
       | a simulation engine in the human mind seems required for
       | understanding: https://blog.jtoy.net/on-the-wondrous-human-
       | simulation-engin...
        
       | throwawayai12 wrote:
       | I always get a little confused when people quote Oren. He hasn't
       | been involved in meaningful work in AI in decades despite leading
       | a large, well-funded group.
       | 
        | But boy howdy can he give talks that sound good to lay people.
        
       | wombatmobile wrote:
       | > What Does It Mean for AI to Understand?
       | 
       | What does it mean for a human to understand?
        
       | nqzero wrote:
       | What does it mean for a human to understand ?
        
         | prometheus76 wrote:
         | And to extend your question: write something sarcastic on the
         | internet and you'll quickly find out there are multiple layers
         | of "understanding" something.
        
       | cgearhart wrote:
        | Ignoring the associated debacle, the characterization of large
       | language models as "stochastic parrots"[1] is the most accurate
       | description I think I've ever heard for the capabilities of AI
       | language models. These models don't understand that a mistake on
       | a Winograd question is not the same as a mistake on a medical
       | diagnosis (as a contrived example).
       | 
       | [1] https://dl.acm.org/doi/10.1145/3442188.3445922
        
         | visarga wrote:
         | I don't think it's a good name. GPT-3 is more like a dreaming
         | hallucination machine. Humans do nonsensical things in their
         | dreams, same kind of non sequitur as the language models. What
         | GPT-3 lacks is being able to wake up from its disembodied
         | hallucinations, in other words a body and something to motivate
         | its actions.
        
       | rbanffy wrote:
       | What does "understanding" mean?
        
         | therobot24 wrote:
          | Situational awareness is usually separated into three levels
          | -- level 1 (perception), level 2 (comprehension), and level
          | 3 (prediction).
          | 
          | Understanding, while not completely described by situational
          | awareness, definitely has some relationship to it, and you
          | could probably use similar constructs for defining it.
        
         | robbedpeter wrote:
         | What it means for humans to understand has been posited by Jeff
         | Hawkins as a combination of synaptic links within and between
         | cortical columns, resulting in a physical neural construct that
         | activates when stimulated past a sufficient threshold of
         | inbound signals. Constructs can suppress nearby clusters of
         | neurons, or contribute an increase in readiness to signal, or
         | self modulate the activation threshold of constructs within the
         | cortical column.
         | 
         | The findings are consistent with current understanding of
         | neuroscience, and align with discoveries such as grid cells.
         | They also provide a basis for explaining what's actually
         | happening in the brain with phenomena like memory palaces,
         | rapid and efficient physical control of the body, combining
         | sense modalities such as sight and touch when catching a ball,
         | and so on.
         | 
         | Understanding is what happens when your brain has developed a
         | neural structure such that it's able to predict events
         | successfully based on the thing that was understood.
        
           | jjbinx007 wrote:
           | Children learn thousands of words growing up not by having
           | the definition read to them from a dictionary but by
           | inferring their meaning based on context.
           | 
            | Likewise, we learn general concepts that can be applied to
            | a wide range of scenarios. I can't see any other way an AI
            | could learn unless it mimics ours.
        
         | motohagiography wrote:
         | This is the important question. After reading Ted Chiang's
         | story "Understand" about this question, a simple working
         | definition could be that to understand something you apprehend
         | the domain of the subject from its substrate. Hence Feynman's
         | "what I cannot create I do not understand," as well.
         | 
         | In this sense, an AI could be said to _Understand_ language if
         | it used it as one of a selection of tools to operate on itself,
         | a peer or other being, or its environment.
        
         | AndrewOMartin wrote:
         | Very droll, but notice that I can write the phrase "Don't make
         | snarky comments" without us having previously agreed on a
         | formal definition of "snark".
        
           | selestify wrote:
           | Right, so you've demonstrated that the phenomenon exists.
           | Now, what defines that phenomenon, exactly?
        
             | gooseus wrote:
             | I'd say "understanding" is less one thing and more of a
             | collection of capabilities which work together to allow
             | understanding to "emerge".
             | 
              | One of those capabilities would be the ability to
              | contextualize an object/statement within multiple frames
              | of reference while also being able to compare and
              | contrast the different instances of those contextualized
              | objects/statements.
             | 
              | This is what allows a child to identify a bird as any
              | number of physical specimens of different species
              | (chicken, goose, eagle, sparrow), while also identifying
              | cartoon depictions that talk and simple drawings (the
              | Twitter icon) as birds... while also "understanding"
              | that although the Twitter icon can be called a bird, it
              | is not actually a real bird ("Ceci n'est pas une pipe")
              | and it would not be expected to sing or fly like a
              | backyard sparrow (unless it was animated, which would
              | still make sense to a child).
             | 
              | I think this is also what gives rise to our ability to
              | "understand" jokes, puns, and other turns of phrase - "I
              | just flew in from Boston, and boy are my arms tired!" -
              | this dumb joke requires a number of concepts to be
              | contextualized before you can "get" the absurdity of
              | stating that a human might tire out their arms while
              | flying... like you might think a bird would.
        
           | TheOtherHobbes wrote:
           | That's because snark is a pre-installed module in the Western
           | Human Online Communication Library Codebase.
           | 
           | The code for "understanding" is part of Generic Human
           | Firmware codebase and is tightly integrated with the OS.
           | 
           | Unfortunately it hasn't been open sourced.
        
             | PaulDavisThe1st wrote:
             | It's totally open source, just written in a language we
             | don't (yet) understand.
        
               | teddyh wrote:
               | I.e. a binary blob.
        
       | KhoomeiK wrote:
        | People outside the NLP research community need to understand
        | that a language model does nothing more than calculate
        | probabilities for the next word given some context. The way it
        | learns that probabilistic function can be quite complex,
        | involving billions of parameters, but it's still fundamentally
        | the same. Most of the increase in performance in recent years
        | has come from the ability to train LMs that better generalize
        | this probability function from huge text corpora -- functions
        | that better interpolate between the datapoints they're trained
        | on.
       | 
       | Humans use language with purpose, to complete tasks and
       | communicate with others, but GPT-3 has no more goals or desires
       | than an n-gram model from the 90s. LMs are essentially a faculty
       | for syntactically well-formed or intuitive/system 1 language
       | generation, but they don't seem to be much more.
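        | 
        | To make that concrete, the forward pass really does hand you
        | nothing but a distribution over the next token. A minimal
        | sketch with GPT-2 (standing in for GPT-3, whose weights aren't
        | public):
        | 
        |   import torch
        |   from transformers import GPT2LMHeadModel, GPT2TokenizerFast
        |   
        |   tok = GPT2TokenizerFast.from_pretrained("gpt2")
        |   model = GPT2LMHeadModel.from_pretrained("gpt2")
        |   model.eval()
        |   
        |   context = "The cat sat on the"
        |   ids = tok(context, return_tensors="pt").input_ids
        |   with torch.no_grad():
        |       logits = model(ids).logits         # [1, seq_len, vocab]
        |   probs = logits[0, -1].softmax(dim=-1)  # next-token dist.
        |   top = probs.topk(5)
        |   for p, i in zip(top.values, top.indices):
        |       print(repr(tok.decode(int(i))), round(p.item(), 3))
        | 
        | Everything the model "knows" has to show up through that one
        | distribution; scale and data only change how good the
        | distribution is.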
        
         | machiaweliczny wrote:
          | What if, to predict the next word, you ultimately need to
          | compress/understand the universe? AFAIK we have no clue
          | exactly how NNs pick up/structure information.
          | 
          | To me, backprop plus learning is similar to evolution, so
          | the result might be similar. My knowledge is very limited,
          | though, and I am happy to read any proofs/insights.
        
       | bryan0 wrote:
       | > Unfortunately, Turing underestimated the propensity of humans
       | to be fooled by machines. Even simple chatbots, such as Joseph
       | Weizenbaum's 1960s ersatz psychotherapist Eliza, have fooled
       | people into believing they were conversing with an understanding
       | being, even when they knew that their conversation partner was a
       | machine.
       | 
        | This is not a Turing test. Or at least not a reasonable one. A
        | reasonable Turing test consists of a judge and two interfaces,
        | one of which chats with a computer and the other with a human.
        | Both the human and the computer are trying to convince the
        | judge that they are the human. If the judge cannot determine
        | which is which after an open-ended conversation, then the
        | computer passes.
        | 
        | There is no reasonable judge who, after chatting with a human
        | (one who is actually trying to convince the judge they are
        | human), would be unable to differentiate between the human and
        | Eliza or any other chatbot out there.
        
       | hpoe wrote:
        | When it gets to questions like these, I feel we transcend
        | discussions of technology and end up on questions of
        | philosophy, which aren't going to go anywhere anytime soon.
       | 
        | I also feel that AI should be used to augment, not replace,
        | human decision making. Where AI shines is on problems that are
        | well defined with well defined solutions, and because the AI
        | doesn't get tired, hungry or distracted it can do that really
        | well, but it fails in novel situations[0]. As such, it seems
        | to me our best bet is to have the AI provide suggestions
        | rather than complete control.
       | 
        | 0. What I mean by that is I read an article (can't find it
        | now) about using AI to diagnose breast cancer. What they found
        | is that about 90% of the time the AI could accurately check
        | for breast cancer, but the other 10% of the time involved an
        | unusual mammogram or something relatively rare, and in those
        | situations the AI would often misdiagnose.
        
         | chermi wrote:
          | AI had a meaning that originated from CS theorists before
          | being usurped and losing all meaning. I think tech and its
          | associated marketing (which are great, don't get me wrong, I
          | love money!) are the ones at fault here. It should be more
          | of a "philosophical" question, though I'd perhaps prefer it
          | be an academic one.
         | 
          | I'm not trying to be rude, but your example of what AI
          | should be is narrow and not very grandiose compared to the
          | original meaning. I understand you were talking pretty
          | loosely, so I feel like I'm singling you out, but this
          | happened to be where I started typing, sorry!
         | 
          | It just reminded me of how essentially all conversations
          | about "AI" go. They seem to end up being quite specific,
          | narrow pattern recognition problems at the end of the day.
          | Maybe there's some decision theory on top of it. Maybe if
          | there's enough money/people involved, there are more
          | components, so it's a complicated enough supervised learning
          | problem that it mimics people to a sufficient extent that it
          | looks intelligent enough to make a headline. But it's a
          | copycat, not intelligent. Hey, full circle, Melanie
          | Mitchell! -
          | https://en.m.wikipedia.org/wiki/Copycat_(software)
        
         | [deleted]
        
       | jll29 wrote:
        | As an AI professor, I've always held that machines are NOT
        | intelligent (I am prepared to change my position on the day my
        | computer asks me anything surprising that I didn't program it
        | to ask).
       | 
        | But this does not mean we cannot produce operational models of
        | understanding. For example, we have models of
        | propositional/logical semantics and discourse, such as Lambda
        | Discourse Representation Theory and others, which can compute
        | a formal representation of the meaning structures of a piece
        | of text. These have been used e.g. for question answering, and
        | working in this space has been a lot of fun, and continues to
        | be. At the moment people talk a lot about "deep" learning
        | (neural networks with more than one hidden layer), but for
        | such models we need to do a lot more work on explainability,
        | because it is too dangerous to use black boxes in real life.
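        | 
        | To give non-NLP readers a flavour of what an operational model
        | of meaning buys you, here is a deliberately toy, hand-rolled
        | sketch (nothing like full DRT; all names invented): the point
        | is that the meaning is a structure you can evaluate against a
        | world, not just a string of tokens.
        | 
        |   # "A farmer owns a donkey"
        |   # ~ exists x, y: farmer(x) & donkey(y) & owns(x, y)
        |   world = {
        |       "farmer": {"giles"},
        |       "donkey": {"eeyore"},
        |       "owns":   {("giles", "eeyore")},
        |   }
        |   
        |   def a_farmer_owns_a_donkey(w):
        |       # model-theoretic evaluation: check the formula
        |       # against the facts of the world
        |       return any((x, y) in w["owns"]
        |                  for x in w["farmer"]
        |                  for y in w["donkey"])
        |   
        |   print(a_farmer_owns_a_donkey(world))  # True in this world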
       | 
       | We still do not understand the human brain function in any
       | substantial way, and it is perhaps a greater mystery of nature
       | than even cosmology, where at least several competing theories
       | have been posed that can explain parts of the evidence.
       | 
       | How are thoughts represented (if that is answerable, it turns out
       | 'Where are thoughts represented?' has proven to be a meaningless
       | question due to the distributed nature of human memory)? What is
       | consciousness? What is a conscience? How do consciousness and
       | intention emerge from materials that are not alive and that have
       | neither consciousness nor intention? How to implement approximate
       | models? (A lot of work to do!)
        
         | jrm4 wrote:
         | Thank you. This all seems very simple to me: So-called AI is
         | mostly not different from books, movies, videogames etc. No,
         | they don't "think" and they are not "intelligent."
         | 
          | But that in no way precludes an instance of it being mind-
          | blowing and world-changing. Books, etc. do that, too.
        
           | tiborsaas wrote:
           | Can a book drive a car by attaching some sensors to it?
           | 
           | AI models definitely have intelligent aspects to them since
           | they perform tasks we can't write algorithms for manually in
           | a straightforward way.
           | 
           | But a movie or a book is nothing like an AI model, they don't
           | process information, they are static. AI models can react to
           | numerous inputs in various ways.
        
           | [deleted]
        
         | mcguire wrote:
         | Ah, but the question remains: Are _you_ intelligent? :-)
         | 
         | How do consciousness and intention emerge from materials that
         | _are_ alive and that have neither consciousness nor intention?
        
         | AnIdiotOnTheNet wrote:
         | > As an AI professor, I've always held that machines are NOT
         | intelligent (I am prepared to change my position on the day my
         | computer asks me anything surprising that I didn't program it
         | to).
         | 
         | While granting you your seemingly arbitrary metric for
         | 'intelligence', one is forced to wonder to what extent AI
         | systems of today are even given a means by which to ask such
         | questions if they did, in fact, have them. Take AlphaGo, for
         | instance, which can consistently beat the greatest living grand
         | masters of the game. Does it have any means of interaction by
         | which it could pose an unexpected question?
        
           | mannykannot wrote:
           | One can reasonably ask whether the techniques that produced
           | AlphaGo could produce a program that asks unexpected (yet
           | pertinent) questions. I'm not sure, but I think we can say it
           | has not been demonstrated yet.
        
           | interstice wrote:
           | Is it the case that playing the games are effectively a way
           | of asking and answering questions, specifically 'do I win if
           | I do this'?
        
             | AnIdiotOnTheNet wrote:
             | That wouldn't qualify as 'unexpected' in any way. The
             | system's ability to express any questions it might have is
             | severely curtailed by the limitations of its inputs and
             | outputs.
        
               | interstice wrote:
               | And yet the go games had some very unexpected and
               | creative plays, so if the inputs and outputs were
               | language based are we simply moving the goalposts for
               | what is considered 'unexpected'?
        
         | Kranar wrote:
         | I think your position couples intelligence with consciousness a
         | bit too tightly. I can imagine a goldfish is conscious but not
         | intelligent and I can imagine a very powerful supercomputer to
         | be intelligent but not conscious.
         | 
         | The prerequisite to change your position can be satisfied by
         | having a computer randomly generate a question, which I don't
         | think would be an example of consciousness or intelligence.
         | Furthermore even as a human (and one who is hopefully
         | intelligent), I would not go so far as to say that I'm not
         | programmed. Almost all of my opinions are programmed, the
         | language I speak didn't just fall from out of the sky but was
         | taught to me, my preferences are almost certainly due to
         | programming and I am certain if I grew up in North Korea they'd
         | be different.
         | 
         | All this to say that consciousness can be independent of
         | intelligence, and both of them can be programmed.
        
           | nescioquid wrote:
           | As a lay-person, I was persuaded by Roger Penrose's argument
           | that intelligence presupposes understanding, and that
           | understanding is not computational. I had also finished
           | reading a bunch of phenomenology before I read Penrose, so I
           | was probably primed for that sort of an argument to be
           | persuasive.
        
             | hackinthebochs wrote:
             | But why think understanding is not computational? It
             | certainly enables a lot of computational behaviors, its
             | effects can be modeled by computation. What power does
             | understanding endow a system that a priori is beyond
             | computation to capture?
        
             | mcguire wrote:
             | The difficulty there is, how do _you_ understand things?
             | "You" are mostly a chemical process, and I don't think
             | there's much non-computational about chemistry. Penrose, as
             | I understand him, answers "quantum mechanics", which I
             | think is (a) kicking the can down the road rather than
             | answering the question, and (b) problematic in its own
             | right---the experiments supporting Bell's theorem seem to
             | imply that QM and thus "understanding" are inherently
             | random, right?
        
           | servytor wrote:
           | 'Consciousness' is just a synonym for self-aware. Goldfish
           | are not self-aware.
        
             | optimalsolver wrote:
             | How do you know?
             | 
              | Another fish, the cleaner wrasse, can apparently
              | recognise itself in the mirror, hinting at some form of
              | self-awareness.
             | 
             | https://www.nationalgeographic.com/animals/article/fish-
             | clea...
        
             | Kranar wrote:
             | Stating that I can imagine X to be Y is not a statement
             | that X is Y, only that if X is Y then nothing changes about
             | my model of either X or Y.
             | 
             | That said, I'm not able to verify from a brief search for
             | consciousness and self-awareness that the two are synonyms.
             | The two seem to be related but are treated differently from
             | one another. Furthermore it's not even clear whether
             | goldfish are conscious or self-aware. Seems like it's an
             | open question.
        
             | jarpschop wrote:
             | A being is conscious if and only if it feels like something
             | to be that being. What you're referring to is self-
             | consciousness. Goldfish are most surely conscious but not
             | self-conscious.
        
         | kkoncevicius wrote:
         | > How do consciousness and intention emerge from materials that
         | are not alive and that have neither consciousness nor
         | intention?
         | 
          | I think the question should first be not about "how", but
          | rather: "does consciousness emerge from the materials?"
         | 
         | Most cultures in the past have maintained ideas about the
         | duality between spirit and matter. Nowadays we are so adept at
         | manipulating matter we always explicitly assume that everything
         | must be material in one way or another.
         | 
          | But here is this consciousness thing that does not give in.
          | And there is no angle from which it can ever give in. If
          | somebody creates a machine and claims it to be conscious,
          | nobody can test whether that is so. And further - nobody can
          | even test whether the inventor is conscious himself.
        
           | marginalia_nu wrote:
            | It seems to me that consciousness emerges when things
            | arranged in some particular way are acted upon by some
            | principle, which is true even for simple behavior, like a
            | rock falling to the ground.
           | 
           | If you put a sail on a boat, it will catch the wind and move
           | across the water. It's not the sail alone that moves the
           | boat, nor the wind, but their interplay. A sail without wind
           | doesn't get moved, and a wind without anything to catch it
           | doesn't move anything.
           | 
            | It may be a bit of an unintuitive conclusion, but I think
            | we are catching consciousness like a sail catches the
            | wind, and this is the same "wind" that we call the forces
            | of nature, which we can only perceive as patterns in how
            | things around us appear to change.
        
           | filippp wrote:
           | I think neuroscience could in principle show that phenomenal
           | consciousness is an illusion, by giving a full account in
           | terms of the brain of why it seems to us that we're
           | conscious. Whether it will is another question.
        
             | Scarblac wrote:
             | There'd still be a "me" there who would be having that
             | illusion.
             | 
             | Regardless of whether we know exactly how it works or not,
             | I have a subjective experience.
        
           | Scarblac wrote:
           | We do know that if we add chemicals, say we drink some amount
           | of alcohol, that can alter our consciousness. That wouldn't
           | happen if it were completely nonphysical.
        
           | chermi wrote:
           | *Edited because I can't type*
           | 
           | I do not understand this argument.
           | 
            | I'm going to use the word magic for whatever you're
            | calling this
           | non-"materialistic" spiritual explanation. I'm not trying to
           | be a jackass, that's truly the best word I can find for it. I
           | don't think magic is a bad thing, I just don't think you
           | should summon magic until you need to.
           | 
           | Magic is not an explanation, it's a non-explanation. It's
           | non-falsifiable (blah-bity Popper, yeah yeah).
           | 
            | Our adeptness at manipulating matter is not the origin of
            | the desire (at least mine) to find a 'materialistic'
            | explanation for something. It's the desire to have a
            | predictive (read: useful) model of that thing, even if the
            | model is not
           | complete. Once you poke deep enough at any model you will
           | find that there is some hand-wavy "magic" at the bottom. That
           | is, there is some part that we don't know yet, either because
           | we can't do the math, because we can't measure due to
           | difficulty of experiment, or because of fundamental bounds
           | where maybe you might be justified in starting to invoke some
           | magic.
           | 
           | Despite best selling books about some quantum mysticism
           | bullshit, I have yet to see an inkling of evidence for
           | invoking magic for consciousness. From a physics perspective,
           | biological systems are freaking hard, we have no reason to
           | expect them to crack easily. And that's without us
           | (physicists) understanding nearly enough about biology!
        
             | simplestats wrote:
             | I don't think this line of dismissal necessarily fits. If
             | we ended up having to expand physics to include some new
             | (today seen as mystical) property of the universe, then you
             | are essentially wrong here. With QM (which I doubt is
             | related to consciousness) we had to add unmeasurable
             | quantum phase to everything and accept certain
             | previously-"nonphysical" new properties of systems.
              | Similarly with other revolutionary theories.
              | 
              | Magic can be disproven by explaining the phenomenon with
              | a conventional physical theory.
        
               | chermi wrote:
               | I don't think I follow? What am I dismissing?
               | 
                | I don't quite understand the QM analogy. QM arose to
                | explain new observations from more detailed
                | measurements. Consciousness, intelligence and all
                | things brain have been (to an extent) known
                | 'properties' that are difficult to explain (from first
                | principles) for quite some time. The more we probe,
                | the more questions we answer, and in the process we
                | reveal more detail, leading to more detailed
                | questions.
               | 
               | My qualm is that nowhere along this trajectory have I
               | ever seen any need to invoke magic. It's a complex, many-
               | body system with lots of noise, many relevant interacting
               | length and time-scales that are difficult to cleanly
               | separate, model, observe and probe. It's hard, but it
               | seems we're making progress. Maybe it will take CERN,
               | LIGO or ISS scale endeavors, I don't know.
               | 
                | It's not at all similar, IMO, to how QM arose.
               | 
               | But I'm not a neuroscientist nor a historian of science.
        
             | kkoncevicius wrote:
             | > I do not understand this argument.
             | 
             | The unfortunate thing is that, I think, I understand your
             | side. But something happened, I read some book or heard
             | some talk that placed a seed of doubt in my mind. At first
             | it was just a seed and I oscillated back and forth between
             | being 100% materialist and between entertaining the
             | proposition that there is something beyond matter in this
             | world. But as time went on I began to shift towards the
             | "non material" interpretation for consciousness more and
             | more.
             | 
             | It's not an explanation, I agree. But there is nothing to
             | explain. Me claiming that consciousness might not arise
             | from matter is not an attempt to explain it. I just see no
             | way how it can be investigated in material terms. If you
             | see a person on the street there is no way you can tell if
             | he or she is conscious. And I don't see a possibility of
             | there ever being a way.
             | 
             | Sadly, I don't know what started this doubt in me, so I
              | cannot share it with you. You brought up physicists for
              | some reason, and I know a few, like Schrodinger, who
              | thought about consciousness and came to the same
              | conclusion. Here is his quote: "Consciousness cannot be
              | accounted for in physical terms. For consciousness is
              | absolutely fundamental. It cannot be accounted for in
              | terms of anything else." I haven't read about Max
              | Planck, but I've heard he had similar views.
             | 
              | If I had to guess, my starting point was a book called
              | "The Problems of Philosophy" by Bertrand Russell [1]. He
              | tries
             | to answer the question "are there any statements to which
             | all reasonable men would agree". And one of the conclusions
             | the book comes to is that you cannot claim anything as
             | objective without assumptions, and that the most objective
             | thing is your subjective experience.
             | 
             | For example if you saw a cat, turned your head, and then
             | looked back at the cat - was the cat there when you were
             | not looking or was it gone? In the book Russell
             | convincingly demonstrates that if someone maintains the cat
             | was gone when you were not looking - you cannot prove
             | logically to that person that he is wrong. In other words -
             | you would not be able to start with his worldview and lead
             | him to contradictions. Hence his "theory" about cat
             | disappearing cannot be disproven without assumptions about
             | how material objects behave.
             | 
             | Not sure if this is helpful, but I wanted to reply.
             | 
             | [1]: https://www.gutenberg.org/files/5827/5827-h/5827-h.htm
        
               | chermi wrote:
               | Thanks for the thoughtful reply. I'm leaving this mostly
               | to acknowledge it and mark so I can come back later with
               | a more thorough response.
               | 
               | I don't think we strictly disagree.
               | 
               | For one, I agree that, fundamentally, as the observers,
               | everything is "filtered" through us whether we like it or
               | not. So there's no getting around some notion of
               | 'subjectivity' or 'observer' bias, at least not that I
                | know about. But that is not at all incompatible with
                | an improved (predictive, quantitative, model-based)
                | understanding of the brain and consciousness. Look how
                | much progress we've made!
               | 
               | BTW I'm on my phone and forget why we're talking
               | specifically of consciousness and not AI? Is that my
               | fault? My bad.
               | 
                | Oh, I brought up physicists so my perspective was
                | clear and you could understand my bias and my
                | ignorance of actual neuroscience, CS and biology.
        
               | [deleted]
        
       | jonplackett wrote:
        | Highly recommend A Thousand Brains by Jeff Hawkins if you're
        | interested in this:
       | 
       | https://numenta.com/blog/2019/01/16/the-thousand-brains-theo...
       | 
        | He basically argues we all have thousands of little but
        | interacting models of all sorts of things going on in our
        | brains at the same time. He calls them reference frames, and
        | it's those that create intelligence.
       | 
       | 'Understanding' would come naturally out of having those.
       | 
        | Fascinating book which I'm probably explaining much less well
        | than he does.
        
         | andyjohnson0 wrote:
         | I've not read the book, but from your description it maybe has
         | some similarity to Marvin Minsky's "Society of Mind". Mind and
         | intelligence as the emergent behaviour of many
         | cooperating/competing systems.
        
           | jonplackett wrote:
           | yes i think he talks about that. The difference is he's also
           | looking for (and finding) the 'how' in physical brain
           | structure terms, rather than just theory alone.
        
       | continuational wrote:
       | This is a great quote:
       | 
       | > When AI can't determine what 'it' refers to in a sentence, it's
       | hard to believe that it will take over the world.
        
       | swframe2 wrote:
       | A software and hardware tool that can design and build another
       | software and/or hardware tool to determine the cause and effect
       | rules of any real world activity.
        
       | carapace wrote:
       | Prediction.
       | 
       | Setting aside the metaphysical questions of subjective "meaning"
       | and "understanding" what else is there?
       | 
       | What a system can predict is the measure of its "understanding",
       | surely?
        
         | idiotsecant wrote:
         | Does a mechanical bomb sight 'understand' ballistics and
         | aerodynamics?
        
           | carapace wrote:
           | By this definition, yes, of course. That's a great example.
        
       | vbphprubyjsgo wrote:
       | Nothing. Once it has enough neurons it's no different than a
       | human brain in terms of what it can understand. Once it has more
       | it can understand more.
        
       | tomthe wrote:
       | My understanding of "Understanding":
       | 
       | Imagine a photo of a written poem:
       | 
       | An image-processing program can "understand" the digital image:
       | It can read the jpg, change the picture completely (e.g.: change
       | the colors slightly), without changing the meaning one level up.
       | But it doesn't understand the characters or words.
       | 
        | An OCR program can read the image and "understand" the
        | characters (or the textual representation). It can change the
        | representation completely (save it as UTF-8 or whatever)
        | without changing the meaning one level up. But it doesn't
        | understand the language.
        | 
        | GPT-3... well, let's go directly to humans.
       | 
       | A human can read the text and understand the words and understand
       | their meanings and what the sentences say. Another human _really_
       | understands the poem and the subtext another level up.
       | 
       | I think understanding always works on different levels and is a
       | part of communication.
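        | 
        | The "change the representation without changing the meaning
        | one level up" part is easy to make concrete (a sketch; file
        | names made up):
        | 
        |   from PIL import Image
        |   
        |   # Level 1: re-encode the image; the characters in the
        |   # photo, and the poem, are untouched.
        |   Image.open("poem.jpg").save("poem.png")
        |   
        |   # Level 2: re-encode the extracted text; the words, and
        |   # the poem, are untouched -- only the bytes change.
        |   with open("poem.txt", encoding="utf-8") as f:
        |       text = f.read()
        |   with open("poem_utf16.txt", "w", encoding="utf-16") as f:
        |       f.write(text)
        |   
        |   # The levels above that (words, sentences, subtext) are
        |   # where "understanding" in the everyday sense starts.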
        
         | pinouchon wrote:
         | Interestingly, the poet with the highest level understanding
         | can be clueless about UTF-8 or what a pixel is
        
         | criddell wrote:
         | Sounds like you are describing the treachery of images.
         | 
         | Ceci n'est pas un poeme.
        
       | 6gvONxR4sf7o wrote:
       | Good article. I think winograd/winogrande are super clever (and
       | also kind of a "fun" idea).
       | 
       | My personal take is that blanket understanding is too hard of a
       | task to define, so we ought to cheat and talk about types of
       | understanding. In my mind, understanding a thing means not only
       | that you can answer, but also that you can justify your answer.
       | So different kinds of understanding point to different kinds of
       | justifications.
       | 
       | In math classes, you'll be asked not only to state whether a
       | thing is true or false, but also to _show_ that it's true or
       | false, giving an answer as well as a proof. In a literature
       | class, you don't just make a point, you also have to support it
       | in natural language. Same with science classes, supporting things
       | with data and logic.
       | 
       | The closest we have right now in ML (widely) is statistical
       | measurements on holdout data. It's 99% accurate on other things,
       | so it's 99% likely to be right on this too, if this was sampled
       | from something just like the holdout. We also (less widely) have
        | a little cottage industry of post-hoc explanations that try to
        | explain models' predictions.
       | 
       | I'd love to see models that can do better than post-hoc
       | explanations. I want a model that "understands in terms of
       | predicate logic" that can spit out an answer with a checkable
       | proof. Or "understands in terms of a knowledge graph" or
       | "understands in terms of a set of human anatomy facts" or
       | "understands in terms of natural language arguments" or any
       | number of things that can spit out an answer as well as a
       | justification.
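        | 
        | A toy version of that contract, with everything invented for
        | illustration: the "model" must return an answer plus a
        | derivation, and an independent checker replays the derivation
        | against known facts and rules. It doesn't matter how the
        | answer was produced; only the justification is trusted.
        | 
        |   FACTS = {"bird(tweety)"}
        |   # grounded rule: bird(tweety) -> can_fly(tweety)
        |   RULES = {"bird(tweety)": "can_fly(tweety)"}
        |   
        |   def check(claim, derivation):
        |       known = set(FACTS)
        |       for step in derivation:
        |           if step in known:
        |               continue
        |           premises = [p for p, c in RULES.items() if c == step]
        |           if not any(p in known for p in premises):
        |               return False, "unjustified step: " + step
        |           known.add(step)
        |       if claim in known:
        |           return True, "ok"
        |       return False, "claim never derived"
        |   
        |   proof = ["bird(tweety)", "can_fly(tweety)"]
        |   print(check("can_fly(tweety)", proof))   # (True, 'ok')
        |   print(check("can_swim(tweety)", proof))  # rejected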
       | 
       | Just asking for blanket understanding means we have to define
       | blanket understanding, but there's a lot of limited understanding
       | that's still better than what we have today.
        
         | rdedev wrote:
          | The other good thing about requiring proofs is that you can
          | specify exactly where the proof fails, and the one giving
          | the proof can use this feedback to correct their beliefs.
         | 
         | I would actually be pretty okay with a model that can do that.
         | Take feedback and correct themselves without resorting to
         | retraining. It would still be pretty far away from a general AI
         | but much much better than a black box.
        
         | harperlee wrote:
          | Exactly. Understanding something requires holding a model
          | (or several) of the thing in the mind and being able to work
          | with that model to some extent. That's why we don't see
          | statistical tricks as understanding - as soon as trivial
          | errors show that operating on such a model is not feasible,
          | we reject it. And that's why Winograd works: it establishes
          | a symmetry that requires a base semantic model outside of
          | the provided text.
        
       | stakkur wrote:
       | For AI to be truly artificial intelligence, shouldn't it be able
       | to define when it understands?
        
       | typon wrote:
       | I've come to believe that the answer will be provided by the
       | "free market". When AIs start to replace humans for tasks that
       | require "intelligence" (how we currently define it for humans),
       | then AI will have achieved "understanding". Sure there will be
       | hype driven companies that replace humans with AI for PR
       | purposes, but eventually those will flatten out. Once I start
       | receiving phone calls from AI telemarketers, have a decent
       | conversation with them and can't tell that I just talked to an
       | AI, then AI will have achieved understanding. And so on for other
       | domains in every day life.
        
       | jonplackett wrote:
       | Anyone who has kids and teaches them things will know AI learns
       | and 'understands' very differently.
       | 
       | I can say something like 'a tiger is just a lion with stripes' to
       | a 3 year old and they now 'understand' what a tiger is almost as
       | well as if they saw a picture of one. They could definitely
       | identify one from a picture now.
       | 
        | This kind of understanding won't work with an AI because we
        | don't understand what characteristics it has latched onto when
        | identifying a lion. For all we know it's that the background
        | of each lion image it's been trained on has a blue sky. Or
        | that the tigers are all looking at the camera.
       | 
        | I think the ability to pick apart what you know and learn new
        | things by reasoning about that knowledge is the key to whether
        | understanding is taking place.
        
         | yboris wrote:
         | I'm curious, when you say "This kind of understanding won't
         | work with an AI", do you mean currently, or in principle, even
         | in the future?
         | 
         | note: children's brains come pre-loaded with so much stuff when
         | we are born (we are not "blank slates").
        
           | jonplackett wrote:
           | What I mean is AIs aren't really built with the goal of
           | 'understanding' anything currently. They are awesome at
           | individual tasks but they don't have the kind of common sense
           | a person can use to reason with and build up an understanding
           | of how pieces of knowledge fit together.
           | 
           | Eg. Driverless cars can identify a car, or a motorbike or a
           | cyclist and maybe work out a trajectory for it. But they
           | don't understand that a bike is a person on top of a metal
           | frame and that person is made up of a head and body and
           | limbs. And if that head is looking away from them it can't
           | see them coming.
           | 
           | For me, that's understanding. Deconstructing and
           | reconstructing knowledge to come to conclusions that add to
           | your knowledge.
        
             | yboris wrote:
              | Thank you for the response. A bit of pushback: on the
              | driverless cars example, you're right that AI models
              | abstract away needless complexity, but so do we; as
              | humans, we can live without understanding that bodies
              | are made up of cells (or how muscles move during a
              | tennis match). If we want AI models to include a
              | person's gaze in their calculations, we can make it
              | happen. From an abstract-enough perspective, the AI
              | model will do enough that whether it "understands" may
              | become irrelevant.
             | 
             | Language understanding is much harder, because words are
             | _about_ stuff in the world. Just like when a toddler says
             | "love" we know they don't fully understand what they mean,
             | AI won't have the capacity to _mean_ "love" unless it has a
             | lot more it "understands" along the way. But it feels like
             | it could in the near few decades "understand" enough about
             | "duck" to mean it when it says "I see a duck".
        
         | addsidewalk wrote:
         | Pick apart is what they're doing.
         | 
         | The systems I've worked in immediately abstract strings, shapes
         | in images, etc, into the mathematical shape and gaps between
         | edges.
         | 
          | If you dig into an arbitrary array in a variety of places,
          | the fields contain coordinates, not "Hi Mom, kids are ok,
          | blah blah".
         | 
         | It's measuring the white space in a thing, where everything but
         | the feature you're currently interested in is white space;
         | what's between the features I want?
         | 
         | Then comparing that to results of other data structures that
         | had the same white space measuring applied.
         | 
          | Does it not do what you said, or do you not want to believe
          | it?
         | 
         | I think the issue is the companies being incredibly
         | disingenuous about how this all works.
         | 
         | At the root is elementary information theory:
         | https://www.amazon.com/dp/0486240614
         | 
         | Formal language is 5,000 years old. Human intuition for
         | quantitative assessment of hunger, warmth, supply stocks, tool
         | building, etc is much older. IMO human language is noise
         | obscuring obviousness. It's the desktop metaphor of cognition.
         | "Please internalize my language versus observe for yourself."
        
         | bpizzi wrote:
         | > I can say something like 'a tiger is just a lion with
         | stripes' to a 3 year old and they now 'understand' what a tiger
         | is almost as well as if they saw a picture of one. They could
         | definitely identify one from a picture now.
         | 
          | Assuming the 3 year old already knew what a lion looks like,
          | and could point at 'things with stripes' and 'things without
          | stripes'.
         | 
          | I think that a model that can already recognize lions and
          | stripes separately should be able to tag a tiger's picture
          | as a 'lion with stripes', no?
        
           | jonplackett wrote:
           | Maybe... but this is just one very easy example and also
           | using something very obvious and visual.
           | 
           | I could also say "a Cheetah is like a lion but it's smaller
           | and has spots and runs a lot faster. And a leopard is like a
           | lion but smaller and can climb trees and has spots."
           | 
           | I could probably start with a house cat and describe an
           | elephant if I wanted to and I'll bet the kid would work it
           | out.
           | 
           | The ability to take apart and reassemble knowledge is what
           | I'm talking about here, not just add two simple bits of
           | information together.
        
             | justinpombrio wrote:
             | > I could also say "a Cheetah is like a lion but it's
             | smaller and has spots and runs a lot faster. And a leopard
             | is like a lion but smaller and can climb trees and has
             | spots."
             | 
             | The OpenAI website is unresponsive at the moment, so I
             | can't _actually_ demonstrate this, but you could totally
             | tell GPT-3 that, and it would then make basic inferences.
             | For example, saying  "four" when asked how many legs a
             | cheetah has, or guessing a smaller weight for a cheetah
             | than a lion when asked to guess a specific weight for both.
             | Not perfectly, but a lot better than chance, for the basic
             | inferences.
             | 
             | (You wouldn't actually tell it "a Cheetah is like a lion
             | but..." because it already knows what a Cheetah is. Instead
             | you'd say "a Whargib is like a lion but ...", and ask it
             | basic questions about Whargibs.)
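              | 
              | (If anyone wants to poke at this offline while the site
              | is down, here's a rough stand-in using GPT-2 -- expect
              | much weaker inferences from the smaller model:)
              | 
              |   from transformers import pipeline
              |   
              |   gen = pipeline("text-generation",
              |                  model="gpt2")
              |   prompt = (
              |       "A Whargib is like a lion but "
              |       "smaller, has spots, and runs "
              |       "a lot faster.\n"
              |       "Q: How many legs does a "
              |       "Whargib have?\nA:"
              |   )
              |   out = gen(prompt, max_new_tokens=8,
              |             do_sample=False)
              |   print(out[0]["generated_text"])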
        
           | mannykannot wrote:
           | Not necessarily, if the stripes prevent tigers from scoring
           | highly on the lion measure.
           | 
           | Generalizing from what is formally insufficient information
           | is something that humans are quite good at (though obviously
           | not infallibly.)
        
         | ShamelessC wrote:
         | I've done a fair bit of work with multimodal deep learning and
         | I am fairly confident that a DALL-E, CLIP, or NUWA architecture
         | would output/classify those phrases accurately without being
         | trained explicitly on images of Tigers.
         | 
         | I see your point however.
        
           | jonplackett wrote:
           | I'd be interested in where to look for more info on that.
           | 
           | Do you think it could work on anything more complex?
           | 
           | As I said in comment below I reckon I could make a much more
           | elaborate explanation and still have the kid get it.
        
             | ShamelessC wrote:
             | https://openai.com/blog/dall-e/ shows a decent ability to
             | generalize to previously unseen concepts.
             | 
             | You're correct to consider the complexity of the phrase and
             | just how good humans are at this sort of thing without
             | needing much "training". For now, concepts that aren't
             | explicitly in the training set are effectively composed
             | from those which are. This can lead to some bizarre and
             | outright incorrect results, particularly when it comes to
             | counting objects in a scene or with relative positioning
             | between objects (e.g. a blue box on top of a red rectangle
             | to the left of a green triangle) but it's early days and
             | there's lots of progress happening all the time.
        
               | jonplackett wrote:
                | Thanks, this is interesting. In a way it's the
                | opposite thought-direction of what we are talking
                | about.
               | 
               | eg. can it look at an avocado shaped chair and recognise
               | it as a chair in the shape of an avocado - for me that
               | would display a lot more understanding of the concept of
               | 'chair' and 'avocado' than being able to produce an image
               | of the phrase 'a chair shaped like an avocado' - but
               | maybe the same process must be happening in there
               | somewhere to make this possible? What do you think?
        
               | ShamelessC wrote:
                | https://openai.com/blog/clip/ CLIP is the corollary
                | model, created purely for classification purposes
                | rather than generation as in DALL-E, and is quite
                | impressive across a
               | range of tasks. Give it an image and a caption, and in
               | return you get a score (0.0 to 1.0) telling you how much
               | they match.
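                | 
                | Roughly, in code (the model name is the public
                | checkpoint on the HuggingFace hub; the image path is
                | made up):
                | 
                |   import torch
                |   from PIL import Image
                |   from transformers import (CLIPModel,
                |                             CLIPProcessor)
                |   
                |   name = "openai/clip-vit-base-patch32"
                |   model = CLIPModel.from_pretrained(name)
                |   proc = CLIPProcessor.from_pretrained(name)
                |   
                |   img = Image.open("avocado_chair.jpg")
                |   caps = ["a chair shaped like an avocado",
                |           "an avocado",
                |           "an office chair"]
                |   inputs = proc(text=caps, images=img,
                |                 return_tensors="pt",
                |                 padding=True)
                |   with torch.no_grad():
                |       out = model(**inputs)
                |   # one score per caption; softmax turns
                |   # them into relative match probabilities
                |   print(out.logits_per_image.softmax(dim=1))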
               | 
               | I think it is more in line with your premise. Others have
               | taken CLIP and combined it with frozen language models
               | (GPT2) to create automatic captioning models that are
               | very impressive.
               | 
               | edit:
               | 
               | To try to address your question about whether or not
               | actual semantic composition occurs, I think the answer is
               | "yes" but it would be challenging to convince you this is
               | true without going into details of the "self attention"
               | mechanism which allows both methods to work. The short
               | version is that these networks are able to find meaning
               | in extremely high-dimensional problems by having a
               | mechanism specifically tasked with learning positional
               | statistics of the training data. In language this refers
               | to e.g. how often the word "pillow" is directly next to
               | the word "fort". In vision, this similarly refers to how
               | often e.g. trees are positioned next to gift-wrapped
               | presents.
               | 
               | That's quite simplified but I hope that makes sense to a
               | degree!
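                | 
                | And for the curious, the core of that "self attention"
                | mechanism is just scaled dot-product attention; a
                | bare-bones sketch (single head, no learned
                | projections):
                | 
                |   import torch
                |   
                |   def attention(q, k, v):
                |       # softmax row i says "how much does token i
                |       # attend to token j"; in a trained model these
                |       # weights reflect the co-occurrence/positional
                |       # statistics described above
                |       d = q.shape[-1]
                |       scores = q @ k.transpose(-2, -1)
                |       weights = (scores / d ** 0.5).softmax(-1)
                |       return weights @ v
                |   
                |   x = torch.randn(5, 16)  # 5 tokens, 16 dims
                |   print(attention(x, x, x).shape)  # (5, 16)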
        
       | temptemptemp111 wrote:
       | It doesn't mean anything; people just don't understand what
       | 'concepts' are anymore because they're so delusional. For AI to
       | understand something means its human inventor / implementer
       | understood it, potentially. The human understood a concept - not
       | the actual thing - and that concept is what you call immaterial.
       | You can point to code or output, but that is related to the
       | concept. "Understanding" is when you stand under a concept -
       | though you could always step out from under it in the case that
       | you lose your understanding or it does not apply - etc. This
        | inability of people to think is becoming hilarious. The
       | metaverse is going to prevail for these damaged people but means
       | nothing to those living in the real world.
        
       | sabhiram wrote:
       | What does it mean for a child to understand? For a baby? A dog? A
       | command line application?
       | 
       | Understanding implies comprehension of some input, to influence a
       | future state. Surely my stateful database understands requests
        | that come to it. It, however, will never surprise me with behavior
       | that I would (should) not expect. I suppose if you "understood"
       | the human machine and mind well enough, it would be possible to
       | predict the actions it will carry out.
        
       | hans1729 wrote:
       | Mandatory reference to Robert Miles' content on AI safety
       | problems, alignment etc.:
       | https://www.youtube.com/c/RobertMilesAI/playlists
        
       | mwattsun wrote:
       | I've been using Visual Studio 2022 recently which has AI driven
        | code prediction built into it. Sometimes it predicts the next
        | thing I will type, and I merely have to hit tab once or twice to
        | accept it. At no point am I tempted to think Visual Studio
        | understands my code, because it's just code itself.
       | 
        | The first time I played a chess game against a computer was back
        | in the early 1980s. While it beat me I felt an eerie, sentient
        | "presence" in the machine. I didn't know then about chess code, so
        | it was easier for me to anthropomorphize the machine (but it was
        | the main reason I became interested in computers).
       | 
        | A computer "understanding" the difference between "how do I melt
        | a block of ice" and "how do I melt my lover's icy heart" would
        | come down to looking at the context and the relation of the words
        | to each other. The computer might also predict I was sad if I
        | asked the latter question. If I were a non-technical user I might
        | think the computer felt empathy and be amazed by it.
       | 
       | If I came upon a computer that "understands" I would want to
       | determine if it understands like Richard Feynman or if it
       | understands like my dog. My dog operates on a limited set of
       | patterns, so that seems doable, but on the other hand, I've seen
       | videos and heard stories of dogs exhibiting inventive and
        | creative behavior that is unexpected. One such case is dogs that
        | get lost and manage to find their way home from thousands of
        | miles away.
       | 
       | tldr; I'm jaded. I know it's buggy code all the way down with
       | computers.
        
       | chrischapman wrote:
       | Machines don't learn. Living things learn. Machines don't
       | understand. Living things understand. Machines 'do' algorithms.
       | Living things 'do' use-cases.
       | 
       | Algorithm Definition:
       | 
       | A sequence of actions that yields a result.
       | 
       | Use-case Definition:
       | 
       | A sequence of actions that yields a result of value to a user.
       | 
       | The difference between the two is 'of value to a user'. To me,
       | the line between algorithm and use-case is the line between
       | unconsciousness and consciousness. That line pivots around the
       | ability to 'value' something. I doubt we will ever see the I in
       | AI until we build something that can value a result in the same
       | way that you and I do.
       | 
       | We need new words to describe what machines do. Using 'learn' or
       | 'understand' seems like anthropomorphism. It's weird that we
       | glibly anthropomorphise when talking about machines but prohibit
       | it when talking about living things. Almost all of our qualities
       | have been inherited from other living things. It seems to me that
       | we should _always_ anthropomorphise when talking about living
       | things and _never_ anthropomorphise when talking about machines.
       | And yet, we always seem to do the opposite.
       | 
       | Until we can explain how a machine can 'value' something in the
       | same sense that humans do (or chimps or ducks or caterpillars
        | do), we should avoid anthropomorphic words like 'learn' and
        | 'understand', as they misdirect our efforts. However, I have no
       | idea what else to suggest other than to try to explain how a
       | machine can 'value' something.
       | 
       | > IBM's Watson was found to propose "multiple examples of unsafe
       | and incorrect treatment recommendations."
       | 
       | That won't stop until Watson has the ability to 'value' the
       | result of what it is doing. Watson 'does' algorithms. Watson
       | needs to 'do' use-cases. Once it can 'value' a result in the same
       | way we do, it will correct its mistakes.
        
         | JoeAltmaier wrote:
          | Sounds like a bunch of unsubstantiated claims? So if something
          | does learn, then it's a living thing? My AI can learn (see a
          | pattern and repeat it); now it's a living thing? Not sure what
          | to do with that.
        
           | chrischapman wrote:
           | Yep. Totally unsubstantiated. Back of the envelope theory at
           | best.
           | 
           | > learn (see a pattern and repeat it)
           | 
            | How did your AI 'see', and is 'see a pattern and repeat it' a
            | good enough definition of 'learn'? Surely to 'learn' also
            | means to 'understand'. What did your AI 'understand'? I doubt
            | your AI actually learned anything. It attained zero knowledge
            | of the kind a living thing obtains. It may have stored a
            | result in a database, but did it actually understand
           | anything?
           | 
           | These are genuine questions as I have no professional
           | knowledge of AI.
        
         | md2020 wrote:
         | This just sounds like you defined "value" to mean "something
         | only living things do and that machines don't do", and then
         | said the reason machines can't learn is because they can't
         | value. Seems like circular reasoning. I think if you're putting
         | humans, chimps, ducks, and caterpillars in the category "can
          | value", machines still belong on that axis. They're far below
          | the caterpillar for now, but they're there.
        
           | chrischapman wrote:
           | The ability to 'value a result' seems to me to be linked to
            | consciousness. How do machines belong on that axis? Do you
           | really think machines can value something in the same way you
           | and I do? I would assume they can't (and may never). You
           | could probably code an algorithm that simulates 'valuing a
           | result', but I'm sceptical that the machine would actually
           | value the result in the way you and I would. If it did, that
           | would be astonishing as it would indicate (to me) that it's
           | alive!
        
       | tested23 wrote:
        | What does it mean for humans to understand? There are many times
        | in the past when I thought I understood something, and then I
        | grew older and saw the holes.
        
         | PeterisP wrote:
         | One definition of "understand" would be "have or obtain an
         | internal model of Thing-or-process-to-be-understood which is
         | close enough to reality that it allows you to reasonably
         | predict what will happen and make effective decisions regarding
          | that thing". It does not have to be a _perfect_ model - if it
          | did, then I'll be the first to say that I don't understand
          | anything according to that definition - but it's a bit more
         | tricky than it sounds on the surface. For example, for a self-
         | driving car, "understanding pedestrians" according to this
         | definition does require an ability to predict how they will
         | behave and thus "know" what factors affect that - that the
         | likelihood of a kid suddenly springing towards the middle of
         | the road is highly dependent on the presence of a ball or a pet
         | in that direction; that certain wobbly and jagged movements are
         | indicators that the person might behave in a less predictable
         | manner than the average person, etc, etc; and if a system
         | _does_ have this practical knowledge (measured by how well it
          | is able to effectively apply it for its goals) then I'd say
         | that it does have some understanding.
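          | 
          | A toy reading of that definition in code (everything here is
          | made up for illustration: a hidden "real" process and an
          | agent's imperfect internal model, with prediction error as a
          | crude proxy for how much the agent "understands" the process):
          | 
          |     import random
          |     
          |     def real_process(x):
          |         # The world: noisy doubling. The agent never sees this code.
          |         return 2.0 * x + random.gauss(0, 0.1)
          |     
          |     def internal_model(x):
          |         # The agent's model: close enough to predict and act on.
          |         return 2.0 * x
          |     
          |     random.seed(0)
          |     xs = [random.uniform(0, 10) for _ in range(1000)]
          |     err = sum(abs(real_process(x) - internal_model(x)) for x in xs) / len(xs)
          |     print(f"mean prediction error: {err:.3f}")   # small => workable "understanding"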
        
         | Verdex wrote:
         | Yeah, I feel this question is important to understand before we
         | worry about what it means for the AI to understand.
         | 
          | My thought is that we've got three types of "understanding":
          | 
          |     1) social understanding
          |     2) intuitive understanding
          |     3) structural understanding
         | 
          | Social understanding is something the society we live in knows,
          | but the individual only knows insofar as they are doing
          | something to fit in, or via peer pressure. So for example, some
          | high-latitude countries eat fish for breakfast. Supposedly the
          | statistics show that this helps them be healthier than
          | countries at similar latitudes which do not eat fish for
          | breakfast ... probably because the fish oil offsets the lack of
          | vitamin D from the lack of sunlight for certain parts of the
          | year. However, nobody actually "knows" this. They just eat fish
          | because everyone else eats fish.
         | 
         | Intuitive understanding is anything where we start to use
         | flowery language like "experience" or "gut". You're really good
         | at it, but just giving someone a flow chart isn't good enough.
         | They have to have gone through the experience themselves.
          | Driving is a good example. We make people take a test, but if
          | just giving them diagrams and rules were good enough, then we
          | wouldn't need a test where you actually drive, or requirements
          | for a certain number of hours of supervised driving.
         | 
         | Structural understanding is anything that can be put to rules.
         | So there's a lot of mathematics and algorithm stuff here. A
         | simple example might be playing tic tac toe. The game is simple
         | enough that you can write down a few rules that allow you to
         | never lose.
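          | 
          | As a concrete sketch (using exhaustive minimax search rather
          | than hand-written rules, but likewise guaranteed never to lose
          | from the start of the game):
          | 
          |     # Perfect tic-tac-toe play via minimax; a perfect player never loses.
          |     WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
          |                  (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]
          |     
          |     def winner(board):
          |         for a, b, c in WIN_LINES:
          |             if board[a] != ' ' and board[a] == board[b] == board[c]:
          |                 return board[a]
          |         return None
          |     
          |     def best_move(board, player):
          |         # Returns (best achievable score for player, best move index).
          |         w = winner(board)
          |         if w is not None:
          |             return (1, None) if w == player else (-1, None)
          |         moves = [i for i, cell in enumerate(board) if cell == ' ']
          |         if not moves:
          |             return 0, None                        # draw
          |         other = 'O' if player == 'X' else 'X'
          |         best = (-2, None)
          |         for m in moves:
          |             board[m] = player
          |             score = -best_move(board, other)[0]   # opponent's best is our worst
          |             board[m] = ' '
          |             if score > best[0]:
          |                 best = (score, m)
          |         return best
          |     
          |     board = list(' ' * 9)
          |     board[4] = 'X'                                # say the opponent opens centre
          |     print(best_move(board, 'O'))                  # (0, 0): 'O' forces a draw via a corner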
         | 
         | EDIT:
         | 
         | My categories don't really answer the question, but they do
         | give profiles and categories to look out for.
         | 
          | Social understanding is good because it statistically learns to
          | avoid lethal pitfalls. Say there's a dangerous well in the
          | forest that people fall down and die in. A society might start
          | telling people not to go in the forest because other people go
          | in there and die. However, the society doesn't know why this is
          | good advice.
         | 
         | Intuitive understanding is good because it allows you to
         | quickly statistically learn how to deal with imperfect and
         | chaotic systems while getting good results.
         | 
         | Structural understanding is good because it allows you to break
         | free of the statistics of the previous two understandings. You
         | can get exact results. Also it lets you break free of issues
          | that come from action and consequence being causally distant. A
          | person's intuition might not tell them that dumping toxic waste
          | into the water is a bad idea, because things don't go bad until
          | a lot of waste has already been dumped. Similarly a society
         | might make a similar judgement if the failure is far enough
         | away from the actions that kick it off. However, if you
         | understand the structural relationships between things then
         | you'll have an idea that toxic waste should not be consumed.
        
           | prometheus76 wrote:
           | I would use "experiential understanding" instead of
           | "intuitive understanding", but I think we mean the same
            | thing. I am not sure I agree with your hierarchy, however. If
            | I were faced with a would-be attacker, I would rather have an
            | experiential understanding of martial arts than a "structural
            | understanding", as you put it. In other words, for many
            | domains of interaction with the world, experiential knowledge
            | is far superior to a "structural" or, as I understood what
            | you were saying, a "propositional" understanding of a topic
            | or subject.
           | 
           | Here's another way of putting what I'm saying: when we want
           | to learn about a tree, in the West, our first inclination is
           | to cut it down, categorize/classify the parts, and count the
           | rings. We think we know what a "tree" is at that point. In
           | the East (and I'm learning this perspective from Eastern
           | Orthodox Christianity), if you want to learn about a tree,
           | you plant one. Maybe more than one. Nurture it. Prune it.
           | Fertilize it. Watch it grow. Watch it change with the
           | seasons. Build a treehouse in it for your kids. Watch your
           | daughter get married in the shade of the tree. In other
           | words, instead of dissecting something (which kills the thing
           | itself) in order to categorically "understand" something
           | propositionally, in the East, they focus on having a
           | relationship with something in order to understand it.
        
             | Verdex wrote:
             | It's not a hierarchy, it's just a list. Structural isn't
             | meaningfully better than anything else. It just "works" for
             | different reasons.
             | 
              | Intuitive is often faster to react, and faster to get off
              | the ground and start producing results. So in a fight,
              | intuition is probably going to be better. That being said,
              | supposedly the boxing fight that the movie 'Cinderella Man'
              | was based on involved Braddock analyzing Baer's fighting
              | style and figuring out some footwork that kept him from getting
             | pummeled. There's no reason that structural, intuitive, and
             | social understanding can't all work together to get a
             | result.
        
               | prometheus76 wrote:
               | I misunderstood what you said as a hierarchy because of
               | how you worded your last paragraph, but I would agree
               | with you that synthesizing the different types of
               | knowledge is the best way to interact with the world.
        
       ___________________________________________________________________
       (page generated 2021-12-16 23:01 UTC)