[HN Gopher] Some AI Systems May Be Impossible to Compute
       ___________________________________________________________________
        
       Some AI Systems May Be Impossible to Compute
        
       Author : niccl
       Score  : 12 points
       Date   : 2022-03-31 19:53 UTC (3 hours ago)
        
 (HTM) web link (spectrum.ieee.org)
 (TXT) w3m dump (spectrum.ieee.org)
        
       | kylehotchkiss wrote:
        | I'm not going to pretend to be especially knowledgeable in
        | AI/Machine Learning/Neural Nets, but I read something a few
        | years ago that helped shape my perception of the limits of that
        | tech: AI is not capable of explaining why it makes the
        | decisions it does. This was a big red flag to me in how much I
        | trust things like full self driving - a car is making decisions
        | with a trained cloud of intelligence that can't explain itself.
        | I'm happy to see scientists and mathematicians trying to find
        | the limits of what the tech can do. It's interesting to model
        | it after how a human mind works, but we don't even understand
        | human minds well enough to try to replicate how they learn
        | intelligence. The world is closer to continuous numbers than
        | discrete numbers, etc. AI is still interesting enough to keep
        | building on; Github's Copilot saved me time just yesterday with
        | very helpful autocompletes on some refactoring I was doing.
        | More of that please - but let me be in the captain's seat,
        | deciding whether I want to accept the AI's decision or not.
       | 
       | I wish companies like Tesla would accept this and shift some of
       | the self driving budget to better battery tech. That's the need
       | of the hour with fuel prices rising like they are and global
       | warming looming over us.
        
         | YeGoblynQueenne wrote:
         | >> AI is not capable of explaining why it makes a decision that
         | it does.
         | 
          | That's not true for "AI" in general. It's true for a
          | particular kind of AI system, collectively known as "black
          | box" approaches. Deep neural nets, for example, are a "black
          | box" approach because they can't explain their decisions, as
          | you say.
         | 
          | There are other AI approaches besides deep learning.
          | Recently, a system based on Inductive Logic Programming, a
          | form of machine learning for logic programs, beat 8 human
          | champions at the card game of Bridge:
         | 
         | https://www.theguardian.com/technology/2022/mar/29/artificia...
         | 
         | In Bridge, players must be able to explain their plays to their
          | opponents, and the AI Bridge player in the article above, Nook,
         | was specifically designed to have this ability _and_ play
         | better than human champions.
         | 
          | Btw, lest this be perpetually misunderstood:
          | 
          | AI ⊃ machine learning ⊃ neural networks ⊃ deep learning
        
         | jameshart wrote:
         | Humans are also not able to explain why they do what they do.
         | Yet they are somehow able to drive cars.
        
           | debdut wrote:
           | The point is that they can explain why they are taking a left
           | turn!
        
             | jameshart wrote:
             | Can they?
             | 
             | How exactly does your brain do route planning?
             | 
             | How do you pick which lane to turn down in the parking lot
             | when looking for a space?
             | 
             | Why did you get off at exit 14?
             | 
             | Wait, this isn't even your turn, why did you go left here?
             | 
             | I mean, when you get down to it, why are you even driving
             | to this deadend job?
             | 
             | But yeah, sure, humans can 'explain' their behavior, which
             | is why we can trust them.
        
               | YeGoblynQueenne wrote:
               | >> How exactly does your brain do route planning?
               | 
               | That is not the kind of explanation that is needed in AI
               | systems. When people talk about "explainable AI", they
               | literally just mean systems that can answer the kind of
               | question that a human would be able to answer.
               | 
               | That's because a question that a human cannot answer is
               | very likely to have an answer that a human will either
               | not be able to understand, or will have to work very hard
                | to understand... which is no better than no explanation
                | at all.
        
               | jameshart wrote:
               | I think what people are looking for in 'explainable AI'
               | is: when the AI makes a bad decision, they want to be
               | able to look into the neural network and say 'there: that
               | neuron being set to that value is what made the AI
               | mistake the cyclist for a drop kerb'. Then we can fix the
               | value and the AI will not make that mistake again.
               | 
               | But when an AI gets sufficiently complex of course there
               | won't be explanations that make sense for those kinds of
               | errors, because just like a human the AI is integrating
               | lots of different bits of information that it has learned
               | are important and it has limited capacity and sometimes
               | it just gets its attention focused on the wrong thing and
               | it just didn't see the guy, okay?
               | 
               | Demanding that AI be explainable is fundamentally
               | demanding that it not be intelligent.
        
               | YeGoblynQueenne wrote:
               | Just to be clear, a neural network is not "an AI". "AI"
               | is the name of the research field. We don't have "AIs" as
               | in Science Fiction yet, and neural networks don't do
               | anything "just like a human". When people in AI research
               | talk about "attention" in neural networks, that's just an
               | anthropomorphic, and quite unfortunate, name for a
               | specific technique in training neural nets. It doesn't
               | mean that a machine vision system has the ability to
               | focus its attention in the same way that humans do.
               | 
               | That out of the way, there are AI approaches that can
               | explain their actions just fine without going dumb. For
               | example, I posted this comment earlier:
               | 
               | https://news.ycombinator.com/item?id=30872400
               | 
               | about an AI system called Nook that recently won a
               | tournament against 8 human champions of the card game
               | Bridge. In Bridge, players must be able to explain their
               | moves, so an AI player without the ability to explain its
               | decisions can't play a full game of Bridge.
        
               | neatze wrote:
                | Well, humans can explain the critical aspects of what
                | they are doing while teaching other humans; furthermore,
                | they can improve their explanations in real time, in the
                | current context, to make the teaching more effective,
                | such as by using analogies, stories, etc.
        
         | henriquecm8 wrote:
          | > AI is not capable of explaining why it makes a decision
          | that it does.
          | 
          | I know it's different, but we also don't always know why we
          | make certain choices. We just pick one and say something to
          | reassure ourselves: "I have a good feeling". A real AI will
          | need this type of "intuition".
        
         | embwbam wrote:
         | Think of it like human intuition. We haven't developed left-
         | brain logic AI yet, but we are getting closer to training AI to
         | outpace a trained human. They can look at a picture and say
         | "That's a dog". That's an intuitive thought, not a logical one.
          | A car could say "this situation feels dangerous" and ask the
          | driver to take over, or it could just react and steer
         | intuitively. It might not be able to reflect on its actions
         | yet, but that doesn't mean it couldn't learn to drive.
         | 
         | Not that I think we WILL get to full self driving any time
         | soon, but the car not being able to explain itself doesn't
         | prevent us from getting there.
        
           | visarga wrote:
           | I don't think anyone does video to steering wheel in one
           | network. But they do create a 3d model of the environment
           | from sensors and do planning on top of that. So there is the
           | explanation - what the car thought it saw.
        
         | TaylorAlexander wrote:
         | > AI is not capable of explaining why it makes a decision that
         | it does.
         | 
          | So, yes, this is true, but it's also not the full story. At
          | this point we don't have neural networks that are capable of
          | explaining their reasoning, but what we CAN do is a lot of
          | introspection on the network. There is an entire field called
          | AI explainability that seeks to probe the network in various
          | ways to help humans understand what is happening. Remember
          | that you have total control over the network, and you can run
          | inference thousands of times, run pieces of the network, or
          | feed test data into the network.
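          | 
          | A toy illustration of that kind of outside-in probing (just a
          | sketch, assuming scikit-learn; permutation importance is one
          | of many such probes, not any vendor's specific tooling):
          | 
          |     from sklearn.datasets import load_breast_cancer
          |     from sklearn.inspection import permutation_importance
          |     from sklearn.neural_network import MLPClassifier
          |     from sklearn.pipeline import make_pipeline
          |     from sklearn.preprocessing import StandardScaler
          | 
          |     # Train an ordinary "black box" network on a built-in
          |     # dataset.
          |     X, y = load_breast_cancer(return_X_y=True)
          |     model = make_pipeline(
          |         StandardScaler(),
          |         MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000,
          |                       random_state=0),
          |     ).fit(X, y)
          | 
          |     # Probe it from the outside: shuffle one input feature at
          |     # a time, re-run inference many times, and measure how
          |     # much accuracy drops. No access to the weights is needed.
          |     result = permutation_importance(model, X, y, n_repeats=20,
          |                                     random_state=0)
          |     for i in result.importances_mean.argsort()[::-1][:5]:
          |         print(f"feature {i}: {result.importances_mean[i]:.3f}")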
         | 
         | I am a casual observer of the field but I see this "AI can't
         | explain itself" thing thrown around a lot by people who don't
         | know about the extensive research being done in explainability.
         | 
         | Also Tesla has a massive testing infrastructure that checks
         | their network for regressions. So they will know if it suddenly
         | starts failing in some area before they release it. Obviously
         | this is new and complex tech so it is not perfect.
         | 
         | But I think self driving is important for their business, and
         | they are probably investing heavily in both batteries and AI.
         | And fully self driving electric taxis could eliminate the need
         | for many people to own an ICE car at all.
        
         | musicale wrote:
         | > AI is not capable of explaining why it makes a decision that
         | it does
         | 
         | There is a whole field of "explainable AI" - presumably in
         | contrast with (the very popular) "opaque, inexplicable AI."
         | 
          | Personally, I like decision trees because you can trace the
          | reasoning.
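          | 
          | For instance (a minimal sketch, assuming scikit-learn), the
          | learned rules can be printed and read directly:
          | 
          |     from sklearn.datasets import load_iris
          |     from sklearn.tree import DecisionTreeClassifier, export_text
          | 
          |     data = load_iris()
          |     tree = DecisionTreeClassifier(max_depth=3, random_state=0)
          |     tree.fit(data.data, data.target)
          | 
          |     # Print the full if/else structure of the trained tree,
          |     # i.e. the reasoning behind every prediction it makes.
          |     print(export_text(tree,
          |                       feature_names=list(data.feature_names)))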
        
       | josourcing wrote:
       | >a slight alteration in the data they receive can lead to a wild
       | change in outcomes.
       | 
        | In my experience, it's not the alteration that creates an
        | unpredictable outcome; it's the lack of nuance. Accuracy
        | requires an insane amount of detail, and it appears that some
        | people might be waking up to the fact that the brain is better
        | suited to handling those nuances.
       | 
       | I work with NLP and have discovered advantages in leaving some
       | decisions up to the person using AI rather than the computer. One
       | advantage being accuracy :-).
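        | 
        | A rough sketch of that kind of hand-off (toy data and a
        | threshold made up purely for illustration, assuming
        | scikit-learn):
        | 
        |     from sklearn.feature_extraction.text import TfidfVectorizer
        |     from sklearn.linear_model import LogisticRegression
        |     from sklearn.pipeline import make_pipeline
        | 
        |     texts = ["great product, works well", "love it",
        |              "terrible, broke in a day", "waste of money"]
        |     labels = ["positive", "positive", "negative", "negative"]
        |     clf = make_pipeline(TfidfVectorizer(),
        |                         LogisticRegression()).fit(texts, labels)
        | 
        |     def predict_or_defer(text, threshold=0.7):
        |         # Answer only when the model is confident enough;
        |         # otherwise hand the decision to the person using it.
        |         proba = clf.predict_proba([text])[0]
        |         if proba.max() >= threshold:
        |             return clf.classes_[proba.argmax()]
        |         return "defer to human"
        | 
        |     print(predict_or_defer("works well, love it"))
        |     print(predict_or_defer("it arrived on a Tuesday"))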
        
       | version_five wrote:
        | I wasn't aware of this and I'm happy it was posted. However, I
        | have to point out that it contains one of the most strained
        | analogies I've seen recently:
       | 
       | > We are saying that there might be a recipe for the cake, but
       | regardless of the mixers you have available, you may not be able
       | to make the desired cake. Moreover, when you try to make the cake
       | with your mixer in the kitchen, you will end up with a completely
       | different cake."
       | 
       | > In addition, to continue the analogy, "it can even be the case
       | that you cannot tell whether the cake is incorrect until you try
       | it, and then it is too late," Colbrook says. "There are, however,
       | certain cases when your mixer is sufficient to make the cake you
       | want, or at least a good approximation of that cake."
        
         | lil_dispaches wrote:
          | It is terrible; they literally can't explain what they mean.
          | They don't say what it means to "compute the A.I. network".
          | It makes me think it is a bogus story, some academic runoff.
        
           | YeGoblynQueenne wrote:
           | To "compute a neural network" is a long-established way to
           | say "train a neural network", which in turn is a long-
           | established way to say "find a set of weights for the neural
           | network that maximises its accuracy".
           | 
           | The idea is that a neural net is a kind of data structure
           | used in AI, like a decision tree or a decision list (like a
           | decision tree but it's a list). There are different
           | algorithms that can "compute", i.e. construct, a decision
           | tree from data. In modern parlance we say that the decision
           | tree is "trained". Same goes for neural nets, except the
           | network itself is typically constructed beforehand, and
           | manually (we refer to it as the "architecture" of the neural
           | net) and only its weights need to be tweaked until it has a
           | good accuracy- at which point we say the training algorithm
           | has "converged".
           | 
           | It's all a bit confusing because in common parlance there is
           | little distinction made between a neural net's network (its
           | architecture), the algorithm that trains the neural net by
           | finding the weights that minimise its error (backpropagation)
           | and the neural net with trained weights (the "model").
           | Sometimes I wonder if this distinction is clear in the minds
           | of people who actually train those things.
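            | 
            | To make the three things concrete (a minimal sketch,
            | assuming PyTorch; any framework would do):
            | 
            |     import torch
            |     import torch.nn as nn
            | 
            |     # 1. The architecture: fixed by hand before training.
            |     net = nn.Sequential(nn.Linear(2, 8), nn.ReLU(),
            |                         nn.Linear(8, 1))
            | 
            |     # Toy data: learn y = x1 + x2.
            |     X = torch.rand(256, 2)
            |     y = X.sum(dim=1, keepdim=True)
            | 
            |     # 2. The training algorithm: backpropagation plus
            |     #    gradient descent, which only adjusts the weights,
            |     #    never the architecture.
            |     opt = torch.optim.SGD(net.parameters(), lr=0.1)
            |     loss_fn = nn.MSELoss()
            |     for _ in range(500):
            |         opt.zero_grad()
            |         loss = loss_fn(net(X), y)
            |         loss.backward()
            |         opt.step()
            | 
            |     # 3. The model: the same architecture, now carrying the
            |     #    trained weights.
            |     print(loss.item())
            |     print(net(torch.tensor([[0.25, 0.50]])))  # ~ 0.75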
           | 
            | Btw, the study is solid and meaningful. It's a theoretical
            | result. More of those are needed in machine learning; we've
            | got plenty of empirical results already.
        
       ___________________________________________________________________
       (page generated 2022-03-31 23:01 UTC)