[HN Gopher] Stop Calling Everything AI, Machine-Learning Pioneer...
       ___________________________________________________________________
        
       Stop Calling Everything AI, Machine-Learning Pioneer Says
        
       Author : Hard_Space
       Score  : 192 points
       Date   : 2021-10-21 05:51 UTC (17 hours ago)
        
 (HTM) web link (spectrum.ieee.org)
 (TXT) w3m dump (spectrum.ieee.org)
        
       | tabtab wrote:
       | Too late. Marketers commandeered it and ain't giving it back.
       | Create new terms such as ACS: "Artificial Common Sense" for
       | future breakthroughs.
        
       | drpixie wrote:
        | Michael I. Jordan is spot on. We have NO artificially intelligent
       | systems, for any sensible definition of "intelligent". None.
       | ZERO.
       | 
       | We have "big data". We have (brittle and rather poor) "pattern
       | recognition". We have (very limited) "simulate a conversation".
       | But we do not even know what "intelligence" really means!
       | 
        | We (the industry) should recognise "AI" as a term that has become
       | essentially meaningless. "AI" is now nothing but a marketing
       | splat, like "New and Improved" or "Low Fat".
        
         | shapefrog wrote:
         | Yeah, but my hot new startup is _real_ AI.
         | 
         | Now where is my funding?
        
         | Salgat wrote:
          | Check out Convolutional Neural Networks. They learn from
          | example images, progressively improving as you train them,
          | and the deeper the layer, the more abstract the recognition
          | becomes, going from simple shapes and edges up to full-on
          | eyes, the figure of a person, or a car. It's absolutely
          | learned intelligence, not to be confused with sentience.
         | 
         | Remember, a critical part of human intelligence is pattern
         | recognition. If you dismiss pattern recognition as not
         | intelligence, you're dismissing a fundamental part of what
         | makes humans intelligent. It's no different than an insect with
         | the intelligence to recognize predators.
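          | 
          | A minimal sketch of that layer hierarchy (assuming PyTorch;
          | the shapes and class count are illustrative, not from any
          | particular paper):
          | 
          |     import torch
          |     import torch.nn as nn
          | 
          |     # Early layers respond to edges; deeper ones to parts.
          |     model = nn.Sequential(
          |         nn.Conv2d(3, 16, 3, padding=1),   # edges, blobs
          |         nn.ReLU(), nn.MaxPool2d(2),
          |         nn.Conv2d(16, 32, 3, padding=1),  # textures, shapes
          |         nn.ReLU(), nn.MaxPool2d(2),
          |         nn.Conv2d(32, 64, 3, padding=1),  # parts: eyes, wheels
          |         nn.ReLU(), nn.AdaptiveAvgPool2d(1),
          |         nn.Flatten(), nn.Linear(64, 10),  # 10-way classifier
          |     )
          |     logits = model(torch.randn(1, 3, 32, 32))  # one RGB image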
        
         | jcun4128 wrote:
          | I wonder what the breakthrough will be: hardware or software?
          | It seems like we can make as powerful a computer as we want,
          | but what makes sentience? Does a fly or gnat have sentience?
        
           | RansomStark wrote:
            | I know I shouldn't be so pedantic, but you probably don't
            | mean sentience but sapience [0]. Sentience is the ability to
            | sense and react to stimuli. A fly or a gnat is certainly
            | sentient; they can see, smell, feel the sensation of touch,
            | and react accordingly. That is all that is required for a
            | being to be sentient. A really interesting example: if you
            | shock a caterpillar, then even after metamorphosis the
            | resultant moth remembers the experience and reacts
            | accordingly [1].
           | 
            | Although it is pedantic, it is also an important distinction:
            | sentience and sapience exist on a spectrum. At one end you
            | might have gnats as purely sentient beings; at the other,
            | humans always claim themselves to be fully sapient, so much
            | so that we named our species Homo sapiens.
           | 
           | Different species exist somewhere on this spectrum and where
           | a particular species ends up is subjective. Many people would
            | put whales and dolphins [2], and often dogs and cats, further
            | towards the sapient end of the spectrum (vegans would
           | probably push most mammals towards the sapient end), with
           | insects remaining simply sentient (even for many vegans).
           | 
           | As humans we seem to have an almost inbuilt understanding
           | that not all species are capable of feeling the same way we
           | do, but when you look at the animals we seek to protect and
           | those we don't, what you find is that the less human the
           | species the less we care for the well being of a particular
           | specimen of that species; we care most about the suffering of
            | mammals (more so for the larger ones than the small), and
           | least about the suffering of fish or insects or spiders.
           | 
           | I'd argue that our inbuilt understanding of where an animal
           | fits on the sentient-sapient spectrum is simply how easy it
           | is for us as humans to imagine the plight of a specimen of
           | that species.
           | 
            | What Is It Like to Be a Bat? [3] is an excellent paper on
            | this subject. It argues that we as humans can never fully
            | understand what it is to be a bat; we can imagine what it is
            | like to fly, or to echolocate, but that will never be the
            | same as the bat's perspective.
           | 
            | From where I'm sitting, computers are already sentient: they
            | can sense their environment and react accordingly. Self-
            | driving cars are an excellent example, but so is the
            | temperature sensor in my greenhouse that opens a window to
            | keep it cool; it senses the temperature of the air and
            | reacts accordingly.
           | 
           | I in no way believe that my temperature sensor has any
            | sapient qualities; it can't reason about why it's reacting,
            | it can simply react. I don't believe that as the temperature
            | passes the 'open window' threshold the system recognises
           | the signal as pain. But the same is true of the fly. If I
           | pull a wing off a fly, I know it senses the damage, but does
           | it really feel it as pain?
           | 
            | When considering my cat, if I step on its tail, I'm sure it
            | feels pain, but is that true, or does it simply react in a
            | way that I as a human consider an appropriate reaction to
            | pain?
           | 
           | I can't ever truly understand how my cat feels when I stand
           | on her tail, just as I can't truly know that the fly isn't
           | trying to scream out in horror and pain at what I've just
           | done to it.
           | 
            | It is because of the subjectivity of where we place animals
            | on the sentient-sapient spectrum, and our inability to ever
            | fully appreciate the experience of another, that I am
            | convinced that even if we did create a sapient machine, its
            | experience would be so far removed from our own that we
            | would fail to recognise it as such.
           | 
            | The problem with this rabbit hole is twofold: first, I might
            | convince myself that eating meat is wrong, and, well, I like
            | bacon too much for that; second, you'll quickly end up in
            | the philosophical territory of I, Robot:
           | 
           | "There have always been ghosts in the machine. Random
           | segments of code, that have grouped together to form
           | unexpected protocols. Unanticipated, these free radicals
           | engender questions of free will, creativity, and even the
           | nature of what we might call the soul. Why is it that when
           | some robots are left in darkness, they will seek out the
           | light? Why is it that when robots are stored in an empty
           | space, they will group together, rather than stand alone? How
           | do we explain this behavior? Random segments of code? Or is
           | it something more? When does a perceptual schematic become
           | consciousness? When does a difference engine become the
           | search for truth? When does a personality simulation become
           | the bitter mote... of a soul?" [4].
           | 
           | [0] https://grammarist.com/usage/sentience-vs-sapience/ [1]
           | https://www.newscientist.com/article/dn13412-butterflies-
           | rem... [2] https://uk.whales.org/whale-culture/sentient-and-
           | sapient-wha... [3] https://warwick.ac.uk/fac/cross_fac/iatl/s
           | tudy/ugmodules/hum... [4]
           | https://www.imdb.com/title/tt0343818/characters/nm0000342
        
             | jcun4128 wrote:
              | Yeah, by sentience I did mean more than sensing. As long as
              | sapience doesn't imply human, then I agree. It's just about
              | awareness, real awareness... which I don't know what that
              | means. An IMU is aware, right? Well, it's a sensor.
             | 
             | > named our species Homo Sapiens
             | 
             | I see
             | 
             | > I in no way believe that my temperature sensor has any
             | sapient qualities, It can't reason about why it's reacting
             | 
             | Right like the training
             | 
             | > recognises the signal as pain
             | 
              | Yeah, that's something else too. I know there are concepts
              | like word2vec, but still, how do you get meaning?
             | 
             | > even if we did create a sapient machine it's experience
             | would be so far removed from our own
             | 
             | Yeah maybe a sociopath machine
             | 
             | That was a great movie
        
         | 72deluxe wrote:
         | Correct. Apparently my phone has "AI" because it recognises a
         | flower as a flower and applies a colour filter and background
         | blur when I use my camera. This is not AI.
         | 
         | By the same extension of logic, any program that recognises
         | input data and performs some form of pre-programmed logic is
          | AI, i.e. any computer program?
        
           | sgt101 wrote:
           | Recognizing a flower is absolutely AI.
           | 
           | https://xkcd.com/1425/
           | 
           | ok - or a bird.
        
             | DonHopkins wrote:
             | Not hotdog!
             | 
             | https://www.youtube.com/watch?v=vIci3C4JkL0
             | 
             | https://apps.apple.com/us/app/not-hotdog/id1212457521
             | 
             | https://play.google.com/store/apps/details?id=com.codylab.s
             | e...
        
             | uoaei wrote:
             | If that is intelligent behavior then literally any physical
             | process is "intelligence" embodied, though the magnitude or
             | intensity of this intelligence obviously varies based on
              | what exactly is happening.
             | 
             | This is because anything that happens in reality is
             | computable, and you have described a straightforward
             | computation as "intelligence".
             | 
             | I actually happen to sincerely adhere to this perspective,
             | as a particular flavor of panpsychism.
        
               | sgt101 wrote:
               | A battery discharging does not recognise flowers. The sun
               | does not recognise flowers. I do not create an excess of
                | energy by splitting atoms; these things are not
               | equivalent at all levels of abstraction.
        
               | uoaei wrote:
               | Of course not, and it is silly to try to paint my
               | argument as trying to claim that. A battery discharging
               | is not very intelligent, but the GGP implies that this
               | exists as a gradient down to the lowest levels of
               | physical dynamics.
               | 
               | Put another way, the complexity of the computation
               | determines the complexity of the result. The
               | sun+flower+ground system "recognizes" a flower by means
               | of striking the flower and the surrounding area with
               | photons and "recording" the result as the shadow.
        
           | lost-found wrote:
           | When young children start recognizing shapes, animals,
           | colors, etc, you don't consider that a sign of intelligence?
           | What is it a sign of then?
        
         | zamfi wrote:
         | > We have "big data". We have (brittle and rather poor)
         | "pattern recognition". We have (very limited) "simulate a
         | conversation".
         | 
         | Yes, yes, yes, exactly.
         | 
          | > We have NO artificially intelligent systems, for any sensible
         | definition of "intelligent". None. ZERO.
         | 
         | Yes, though -- what are some of your sensible definitions of
         | intelligence?
         | 
         | > But we do not even know what "intelligence" really means!
         | 
         | ...oh. I mean, you're not wrong, we don't know. But then how
         | can you argue that AI isn't "intelligent"?
         | 
         | What if human "intelligence" really just _is_ pattern
         | recognition too? With maybe some theory of mind, executive
         | function, and  "reasoning" -- Mike complains machines can't do
         | thinking in the sense of "high level" reasoning, though one
         | could argue they just don't have enough training data here.
         | 
         | And everything else is emergent?
         | 
         | Then we're maybe not as _super_ far off.
         | 
         | I'm reminded of the Arthur C. Clarke quote [0]:
         | 
         | > If an elderly but distinguished scientist says that something
         | is possible, he is almost certainly right; but if he says that
         | it is impossible, he is very probably wrong.
         | 
         | [0] https://www.scientificamerican.com/article/remembering-
         | sir-a...
        
         | peterlk wrote:
         | I've been having "AI" debates like this for about 10 years now,
         | and I think they usually go in 1 of 2 directions:
         | 
          | 1. We don't know what intelligence is.
          | 
          | 2. AI can never be intelligent because humans are special (in
          | various ways).
         | 
         | Of the two, I think that 1 is the more compelling to talk
          | about. Let's look at state-of-the-art large language models
          | (GPT, BERT, BART, T5, etc.). Everyone claims that they can't be
         | intelligent because they're just cleverly predicting the next
         | tokens. The most common failure mode of this is that they
         | hallucinate - if you ask them to do something for you, they'll
         | get it wrong in a way that kind of makes sense. There are some
         | other more subtle problems as well like common sense reasoning,
         | negation, and factuality. We could say that because of these
         | problems they are not "intelligent". But why is that so
         | important? Can we say with certainty that human intelligence is
         | more than just patterned IO? If it is just highly tuned
         | patterned IO with the environment, perhaps we have discovered
         | intelligent systems, but they're handicapped because they're
         | limited in their sensory perception (read: data modalities).
         | And perhaps by combining several of these models in clever
         | ways, we will end up with an architecture for pattern IO that
         | is indistinguishable from human intelligence.
         | 
         | The naysayers claim that this won't work because we'll still
         | end up with mere pattern prediction machines. But this starts
         | to look like a "humans are special" argument.
        
           | rhn_mk1 wrote:
           | Does it even matter "what intelligence is"? Much like "life"
            | [0], the difficulty seems to come from being unable to
           | define it, rather than "finding" it. There are multiple ways
           | it can be defined, based on a bunch of different properties,
           | and each definition delivers different outlooks.
           | 
           | Similar to "life", we use "intelligence" in everyday speech
           | without specifying which definition we mean. I don't think
           | that's going to change - it's just as unproductive to limit
           | "life" to a single definiton (what about viruses?
           | unconsciousness? ecosystems?) as it would be with
           | "intelligence" (pets? ants? being able to converse with a
           | human? showing reasoning? creativity?).
           | 
           | But that also means that the popular term "AI" will never be
           | precise.
           | 
           | [0] https://www.quantamagazine.org/what-is-life-its-vast-
           | diversi...
        
           | hdjjhhvvhga wrote:
           | Well, it will be interesting to see how this develops in the
           | future. At some point we will have systems powerful enough to
            | process and learn in real time, using sensors that are the
            | equivalent of human senses (or even more powerful). At this
           | point, if we can successfully model and mimic a typical
           | human, why should it matter if it's not a human?
           | 
           | As for the hallucinating point, I remember a funny story. I
           | once tripped on the curb and fell down; my foot ached for a
           | week. My then 4-year-old daughter took her first-aid set for
           | dolls and tried to "cure" my foot. My mother heard the story
           | and found it cute, so she asked my daughter: "Will you cure
           | me like that, too?" My daughter seemed stupefied and
           | answered: "Are you going to trip and fall, grandma?"
           | 
           | My feeling is that the missing links will be found one day
           | and the AI of the future will be able to apply more adult-
           | like "reasoning."
        
           | mannykannot wrote:
           | As a materialist in matters of the mind, I regard proposition
           | 2 to be an unverifiable belief of those who hold it, but I
           | also regard proposition 1 as being simply a statement of how
           | things currently are: at this point, we do not, in fact, know
           | what intelligence is.
           | 
           | To say that it is "just" highly tuned patterned IO with the
           | environment would be so broad as to be meaningless; all the
           | explanation is being brushed away by that "just", and in the
           | current state of research, no-one has either demonstrated AI
           | or explained intelligence with sufficient specificity for
           | this to be a clearly true synopsis of our knowledge.
           | 
           | You are not quite, however, asserting that to be so, you
           | simply posed the question of whether it is so. In so doing,
           | you are shifting the burden of proof, and proposition 1
           | stands until someone settles the issue by presenting a
           | testable - and tested - theory of intelligence (note that I
           | wrote _of_ intelligence, not _about_ intelligence; we have
           | plenty of the latter that do not rise to the level of being
           | the former.)
           | 
           | My attitude to the current crop of models is that they
           | demonstrate something interesting about the predictability of
           | everyday human language, but not enough to assume that simply
           | more of the same (or something like it) will amount to AI -
           | we seem to be missing some important parts of the puzzle. If
           | a language model can come up with a response coherently
           | explaining why I am mistaken in so thinking, then I will
           | agree that AI has been achieved.
        
         | conductr wrote:
         | I think the technical side of the industry has known this all
         | along. AI is a dream to pursue.
         | 
         | The business/marketing side of the industry has doubled down on
         | the term. Many industries outside have adopted it as a way to
         | make their boring widget sound new and interesting.
         | 
          | I bought a TV with built-in AI recently. It's just a TV to me.
         | I'm sure it has some algorithms but that word is old and does
         | not sound premium anymore.
         | 
          | Whenever I see an AI startup, I mostly expect it's really just
          | some guy doing things that don't scale, like manning a chat
          | bot or something.
        
         | AstralStorm wrote:
          | We have; they just kinda suck so far. Look up DeepMind's DDQN
          | experiments, where the AI develops new strategies for an
          | entirely new game, and the attempts to solve Montezuma's
          | Revenge and other Atari classics by both DeepMind and OpenAI.
          | Both systems are somewhat adaptable to new problems too. There
          | is also Uber's Go-Explore, and RMT's.
         | 
          | These are the closest we have come to intelligence. They deal
          | with large unobservable state and novelty via few-shot
          | learning, few objectives, and sparse rewards. They haven't
          | quite cracked automated symbolization. (The AIs do not quite
          | create a complete symbolic representation of the game.)
         | 
         | I recommend following AAAI conferences for more details.
        
         | jjcon wrote:
          | 'AI' as a term has been used by people in the industry for
          | decades, even by early computer science pioneers referring to
          | incredibly simple applications - it was Hollywood that
          | appropriated the term for the likes of Skynet, not the other
          | way around.
        
         | silent_cal wrote:
          | You said it man, spot on man.
        
       | senectus1 wrote:
       | Could not agree more.
       | 
        | The watering down, or whitewashing, of these terms by
        | overmarketing is a significant issue.
        
       | hprotagonist wrote:
       | "stochastic optimization" is a lot more honest, but much less
       | easily funded.
        
       | mszcz wrote:
        | That's what pisses me off the most about the world these days,
        | my pet peeve. Words seem to be losing their meaning. People just
        | throw around whatever words they want.
       | 
       | Recently I've been shopping for a steam station and I've seen
        | that the top-of-the-line Philips Stream&Go has a camera "with
        | AI" that recognizes the fabric. The sales guy insisted that the
        | claim was true. Oh please. If it is, it is only in the simplest,
        | most technical sense possible, so as not to get sued. There are
        | more heuristics in there than anything else.
       | 
       | Or the "luxury & exclusive iPhone etui for $9.99". Or "we value
       | our customers privacy". Or the Apple "Genius". Or the amount of
       | "revolutionary" things "invented" at Apple for that matter (not
       | that they don't, just not as much as they claim).
       | 
       | (BTW, don't know how I landed on Apple at the end, they're not
       | the worst offenders)
        
         | hdjjhhvvhga wrote:
         | My pet peeve is "it will blow your mind." Hasn't happened to me
         | ever. Also "exciting." Yes, I was excited when I first saw
         | C-128 with its dual C-64 and CP/M modes. When I saw Linux for
         | the first time on a 386. When I created my first app for the
         | iPhone 3GS. But for marketing folks everything (they do) seems
          | exciting. No, your revolutionary project is not revolutionary,
          | and your innovation means gluing together many ideas of others
          | (who also borrowed them from other people, tbh).
        
           | rambojazz wrote:
           | They are not excited about any product. They're only excited
           | by the thought of how much money they can make with it. The
           | more money, the more exciting the product is. "This product
           | is exciting! Why are you not excited? I'm gonna make a load
           | of money by selling it to you!"
        
         | xyz11 wrote:
         | Agreed, just because some words are catchy and appear
         | intellectual does not mean they should be used everywhere; it
         | is very misleading and sometimes unprofessional.
        
         | echopurity wrote:
         | These days, eh? Maybe you just started to notice, but words
         | never had much meaning.
        
       | calibas wrote:
       | >today's artificial-intelligence systems aren't actually
       | intelligent
       | 
       | Artificial butter isn't actual butter, that's why we call it
       | artificial.
       | 
       | Personally, I prefer saying ML just because of all the semantics
       | regarding AI.
        
       | hdjjhhvvhga wrote:
        | That ship has long since sailed; today everything is AI. It has
        | nothing to do with AI, only with the necessity of using the
        | current buzzword dominating the market. Before it was the web,
        | XML, Ajax, OO, cloud, whatever. Now it's "AI." It means
        | absolutely nothing. People implement the simplest algorithms
        | with a couple of _case_ statements and call it AI without
        | blinking. Everyone else jumps on the train because they don't
        | want to be perceived as obsolete. All this happens alongside
        | (and sometimes totally independently of) some real, interesting
        | developments in machine learning.
        
       | nbardy wrote:
       | I used to be in the grouchy, "Stop calling it AI camp". But the
       | last three years of progress are impossible to ignore. Scaling
       | models is working better than anyone thought. We're in for a wild
       | ride with this new AI.
        
       | 1cvmask wrote:
       | Was there ever a moment in time when every company called
       | themselves an "Excel" company? I feel the same about "AI".
        
       | indymike wrote:
       | What is the marketing team going to do without machine learning,
        | artificial intelligence, blockchain-driven, cloud-based PaaS?
        
         | pjmorris wrote:
          | Clearly, quantum blockchain-driven, cloud-based PaaS.
        
           | mlac wrote:
           | Please let me know when this is ready for prime time - I'm
           | really interested in bringing an Intelligent Quantum (IQ)
           | Blockchain driven cloud based PaaS security platform to
            | market. The secret of IQ technology is that, while the
           | others have "artificial" intelligence, there is nothing fake
           | about ours.
           | 
           | I believe we could realize great value with a blue ocean
           | strategy like that.
        
           | unemphysbro wrote:
           | This certainly is the future of Impact-Driven Value as a
           | Service (I-DVaaS).
        
         | toomanybeersies wrote:
         | They might have to switch to some new up-and-coming buzzwords,
         | such as "supply chain", or "graph".
         | 
         | Here's a great example I found after a quick search [1]:
         | 
         | > Why TigerGraph, a Native Parallel Graph Database for Supply
         | Chain Analysis?
         | 
         | > Manage Supply Chains Efficiently With Deep Link Analytics
         | 
         | Oh, and then there's also "Internet of Things":
         | 
         | > Uncover Insights From Temporal Analysis of Internet of Things
         | Data
         | 
         | [1] https://www.tigergraph.com/solutions/supply-chain-analysis/
        
       | nikanj wrote:
       | How else are we going to get funded?! Besides, the board came
       | down on the CEO hard, demanding we have an AI strategy. Calling
       | our round-robin load balancer "heuristic AI" gives us a break.
        
         | blackbear_ wrote:
         | And don't forget "rule-based AI"!
        
       | [deleted]
        
       | Veedrac wrote:
        | People have been calling everything agent-like AI since forever.
        | Video games had AI. Heck, Pac-Man's ghosts had AI. Nobody cared.
        | They understood what it was referring to.
       | 
       | I only started hearing how it would 'dilute the language' or the
       | term was 'disingenuous' or whatever once AI started actually
       | doing intellectually impressive feats, and the naysayers started
        | trying to redefine language to suit their preferred worldview.
       | 
       | The fact is the term has always been used loosely. Even if it
       | hadn't been, several of the undisputed top machine learning
        | corporations (e.g. DeepMind) have the explicit goal of reaching
       | general intelligence, which remains true even if you are sure
       | they will fail. Its use as a term is more appropriate than ever.
        
       | giardini wrote:
       | I periodically re-read the following to keep my perspective on
       | AI:
       | 
       | "ARTIFICIAL INTELLIGENCE MEETS NATURAL STUPIDITY" by Drew
       | McDermott
       | 
       | https://www.researchgate.net/publication/234784524_Artificia...
       | 
       | (click on "Read Full Text" and then scroll down a half page.)
        
       | Raro wrote:
       | "AI is 'Machine Learning' for journalists"
       | 
       | Unfortunately I can't recall where I first heard this great quip.
        
         | marginalia_nu wrote:
         | "'Machine Learning' is Linear Algebra for marketers"
         | 
         | You read it here first.
        
       | DeathArrow wrote:
       | AI means almost intelligent.
        
       | m0zg wrote:
       | This is pissing into the wind. There's nothing any "machine
       | learning pioneer" can do about marketing, and this is marketing.
       | And it's all right. People are beginning to catch on that we
       | won't have "thinking robots" or "self driving cars" in the
       | foreseeable future. Doesn't mean that it's not intelligence or
        | that it's not useful. It's also most definitely "artificial". What
       | we call it is a matter of tertiary importance at best.
        
       | mathematically wrote:
       | Unrelated to the article but I ran across an interesting result
       | recently that is related to AI (and the hype surrounding it): Let
       | A be an agent and let S be a Turing machine deductive system. If
       | A correctly understands S, then A can correctly deduce that S
       | does not have A's capability for solving halting problems. [1]
       | 
       | One way to interpret this is that all existing AI systems are
       | obviously halting computations simply because they are acyclic
       | dataflow graphs of floating point operations and this fact is
       | both easy to state and to prove in any formal system that can
       | express the logic of contemporary AI models. So no matter how
       | much Ray Kurzweil might hope, we are still very far from the
       | singularity.
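        | 
        | A toy illustration of the "obviously halting" point (a numpy
        | sketch of my own, not from the paper):
        | 
        |     import numpy as np
        | 
        |     # A feedforward model is a fixed, finite sequence of
        |     # float ops with no back-edges, so termination is trivial.
        |     def mlp(x, weights):
        |         for W in weights:               # finitely many layers
        |             x = np.maximum(W @ x, 0.0)  # matmul + ReLU
        |         return x                        # acyclic: no loop back
        | 
        |     layers = [np.random.randn(4, 8), np.random.randn(2, 4)]
        |     print(mlp(np.random.randn(8), layers))  # always halts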
       | 
       | 1: https://link.springer.com/article/10.1007/s11023-014-9349-3
        
         | IshKebab wrote:
         | > all existing AI systems are obviously halting computations
         | simply because they are acyclic dataflow graphs
         | 
         | No they aren't. Think about LSTMs for example.
        
           | mathematically wrote:
           | So how do you get a value out of an LSTM?
        
             | IshKebab wrote:
             | Run it for as long as you want and look at the output
             | state.
             | 
              | How do you get a value out of a person?
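              | 
              | Something like this, as a minimal sketch (PyTorch-
              | style, my own illustration):
              | 
              |     import torch
              |     import torch.nn as nn
              | 
              |     # The recurrence is a loop whose step count the
              |     # caller picks: any given run halts, but the
              |     # graph itself is cyclic, not acyclic.
              |     cell = nn.LSTMCell(8, 16)
              |     h = torch.zeros(1, 16)
              |     c = torch.zeros(1, 16)
              |     x = torch.randn(1, 8)
              |     for _ in range(100):    # as long as you want...
              |         h, c = cell(x, (h, c))
              |     print(h)                # ...then read the state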
        
               | mathematically wrote:
               | So then in the statement of the theorem the agent A can
               | determine how many cycles the unit will run before
               | halting, correct?
        
         | AstralStorm wrote:
          | Your argument is wrong or miscommunicated. The AI itself can
          | figure out only some of the halting problem results, not all,
          | and it can make mistakes. It is not an oracle.
         | 
         | Recursive neural networks are not necessarily halting when
         | executed in arbitrary precision arithmetic.
        
           | mathematically wrote:
           | Have you read the referenced article?
        
       | binarymax wrote:
       | I propose we actually HIJACK the AI acronym to be 'Advanced
       | Instruments'.
       | 
          | I've had a ranty blog post in my head about this for a while,
          | so I'm really glad to see this article.
       | 
       | People are going to use AI forever - but we need to change what
       | it stands for, since they will never switch to using machine
       | learning or ML.
        
         | dr_dshiv wrote:
          | I like Artifact Intelligence. Then we can explode the term's
          | use even further, which I think would be better than moving
          | goalposts.
        
       | pkrumins wrote:
       | My AI is the `if` statement.
        
       | jvanderbot wrote:
       | I'm in the submarine AI camp[1]. I'd rather build tools that
       | kinda help humans do thinking, rather than thinking machines.
       | Like how submarines help humans swim but who cares if they are
       | actually swimming.
       | 
       | 1. "The question of whether a computer can think is no more
       | interesting than the question of whether a submarine can swim." -
       | Edsger Dijkstra
        
         | joconde wrote:
         | > 1. "The question of whether a computer can think is no more
         | interesting than the question of whether a submarine can swim."
         | - Edsger Dijkstra
         | 
         | Not sure what he meant by that, it doesn't really make sense to
         | me: thinking yields information and deductions that can be very
         | useful, while swimming by itself accomplishes nothing useful.
         | 
         | Actually, I just can't make any sense of the sentence, because
         | while I have a general idea of what a thinking machine could
         | be, I have no clue what it means for a submarine to "swim",
         | versus whatever the alternative is ("self-propelling"?).
        
           | lkschubert8 wrote:
            | The point is that the goal of a submarine is to allow humans
            | to traverse large distances underwater; the semantics of how
            | it does so are unimportant. Similarly, the ability of a
            | computer to think is moot; it's the results we get from it
            | that matter.
        
           | blacksmith_tb wrote:
            | Not a polymath physicist, but I take Dijkstra to be saying
           | "don't obsess so much over the differences you fail to see
           | the similarities". The submarine doesn't swim, but it still
           | gets somewhere in the water.
        
             | mirker wrote:
             | [And a program/computer may not think yet still does
             | interesting computations]
        
           | robbrown451 wrote:
           | His point was that we might say "submarines aren't actually
           | swimming" while saying "of course airplanes actually fly."
           | 
            | What's the difference? The only difference is semantic: we
           | seem to have defined "swimming" to be arbitrarily restrictive
           | (as in, only applying to animals), while we haven't done so
           | for flying.
           | 
           | Meanwhile, we can say a magnet is attracted to metal, and no
           | one says "wait, it can't ACTUALLY be attracted to it, since
           | that takes a brain and sentience and preferences."
           | 
           | And then, most of us don't bat an eye if someone says "my
           | computer thinks the network is down" or "my phone doesn't
           | recognize my face" or even "the heater blowing on the
           | thermostat is making it think it is hotter than it is." It's
           | not helpful to interject with, "well, thermostats don't
           | really think."
           | 
           | The point is, these are arbitrary choices of how we define
           | words (which may vary in different languages), and they
           | aren't saying meaningful things about reality.
        
         | rexreed wrote:
          | That's called Augmented Intelligence, a well-trod field with a
          | better ROI than many aspects of autonomous, human-replacement
          | intelligence.
        
         | billyjobob wrote:
         | There are lots of creatures that can swim. Most of them swim
         | better than a human. That's why it's not very interesting when
         | we discover a new one.
         | 
         | In contrast, humans are arguably the only creatures in the
         | universe that can think. Certainly we have never discovered a
         | creature that can think _better_ than a human. Thus it would be
         | highly exciting if we discovered or created one.
        
           | ekianjo wrote:
            | The human mind is so flawed I am not sure it's a good model
            | for building thinking machines.
        
             | sgt101 wrote:
             | no one has a good model of a human mind
        
               | smolder wrote:
               | Depends on what you mean by good, and what level of
               | detail you expect. People certainly have developed
               | accurate enough models to be able to exploit biases and
               | predict & influence patterns of behavior in others.
        
           | netizen-936824 wrote:
           | >In contrast, humans are arguably the only creatures in the
           | universe that can think.
           | 
           | Citation very strongly required to back up this statement.
            | I'm not aware of any evidence that this is the case; in fact
            | it's quite the opposite.
        
             | [deleted]
        
             | herlitzj wrote:
             | Second. Not sure how "think" is being defined here, but
             | even for a spatial/temporal sense of self or mortality I'd
             | say that's probably an incorrect statement. Let alone the
             | general notion of thinking. E.g. dogs exhibit basic
             | thinking like predetermination, delayed gratification, etc.
        
           | thesz wrote:
            | There are many humans who think better than, well, the
            | average human; if not in all areas of life, then in one or a
            | couple of them, at the very least.
            | 
            | Frankly, one of the reasons I visit HN is to find them. ;)
            | 
            | Flattery aside, what fascinates me is the _difference_ in
            | thinking - new approaches, not seen or thought of by me.
        
           | jvanderbot wrote:
            | I can see that. It's just not something I'm particularly
           | interested in.
           | 
           | I'm not an expert in brains. After taking a few casual
           | courses I saw recommended here, and reading some of the
           | books, I'm suspicious that our story about our own
           | intelligence is somewhat oversold, and we may find that our
           | intelligence is 99% mundane, custom-built automation and 1%
            | storytelling. But that storytelling is pretty magical.
           | 
           | Buddhism and Modern Psychology
           | 
           | Happiness Hypothesis (which is more about brains than
           | happiness)
        
           | cronix wrote:
           | How does a creature proactively use tools to more easily
           | prepare/gather foods without "thinking?"
           | 
           | How does a young creature observe an elder doing something,
           | and then copy it, without some form of thought occurring? It
           | seems the elders are teaching the youth, and the youth are
           | learning, but I'm not sure how that can happen without
           | thinking.
           | 
           | https://www.youtube.com/watch?v=5Cp7_In7f88
        
         | bitwize wrote:
         | The thing about that quote is I used to work on AUVs. And we
         | would routinely say "The AUV swims out to a waypoint" or
         | whatever. So, having established in my mind that a machine can,
         | in some sense, swim, talking about a machine thinking made a
         | whole lot more sense. Things like "The computer thinks I'm a
         | different user than I actually am" or "The computer is thinking
         | of a solution that satisfies the constraints" seemed less
          | needlessly anthropomorphic.
        
         | RicoElectrico wrote:
         | Me too, but I am afraid the human end needs to think critically
         | as well. You don't need ML/AI to have people do stupid shit
         | because "computer told them to do so". A good example would be
         | validators bundled with the web OpenStreetMap editor iD, which
         | are good from far, far from good.
         | 
         | https://lists.openstreetmap.org/pipermail/talk/2021-July/086...
        
           | jvanderbot wrote:
           | I mean, you can't make a hammer and expect it to be as useful
           | to someone without hands or nails.
        
         | webmaven wrote:
         | _> "The question of whether a computer can think is no more
         | interesting than the question of whether a submarine can
         | swim."_ - Edsger Dijkstra
         | 
         | Something about that quote always bothers me. I'm not sure
         | what, but if I were to try to express the same fundamental
         | idea, I would probably phrase it as "whether a bicycle can
         | walk".
        
           | jhncls wrote:
           | You can read Edsger's handwritten quote in the third
            | paragraph on page EWD898-2.
           | 
           | [0] https://www.cs.utexas.edu/users/EWD/ewd08xx/EWD898.PDF
        
           | robbrown451 wrote:
           | I think you are missing the point.
           | 
           | A submarine "swims" in the same way an airplane "flies."
           | 
           | But for some reason, in English anyway, we seem to define
           | "swim" such that, by definition, only an animal can do it. Or
           | at least, it must behave similarly to a swimming
           | animal...such as by flapping fins and the like.
           | 
           | Meanwhile we don't think an airplane doesn't fly simply
           | because it doesn't flap its wings.
           | 
           | The point is that semantics tells you very little about
           | reality. And debating semantics is not particularly
           | interesting, or at least a good way of missing the point.
           | 
            | Walking bicycles? Well, OK. That's much more of a stretch,
            | partly because there is a human much more directly involved
            | in the propulsion of a bicycle, compared to things with
            | actual engines (airplanes and submarines).
        
             | OJFord wrote:
             | I don't know how useful/meaningful any of this is? It's
             | just language quirks, and different languages have
             | different quirks. In Hindi for example the verb 'to walk'
             | is the same as ~'to operate' - so you can 'walk', and 'walk
             | a bicycle', etc.
        
       | Barrin92 wrote:
       | People often bring up the quip that "as soon as something is AI,
       | people stop calling it AI" but I think pretty much the opposite
       | has happened, the term has been expanded so much that almost any
       | stochastic computation that solves some computational problem is
       | now shoved under that umbrella.
       | 
        | However, unlike Jordan in the article, who takes that as an
        | opportunity to focus even more on engineering and practical
        | applications, I think it's well worth going back to the original
        | dream of the 1950s. I'd be happy if people returned to the
        | drawing board to figure out how to build real artificial life
        | and how to create a mind.
        
         | tluyben2 wrote:
         | People in my circles stopped saying AI and switched to ML after
         | it 'did something previously identified as more or less
          | intelligent'. I see this as more or less the norm.
         | 
         | That makes some sense because once we understand how things
         | work they become repeatable and thus not-that-intelligent in
         | the eye of many.
         | 
          | And that, again, makes sense considering we are nowhere near
          | anything that has the 'I' in it: it's pattern matching, which
          | is statistics, and it works on a massive scale due to tech/GPU
          | and algorithmic advances, but there is no 'I' there. The most
          | basic mistakes are not correctable by the most advanced stuff
          | we have and, as such, it gets things it is 'bred for' wrong
          | all the time, with sometimes disastrous outcomes.
         | 
         | This stuff should be in labs, not identifying gun shots, faces
         | of criminals, driving cars, identifying 'fraudulent' people and
          | such. Because the 'I' is missing, this is a nightmare and it
         | will get worse.
        
         | darksaints wrote:
         | The description of the Turing test certainly alludes to a test
         | of machine intelligence. So to some degree it is appropriate
         | for AI to have a large umbrella.
         | 
         | My problem with the endless battle to define AI is that there
          | is a hopelessly clueless cohort of people with money who chase
          | the term like it's their golden goose, and therefore anybody
          | who wants funding needs to market their thing as AI.
         | 
         | And I think a lot of researchers cringe at the wanton use of AI
         | because it devalues the work that they're doing. I just give it
         | the shoulder shrug of approval - "I get it, you need funding
         | too". And from that perspective I really wish that tree-based
         | ML methods and logic programming languages and rule engines
         | were still called AI, because they're really cool but horribly
          | neglected because they're not the latest thing.
        
       | bitwize wrote:
       | Today's AI is, as I like to call it, statistics with sexy
       | marketing. A lot of AI programming consists of loading Python
       | modules that correspond to various models, and fooling around
       | with them to see which best fits the data you have. In other
       | words, you're working more like a mathematician experimenting
       | with potential solutions.
       | 
       | There's pressure at my job for architects to "leverage AI". What
       | I always suggest to them is to find a statistician and see if
       | things like neural networks are even necessary before committing
       | to them. Sometimes the problem could best be solved with a
       | heuristic, a rules engine, or a simple statistical model like
       | linear regression, in which case "leveraging AI" is merely hype.
        
       | shoto_io wrote:
       | It's all about definition here. Seth Godin once said:
       | 
       |  _> One common insightful definition of AI: Artificial
       | Intelligence is everything a computer can't do yet. As soon as it
       | can, we call it obvious._
       | 
       | What is AI? Let's define it first.
        
       | winterismute wrote:
       | Title probably meant "AI Pioneer".
        
       | dr_dshiv wrote:
       | The term Artificial Intelligence implicitly makes people design
       | systems that don't involve people--because if people have an
       | information processing role, then it seems like the system isn't
        | artificial yet, and therefore unfinished. That's rather unhealthy.
        
         | webmaven wrote:
         | _> The term Artificial Intelligence implicitly makes people
         | design systems that don't involve people--because if people
         | have an information processing role, then it seems like the
         | system isn't artificial yet, and therefore unfinished. That's
          | rather unhealthy._
         | 
          | Well put, but it doesn't go quite far enough. Because those
          | systems _do_ still involve people, only as objects to be
          | acted upon rather than as actors with agency. Which isn't just
          | unhealthy, it is downright pathological (and often socio- or
          | psycho-pathic).
         | 
         | We've seen this sort of creeping bias before, with terms such
         | as "Content Management System" displacing more human centric
         | terms such as reading, writing (or even authoring), sharing,
         | and publishing. "Content" is just an amorphous mass that is
         | only produced in order to be "managed", poured into various
         | containers, distributed, and delivered in order to be consumed.
        
       | Volker_W wrote:
       | I think we need another word for AI, one that only real
       | programmers and mathematicians know.
       | 
       | That way, we can talk about that stuff without some incompetent
       | journalist or marketing salesman saying stupid stuff.
        
         | gcr wrote:
         | Call deep learning what it is: learnable nonlinear function
         | approximation.
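          | 
          | In one screenful (a numpy sketch of my own, fitting sin(x)
          | with a one-hidden-layer net by gradient descent):
          | 
          |     import numpy as np
          | 
          |     rng = np.random.default_rng(0)
          |     X = rng.uniform(-3, 3, (256, 1)); y = np.sin(X)
          |     W1 = rng.normal(0, 1, (1, 32));  b1 = np.zeros(32)
          |     W2 = rng.normal(0, .1, (32, 1)); b2 = np.zeros(1)
          |     for _ in range(2000):
          |         h = np.tanh(X @ W1 + b1)  # learned nonlinear basis
          |         err = h @ W2 + b2 - y     # approximation error
          |         dh = (err @ W2.T) * (1 - h**2)
          |         W2 -= .05 * h.T @ err / 256; b2 -= .05 * err.mean(0)
          |         W1 -= .05 * X.T @ dh / 256; b1 -= .05 * dh.mean(0)
          |     print(np.mean(err**2))  # small: sin was "learned"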
        
       | dr_dshiv wrote:
       | We could just accept that AI is incredibly pervasive and doesn't
        | require computers (for instance, the autopilot for planes was
        | invented in 1914).
        
       | halotrope wrote:
       | The George Carlin bit [1] about euphemisms comes to mind. While
        | not quite through the lens of marketing hyperbole, it captures
        | really well what it means to dilute the language. The result is
        | a loss of meaning and expressive power. If everything is
        | something, nothing is.
       | 
       | 1: https://youtu.be/vuEQixrBKCc
        
       | rspoerri wrote:
        | Interestingly, as soon as a system works or we understand its
        | principles, we stop using the term "artificial intelligence". It
        | becomes image recognition, autonomous cars, or face detection. We
        | mostly talk about artificial intelligence when we don't really
        | know or understand what we are talking about :-)
        
         | Aicy wrote:
         | Exactly, the same happens with the word "technology".
         | 
          | Are scissors technology? They used to be, but now they are
         | comparatively too well understood and simple. Just like with
         | AI, we label things technology that are on the forefront and
         | not yet well understood and ironed out.
        
           | webmaven wrote:
           | _> Are scissors technology?_
           | 
           | To an anthropologist or archeologist scissors are most
           | definitely a technology. As are basket weaving, flint-knapped
           | tools, speech and writing, etc.
        
           | klyrs wrote:
           | Yes, even simple machines are technology.
        
         | sidpatil wrote:
         | This is known as the _AI effect_.
         | 
         | https://en.wikipedia.org/wiki/AI_effect#AI_applications_beco...
        
         | cgearhart wrote:
         | I think it's more an issue of talking past each other. In order
         | to build a program, you kind of need to specify the required
         | capabilities, and then standard engineering practice is to
         | decompose that into a set of solutions tailored to the problem.
         | But intelligence is not just about a list of capabilities;
         | they're necessary conditions, but not sufficient.
         | 
         | This is what leads to the AI effect conflict. We describe some
         | capabilities that are associated with intelligence, build
         | systems that exceed human performance ability on a narrow range
         | of tasks for those capabilities, and then argue about whether
         | or not that limited set of capabilities on a narrow domain of
         | tasks is sufficient to be called "intelligence".
         | 
         | Recognizing faces, playing chess, and predicting the next few
         | words in my email are all things I'd expect an intelligent
         | agent to be able to do, but I'd also expect to be able to pick
         | up that agent and plop it down in a factory and have it operate
         | a machine; put it in the kitchen and have it wash my dishes; or
         | bring it to work and have it help me with that. We already have
         | machines that help me do all of those things, but none of them
         | really exhibit any intelligence. And any one of those machines
         | getting 10x better at their single narrow domain still won't
         | seem like "intelligence".
        
         | AstralStorm wrote:
          | Not really. The autonomous machine-learning systems designed to
          | solve games with limited observability are properly called AI,
          | unlike, say, game scripts built on fuzzy logic that attempt to
          | fool the user into thinking the machine is actually smart, or
          | ones that work on fully observable games. (So no, AlphaZero is
          | not exactly an AI.)
         | 
         | And for an intelligence, they are still pretty bad at figuring
         | things out.
        
         | 6gvONxR4sf7o wrote:
         | There's more nuance here than is usually given credit. I think
         | it's more that once we understand the principles and the system
         | works, we realize it never really needed AI. Here's how the
         | story goes:
         | 
         | Someone comes along and poses a problem. It seems like an
         | automated solution will require building something like R2-D2.
         | In other words solving it will need AI. Then someone else comes
         | along and finds a way to solve it that looks nothing like
         | R2-D2. Maybe they solve it with a massive dataset and a GLM.
         | Turns out it never really needed AI.
         | 
         | As a field, we're prone to thinking R2-D2 is necessary and
         | sufficient for a task, when it keeps turning out it's a
         | sufficient condition but not a necessary one for so many tasks.
        
         | aspaceman wrote:
         | I look forward to rereading this comment repeatedly over the
         | next decades. What pointless definition chasing.
        
       | tshoaib wrote:
       | Is Michael I. Jordan the Michael Jordan of AI research?
        
       | tibiahurried wrote:
       | My background is in automation and robotics; I studied system
       | identification: a discipline where you would use mathematical
       | means to identify a dynamic system model by observing
       | input/output.
       | 
       | You treat the system as a black box and estimate a set of
       | parameters that can describe it (e.g., Kalman filter).
       | 
        | I struggle to understand what the fundamental difference is
       | between system identification and ML/AI. Anyone?
       | 
       | You ultimately have a bunch of data and try to estimate/fit a
       | model that can describe a particular behavior.
       | 
       | It all comes down to a big optimization/interpolation problem.
       | Isn't what they call "Learning" just really "estimating" ?
       | 
       | Then the more CPU/memory/storage you have, the more
       | parameters/data you can estimate/process, the more accurate and
       | sophisticated the model can be.
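        | 
        | A tiny illustration of the overlap (a numpy sketch of my own):
        | a least-squares fit of y[t] = a*y[t-1] + b*u[t-1] from observed
        | input/output is both classic system identification and, in ML
        | terms, "training" a one-step predictor.
        | 
        |     import numpy as np
        | 
        |     rng = np.random.default_rng(0)
        |     u = rng.normal(size=200)         # input signal
        |     y = np.zeros(200)
        |     for t in range(1, 200):          # true black-box system
        |         y[t] = 0.8*y[t-1] + 0.5*u[t-1] + 0.01*rng.normal()
        | 
        |     Phi = np.column_stack([y[:-1], u[:-1]])  # regressors
        |     theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
        |     print(theta)  # ~[0.8, 0.5]: "estimated" == "learned"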
        
       | [deleted]
        
       | oblak wrote:
       | "The real problem is not whether machines think, but whether men
       | do."
       | 
       | - B.F. Skinner
       | 
       | In all honesty, I learned that from Civilization IV
        
       ___________________________________________________________________
       (page generated 2021-10-21 23:02 UTC)