[HN Gopher] An argument for the impossibility of machine intelli...
       ___________________________________________________________________
        
       An argument for the impossibility of machine intelligence [pdf]
        
       Author : imaurer
       Score  : 73 points
       Date   : 2021-11-20 16:24 UTC (6 hours ago)
        
 (HTM) web link (arxiv.org)
 (TXT) w3m dump (arxiv.org)
        
       | snek_case wrote:
        | The most obvious counter-argument is that the number of things we
        | can do with AI keeps expanding. People were incredulous that
        | computer chess programs could beat humans in the 1980s. Now they
        | can beat us at basically any board game including Go, they do
        | image classification, and we have some early prototypes of
        | self-driving cars.
        | 
        | AI hasn't mastered common-sense reasoning yet. That's likely
        | going to come last, but the range of things AI can understand is
        | only set to expand IMO.
        
         | The_rationalist wrote:
          | Well, I am not defending the paper's thesis, but it's time to
          | realize that we are in a new AI winter where progress has
          | stopped. Sure, we can make accuracy progress on tasks that were
          | under-researched before, and we do make extremely slow accuracy
          | gains on core tasks. But the returns are diminishing fast, to
          | the point that progress in terms of applications has stopped
          | for core AI tasks such as NLU.
          | 
          | However, there is still some hope, as the vast majority of
          | papers bring an innovation but almost never attempt to
          | merge/synergize with other papers' innovations. If human
          | resources were allocated to merging the top 10 papers on a
          | given task, I'm sure it would lead to a major accuracy
          | improvement.
        
         | TheOtherHobbes wrote:
         | Define "understand."
         | 
         | I think you may be confusing automated processing with
         | communicable abstracted insight.
         | 
          | If this isn't obvious, consider the difference between producing
         | an AI that can play chess, producing an AI that learns to play
         | chess, and producing a research program that produces an AI
         | that can play chess and summarises all the resulting
         | developments and insights.
        
           | naasking wrote:
           | It's not clear whether there is a difference in kind between
           | those behaviours rather than merely a difference in degree of
           | complexity.
        
         | ben_w wrote:
          | One thing I've long noticed is that "common sense" is analogous
          | to a stopped clock, in that it's only correct when it happens
          | to also be a different form of reasoning such as deductive,
          | inductive, or abductive reasoning. Things called "common sense"
          | but which are not also a different kind of knowledge are mere
          | cultural shibboleths, and vary from wrong (fan death) to
          | opinion (Shakespeare is good).
          | 
          | The traditional examples of common sense knowledge given when
          | introducing the topic of A.I. are sufficiently imprecise to
          | only be true _given further common sense interpretation_. For
          | example: "things fall when you let go of them" unless they're
          | buoyant, or they fly, or they're already on the ground, or you
          | were in free-fall when you let go -- these exceptions won't
          | really surprise anyone, and yet it's both more compact and more
          | accurate to say ΣF = ma, F_g = G m_1 m_2 / r^2, etc.
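          | 
          | To make that compactness concrete, here is a minimal sketch
          | (Python; the constants are the usual physical values)
          | contrasting the folk rule with the two laws:
          | 
          |     G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2
          | 
          |     def gravitational_force(m1, m2, r):
          |         # F_g = G * m1 * m2 / r^2
          |         return G * m1 * m2 / r**2
          | 
          |     def acceleration(net_force, mass):
          |         # Newton's second law: sum(F) = m * a
          |         return net_force / mass
          | 
          |     # A 1 kg object near Earth's surface:
          |     f = gravitational_force(1.0, 5.972e24, 6.371e6)  # ~9.8 N
          |     print(acceleration(f, 1.0))  # ~9.8 m/s^2, downward
          |     # Buoyancy, lift, or shared free-fall enter as extra
          |     # force terms; the folk rule's exceptions just fall out.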
        
       | dsr_ wrote:
       | This appears to be a series of arguments from incredulity.
       | 
       | In particular, it is equally incredible that intelligent life
       | should evolve from a single-cell organism. But we have that as a
       | counter-argument.
       | 
       | It is entirely reasonable to suspect that none of the current
       | approaches will yield success, but claiming that no machine
       | intelligences can possibly arise is... incredible.
        
         | 9wzYQbTYsAIc wrote:
         | Agreed.
         | 
         | The main claim being made is that "since AI is a logic system,
         | and living humans are complex systems, AI cannot replicate
         | human intelligence".
         | 
         | That claim rests on some unfounded, and implicit, assumptions.
         | In particular, the author assumes that neural networks are not
         | complex systems (and as an even deeper, implicit assumption,
         | that no complex neural network could ever exist).
        
           | hnaccount_rng wrote:
            | No, the assumption is that a logic system cannot be
            | complex... well, 2-SAT would like to have a word with the
            | author, I guess.
        
             | 9wzYQbTYsAIc wrote:
             | Indeed, that does appear to be another of the many
             | assumptions made in the article.
        
             | abetusk wrote:
              | I'm not sure if it was intentional or not, but 2-SAT is
              | solvable in polynomial time [0] whereas 3-SAT is
              | NP-complete [1].
              | 
              | [0]
              | https://en.wikipedia.org/wiki/2-satisfiability#Algorithms
              | 
              | [1]
              | https://en.wikipedia.org/wiki/Boolean_satisfiability_problem
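              | 
              | For the curious, a minimal sketch (Python, illustrative
              | only) of the standard polynomial-time 2-SAT algorithm:
              | build the implication graph, find strongly connected
              | components (Kosaraju's two DFS passes), and read off an
              | assignment:
              | 
              |     def solve_2sat(n, clauses):
              |         # Variables are 1..n; a clause (a, b) means
              |         # (a OR b), with -v standing for NOT v.
              |         def node(v):
              |             return 2 * (abs(v) - 1) + (v > 0)
              |         adj = [[] for _ in range(2 * n)]
              |         radj = [[] for _ in range(2 * n)]
              |         for a, b in clauses:
              |             # (a OR b) = (NOT a -> b) AND (NOT b -> a)
              |             adj[node(-a)].append(node(b))
              |             radj[node(b)].append(node(-a))
              |             adj[node(-b)].append(node(a))
              |             radj[node(a)].append(node(-b))
              |         order, seen = [], [False] * (2 * n)
              |         def dfs1(u):
              |             seen[u] = True
              |             for w in adj[u]:
              |                 if not seen[w]:
              |                     dfs1(w)
              |             order.append(u)
              |         for u in range(2 * n):
              |             if not seen[u]:
              |                 dfs1(u)
              |         comp = [-1] * (2 * n)
              |         def dfs2(u, c):
              |             comp[u] = c
              |             for w in radj[u]:
              |                 if comp[w] == -1:
              |                     dfs2(w, c)
              |         c = 0
              |         for u in reversed(order):
              |             if comp[u] == -1:
              |                 dfs2(u, c)
              |                 c += 1
              |         assign = {}
              |         for v in range(1, n + 1):
              |             if comp[node(v)] == comp[node(-v)]:
              |                 return None  # v and NOT v linked: UNSAT
              |             # True iff the literal's component comes later
              |             # in topological order.
              |             assign[v] = comp[node(v)] > comp[node(-v)]
              |         return assign
              | 
              |     # (x1 or x2) and (not x1 or x2) and (not x1 or not x2)
              |     print(solve_2sat(2, [(1, 2), (-1, 2), (-1, -2)]))
              |     # -> {1: False, 2: True}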
        
         | qsort wrote:
         | Agreed. I am also extremely skeptical of AI, but while the
         | paper does a good job at highlighting the problems with AI, the
         | eventual conclusion is not at all well-supported.
         | 
          | There's a hidden assumption that complex systems cannot be
          | modeled mathematically at all; while that may be true right
          | now, there is no fundamental reason why satisfactory models
          | can't ever be produced.
        
           | pfdietz wrote:
           | Also, the assumption that mathematically modeling a system is
           | necessary for AI.
        
             | sgt101 wrote:
              | There is a question as to whether the systems of
              | mathematics that human cognition can conceive are adequate
              | to represent the processes and mechanisms of human
              | cognition or equivalent systems. Basically: can we write
              | 'ourselves' down, and if we can, can we read what we have
              | written?
        
               | simonh wrote:
               | We've already made computer systems so complex we don't
                | know precisely how they work. AlphaZero or GPT-3, for
                | example.
        
               | NineStarPoint wrote:
               | We don't know how they work exactly, but we do know the
               | mathematics that create them.
               | 
                | The question is whether the systems that generate complex
                | intelligence are too much for humans to create, not just
                | the phenomena that emerge from those systems.
        
       | Traubenfuchs wrote:
        | Should we ever attain hardware, software and an understanding of
        | the human brain good enough to emulate one, we will have done
        | it.
        | 
        | There is absolutely no reason why this shouldn't be possible.
        | Actually, we could already do it if we understood the brain well
        | enough and could model it well enough, even if the emulation
        | might not run in real time.
        
       | Dr_Birdbrain wrote:
       | I hope that it turns out that this paper was written by GPT-3 :)
        
       | erdewit wrote:
        | In the same vein as the arguments that heavier-than-air flying
        | machines are impossible.
        
       | SubiculumCode wrote:
        | And this is why arXiv is not the same as peer review.
        
       | _aavaa_ wrote:
       | I'd like to point the reader's attention to [1].
       | 
       | [1] https://arxiv.org/abs/1703.10987
        
         | a-dub wrote:
         | a breath of fresh air on the topic!
        
         | visarga wrote:
         | That was a good one.
        
         | mcguire wrote:
         | " _In recent years, a number of prominent computer scientists,
         | along with academics in fields such as philosophy and physics,
         | have lent credence to the notion that machines may one day
         | become as large as humans. Many have further argued that
         | machines could even come to exceed human size by a significant
         | margin. However, there are at least seven distinct arguments
         | that preclude this outcome. We show that it is not only
         | implausible that machines will ever exceed human size, but in
         | fact impossible._ "
        
       | R0b0t1 wrote:
       | Yet we are machines...?
       | 
        | Speaking specifically of neural networks as they exist now, the
        | answer is no, because there is no obvious way for them to learn.
        
         | sgt101 wrote:
         | Are machines all computation? Are all the processes of the
         | physical universe computation?
        
           | ChainOfFools wrote:
            | All of these discussions eventually reveal themselves to be
            | special framings of the old Parmenides question about
            | determinism: whether we live in a block universe where choice
            | and change are illusions, and thought and being are the same.
            | I am increasingly convinced he is right, and that arguments
            | such as the one the OP presents (and Searle-ism generally)
            | end up refuting not the existence of artificial intelligence,
            | but intelligence itself. "Artificial" smuggles in a
            | naturalistic fallacy and privileges the dualism hypothesis.
        
           | mcguire wrote:
           | If you are a materialist, yes and yes. If not, all bets are
           | off and there are no rules.
        
             | NineStarPoint wrote:
             | I don't think being a materialist implies that. It's
             | entirely possible for matter/the fabric of the universe to
             | have non-computable properties.
        
               | R0b0t1 wrote:
               | Computers exist inside the universe so the universe must
               | be able to compute things. Likewise you can look for
               | certain hallmarks of information manipulation that mean
               | you are computing something.
               | 
               | Usually philosophers talking about these things either
               | haven't read or are just discovering complexity theory.
        
       | go_elmo wrote:
        | The Turing machine was designed by imagining a human operator.
        | Our mind also has only finite state, and no matter whether
        | quantum effects are involved, the information in it is always
        | finite, describable in a finite state. Thus, Turing machines are
        | capable of doing exactly what we do with information. This
        | argument is incredible.
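        | 
        | As a reference point, a Turing machine itself is a tiny finite
        | object; here is a minimal simulator sketch (Python, running a
        | made-up machine that flips bits):
        | 
        |     def run_tm(tape, rules, state="start", halt="halt"):
        |         # rules: (state, symbol) -> (new state, write, move),
        |         # with move being -1 (left) or +1 (right).
        |         cells, pos = dict(enumerate(tape)), 0
        |         while state != halt:
        |             sym = cells.get(pos, "_")  # "_" = blank
        |             state, write, move = rules[(state, sym)]
        |             cells[pos] = write
        |             pos += move
        |         return "".join(cells[i] for i in sorted(cells))
        | 
        |     # A machine that inverts every bit, then halts on blank.
        |     rules = {
        |         ("start", "0"): ("start", "1", +1),
        |         ("start", "1"): ("start", "0", +1),
        |         ("start", "_"): ("halt", "_", +1),
        |     }
        |     print(run_tm("0110", rules))  # -> 1001_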
        
         | Isinlor wrote:
          | This is Russell's teapot. You cannot prove that infinite
          | states do not exist. In fact, common models of physics assume
          | the possibility of an infinite number of states by depending
          | on formalisms that axiomatically assume infinite sets (e.g.
          | the axiom of infinity in Zermelo-Fraenkel set theory).
        
       | visarga wrote:
       | What a funny a priori paper. Maybe the authors lost a bet and had
       | to write it.
        
       | natch wrote:
       | "The authors declare that they have no conflict of interest."
       | 
       | "Department of Philosophy"
       | 
       | hmm
        
       | doganulus wrote:
        | Their premises about logical systems are wrong, so their
        | conclusion is not valid. In short: of course there are logical
        | systems with a potentially infinite state space. For example, a
        | Turing machine. A digital circuit is no different. Turing
        | completeness is abundant; it is everywhere.
        
       | jmull wrote:
       | The paper is full-on nonsense. I'm surprised someone wasted their
       | time writing it and you probably shouldn't waste your time
       | reading it.
       | 
        | In the part I read, it claims we can't develop AI because we
        | can't accurately model full reality. There's no argument for
        | what the connection is; it's just stated.
        | 
        | Kind of obviously, if we assume engaging with reality is
        | necessary to develop intelligence, an artificial intelligence
        | could do so the same way we non-artificial ones do, right?
        
         | cscurmudgeon wrote:
         | You are getting it backwards.
         | 
          | There are two flimsy arguments for machine intelligence, from
          | Hunter and Brooks. The paper is poking holes in those.
        
           | mcguire wrote:
           | It's not doing a very good job.
        
             | cscurmudgeon wrote:
             | It is doing a job as good as the original arguments.
        
         | User23 wrote:
         | It's not obvious at all. The following isn't a proof and if I
         | had one I'd be publishing elsewhere. However, I believe it
         | suffices to show the lack of obviousness.
         | 
         | All of our models of reality are restricted to computable
         | functions. However, we know that uncomputable functions not
         | only exist, but that nearly all functions are in fact
         | uncomputable. Therefore, it's well within the realm of
         | plausibility that the actual behavior of the universe is
         | governed by uncomputable functions, and we are forever stuck
         | modeling those behaviors with computable approximations.
         | 
          | To claim that reality is entirely computable, one has to show
          | how uncomputable functions can exist therein. I wouldn't call
          | this a proof, but it strongly suggests to me that the behavior
          | of the system we call reality is uncomputable, and that
          | subsystems thereof may not be computable either. If what we
          | call human intelligence is one of those uncomputable
          | subsystems, then it's true that computational AI will never
          | achieve it. Nonetheless, we've gotten pretty far with
          | computable approximations, so machine "intelligence" that's
          | close enough for practical purposes doesn't strike me as
          | impossible even if we inhabit an uncomputable reality.
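          | 
          | (The "nearly all functions are uncomputable" step above is the
          | standard counting argument; a sketch in LaTeX notation:
          | 
          |     |\{\text{programs}\}| \le |\Sigma^*| = \aleph_0
          |     \quad\text{but}\quad
          |     |\{f : \mathbb{N}\to\{0,1\}\}| = 2^{\aleph_0} > \aleph_0,
          | 
          | since given any enumeration f_1, f_2, ... the diagonal
          | function g(n) = 1 - f_n(n) differs from every f_n. Programs
          | are finite strings, so all but countably many functions have
          | no program computing them.)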
        
           | jmull wrote:
            | You're making the contradictory claims that human
            | intelligence is incomputable and that AI is limited to
            | something computable (all by a certain definition of
            | computable).
           | 
           | You need to at least try to propose something that explains
           | the premise and the contradiction.
           | 
           | Does human intelligence arise out of processes within the
           | human brain? If not, then how else? If yes, then why are
           | those processes somehow out of reach of human science to
           | investigate and manipulate?
           | 
           | How can intentional human actions be limited to a certain
           | definition of computability while human intelligence is not?
        
           | simonh wrote:
           | I don't see why we have to compute all of reality to create
           | an AI anyway. I'm intelligent, to a point, and I'm pretty
           | sure I don't compute all reality. My neurological processes
           | simply model enough of reality for me to function more or
           | less effectively, that's all.
           | 
           | The article works from the absurd premise that an AI would
           | have to perfectly model its environment, but no living
           | creatures do this. It also decides that we can't create
           | general AI because we don't know how to do it. Therefore it's
           | impossible. Seriously, it's right there in the conclusion.
        
             | ffwd wrote:
              | One thing I think is interesting about the paper is how
              | complex systems (like the brain) change over _time_ rather
              | than at fixed points, and whether those changes are
              | computable in any meaningful way.
             | 
              | Like, if we had created an AI 100 years ago, could that AI
              | 100 years later learn how to use an iPad, or understand
              | what Twitter is, or what a meme is? What if the brain
              | changes driven by cultural (environmental) change are so
              | complex that creating mathematical models which would
              | change the AI's intelligence the way the brain changes is
              | impossible? Think of physical changes in the brain's
              | circuits that are so distributed, interconnected,
              | complicated and subtle, yet so specifically "tailored" to
              | the complex system, as to be virtually impossible to
              | abstract or model in any way, and that thereby change the
              | "mathematical model" of the brain that is sort of
              | "virtualized" at a fixed point in time.
             | 
              | Edit: Well, to put it a little more explicitly: what if the
              | real reason brains are intelligent is not the brain alone,
              | but also the underlying physical systems like molecules,
              | maybe even going all the way down to quantum mechanics,
              | and those lower levels cause changes over time that
              | fundamentally alter the function of the brain while still
              | carrying the evolutionary potential of the lower-level
              | physical stuff?
             | 
              | Say you have 2 levels: 1) the brain, and 2) the underlying
              | physical stuff below neurons.
              | 
              | 1 is a virtualized fixed point in time that we can model,
              | and 2 is part of a complex system that alters 1, but
              | importantly in a way that cannot be computed without
              | simulating that stuff at the lower level. I feel like this
              | is sort of implied in the article, because either
              | intelligence can be abstracted completely accurately, or
              | there will (as the paper claims) always be lower-level
              | physical changes that alter the intelligence in a way that
              | cannot be computed at the brain/intelligence level. I don't
              | know if this is true though, tbh.
        
             | checkyoursudo wrote:
             | ^ This is really the point. Human intelligence is based on
             | limited and filtered input, rough analog approximations in
             | processing, and incomplete and interpolated mental
             | representations and internal simulations, and yet nobody
             | seriously denies that we possess some degree of
             | intelligence. I am skeptical that current methods will get
             | us to AGI, but the idea that machines must achieve some
             | level of computational perfection far above and beyond
             | humans is not reasonable.
        
            | mcguire wrote:
            | " _Therefore, it's well within the realm of plausibility
            | that the actual behavior of the universe is governed by
            | uncomputable functions..._"
           | 
           | It's also well within the realm of plausibility that the
           | behavior of the universe is governed by invisible, intangible
           | unicorns. :-)
           | 
            |  _If_ you can provide an example of something in the universe
            | actually computing a theoretically uncomputable function,
            | then there is a gigantic problem somewhere and everyone is
            | going to have to do some re-thinking. _If._
        
             | User23 wrote:
             | By definition nothing will ever compute an uncomputable
             | function. You're only stating a trivial truth that by
             | itself fails to refute the conjecture that we live in a
             | reality that has behaviors that cannot be accurately
             | described by computable functions and that we are forever
             | stuck with merely approximate computable models.
             | 
             | In fact, so far we're not able to completely
             | computationally predict any behavior of reality. Even the
             | marvelous theory of quantum electrodynamics is only shown
             | to be accurate to, last I knew, about a dozen places.
             | 
              | Due to an unfortunately widespread misunderstanding of the
              | Church-Turing thesis, far too many otherwise intelligent
              | persons with some CS knowledge are completely blinkered to
              | the possibility that the universe could have behaviors that
              | are real, but not describable by computable functions
              | except approximately. The practicing laboratory scientists
              | I've spoken with don't generally share that defect, since
              | they're used to everything being approximate.
        
           | ericjang wrote:
           | Whether certain behaviors in the universe are uncomputable or
           | not is irrelevant. If we classify "humans" as intelligent and
           | assume no spooky immaterial aspects of consciousness, you
           | already have an existence proof that intelligence is
           | computable.
        
           | naasking wrote:
           | > Therefore, it's well within the realm of plausibility that
           | the actual behavior of the universe is governed by
           | uncomputable functions, and we are forever stuck modeling
           | those behaviors with computable approximations.
           | 
            | Even if the set of uncomputable functions outnumbers the set
            | of computable functions, I still don't see how your
            | conclusion follows. The rules that govern a coherent universe
            | are not randomly sampled from the set of all functions.
        
           | Tzt wrote:
            | Why not make AI on the same platform as the human brain? What
            | is so exceptional about it, and even if it is an exceptional
            | material, why not just use it?
        
         | amelius wrote:
          | Is it though? A creature living in a 4-dimensional world might
          | argue that an AI that was brought up in a 3-dimensional world
          | would never truly grasp 4 dimensions. This could be true. So
          | why wouldn't it hold for other aspects where the model is
          | incomplete?
        
         | dane-pgp wrote:
         | I agree that it is nonsense. To save people the click, here,
         | for example, is how the paper argues that a software system
         | couldn't gain intelligence by simulating an evolutionary
         | process:
         | 
         | "But we neither know how to engineer the drive that is built
         | into all animate complex systems, nor do we know how to mimic
         | evolutionary pressure, which we do not understand and cannot
         | model (outside highly artificial conditions such as a Petri
         | dish). In fact, if we already knew how to emulate evolution, we
         | would in any case not need to do this in order to create
         | intelligent life, because the complexity level of intelligent
         | life is lower than that of evolution."
        
           | simonh wrote:
           | They wrote another paper on this topic, the summary of which
           | is that an AI capable of human level conversations is
           | impossible because:
           | 
           | "This is (1) because there are no traditional explicitly
           | designed mathematical models that could be used as a starting
           | point for creating such programs; and (2) because even the
           | sorts of automated models generated by using machine
           | learning, which have been used successfully in areas such as
           | machine translation, cannot be extended to cope with human
           | dialogue. If this is so, then we can conclude that a Turing
           | machine also cannot possess AGI, because it fails to fulfil a
           | necessary condition thereof."
           | 
           | https://arxiv.org/abs/1906.05833
           | 
           | In other words it can't ever be done because we haven't done
           | it yet. QED. How stuff like this gets to come out of U
           | Buffalo is beyond me. At first I suspected it might have come
           | out of a religious think tank, but no.
        
             | The_rationalist wrote:
              | What is the mathematical model used for designing Linux, or
              | Chromium, or any kind of hyper-complex software? None;
              | there is no need for a specific mathematical model in
              | general, and it's very cringe to claim otherwise. Sure,
              | mathematical models can help in the specific and in the
              | general, but there is no set of open problems in
              | mathematics that leads to the impossibility of making AGI.
              | Sure, P != NP, but it would be ridiculous to think that
              | brains can bypass algorithmic complexity. The real problem
              | is that we haven't found the winning lottery-ticket
              | software, and that software, spoiler alert, is not at all
              | made only of neural networks.
        
             | Stupulous wrote:
             | >If this is so, then we can conclude that a Turing machine
             | also cannot possess AGI, because it fails to fulfil a
             | necessary condition thereof.
             | 
             | If your conclusion implies the existence of computation
             | beyond Turing machines, you should probably find an example
             | or check your assumptions.
        
               | simonh wrote:
               | My conclusion or theirs? I'm not sure what you mean.
        
               | User23 wrote:
               | By definition the computable functions are those that can
               | be computed by a Turing machine or equivalent apparatus,
               | so talking about "computation beyond Turing machines" is
               | incoherent.
               | 
               | However, uncomputable functions not only exist, but are
               | the overwhelming majority of functions. This suggests
               | that at least some aspects of reality might only be
               | accurately described by uncomputable functions. How could
               | uncomputable functions exist in an entirely computable
               | reality?
        
               | CrazyStat wrote:
               | Do uncomputable functions actually "exist" in reality in
               | any meaningful sense, or are they just mathematical
               | abstractions?
        
               | Isinlor wrote:
                | You are asking a question that does not have an answer,
                | because "exist" is not well-defined. But if you want to
                | know whether an arbitrary JavaScript program in your
                | browser will stop or not, well, you cannot know that for
                | all scripts people can throw at you. In this sense,
                | uncomputable functions do exist.
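                | 
                | The standard proof of that is a short diagonalization;
                | here is a sketch (Python, with a hypothetical halts()
                | oracle that the argument shows cannot exist):
                | 
                |     def halts(program_source, input_data):
                |         """Hypothetical oracle claimed to decide
                |         whether a program halts on an input."""
                |         ...
                | 
                |     def diagonal(program_source):
                |         # Do the opposite of whatever the oracle
                |         # predicts about a program fed its own source.
                |         if halts(program_source, program_source):
                |             while True:
                |                 pass  # loop forever
                | 
                |     # Feed diagonal its own source: either answer from
                |     # halts() is contradicted, so no total, correct
                |     # halts() can exist.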
               | 
                | BTW - the field trying to define what "exists" means is
                | called ontology:
               | 
               | https://en.wikipedia.org/wiki/Ontology
        
               | doganulus wrote:
                | You need to start with the assumption that an infinite
                | set exists (the Axiom of Infinity) to get uncomputable
                | functions (and other weird mathematical objects).
               | 
               | Such concepts cannot appear in a finite universe.
        
               | mcguire wrote:
                | Turing designed his machines to capture "computation" as
                | it is intuitively understood: as something a
                | mathematician, equipped with an unlimited supply of
                | scratch paper and pencils, could do. (The other
                | apparatuses are provably equivalent, but it is harder to
                | argue that they capture the correct intuition.)
                | 
                | If you can show something a mathematician can do to
                | compute a function that cannot be emulated by a Turing
                | machine, then you have demonstrated that Turing's
                | definition does not capture the intuition, and we get to
                | start over with the theory of computation. So far, no one
                | has been able to do that.
               | 
               | The existence of uncomputable functions is not itself a
               | problem. It only becomes so if you can show that
               | something computes them.
        
               | User23 wrote:
               | > It only becomes so if you can show that something
               | computes them.
               | 
               | By definition nothing will ever compute an uncomputable
               | function. That's completely irrelevant to what I wrote.
        
               | Isinlor wrote:
                | Not if the definitions of "computing" and "uncomputable"
                | come from two different models of computation. For
                | example, there are functions uncomputable by a finite
                | state machine that can nevertheless be computed by a
                | Turing machine.
                | 
                | One can imagine that someone will find a model of
                | computation that allows computing more functions than a
                | Turing machine can. It's extremely unlikely, but nobody
                | has proven that it is impossible. It may be that our
                | universe is one such model.
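                | 
                | A concrete instance of that gap (illustrative sketch,
                | Python): the language a^n b^n cannot be recognized by
                | any finite state machine (pumping lemma), but one
                | unbounded counter, which a Turing machine has room for,
                | handles it easily:
                | 
                |     def is_anbn(s):
                |         # An FSM has finitely many states, so it cannot
                |         # count arbitrarily high; unbounded memory can.
                |         i, count = 0, 0
                |         while i < len(s) and s[i] == "a":
                |             count += 1
                |             i += 1
                |         while i < len(s) and s[i] == "b":
                |             count -= 1
                |             i += 1
                |         return i == len(s) and count == 0
                | 
                |     print(is_anbn("aaabbb"), is_anbn("aabbb"))
                |     # -> True False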
        
             | jmull wrote:
             | I'm still thinking there may be a religious angle to this
              | somewhere. It's so full of unquestioned, unstated
              | assumptions and pseudo-analysis.
        
               | MetricExpansion wrote:
                | I think any argument against the possibility of
                | developing AGI is going to have one. The argument has to
               | be either:
               | 
               | 1) there's something non-material about human
               | intelligence (basically, there's a soul), or 2) something
               | about the processes that created a completely material
               | human intelligence is impossible _in principle_ to
               | reproduce, either implicitly or explicitly.
               | 
               | (1) has the obvious religious angle, but (2) tends to be
                | what's trotted out when (1) is too overtly religious.
               | 
               | With (2), the usual supporting reason is that the
               | conditions are too complex. The problem is that the
               | fundamental rules are just those of physics, which are
               | "simple". And we have to remember that the initial
               | conditions of the universe were also "simple" and not
               | intelligently set up in a way that could be predicted to
               | create intelligence. It was just a bunch of initially
               | formless matter evolving over time.
               | 
                | By closing the door on even implicit use of physics
                | (which created our own intelligence, and which we don't
                | understand well enough to rule out), there's the feeling
                | that some kind of magic dust has to be part of the
                | process or the initial conditions. That would
               | disagree with our current understanding of the laws of
               | physics and early development of the universe.
               | 
               | Ultimately, the real motivation is the desire to maintain
               | the feeling that humans are somehow "special" in the
               | universe.
        
       | mcguire wrote:
       | " _Though the infinitesimal definition of utility in (1) and the
       | penalisation of complexity in the definition of U provide a
       | statistically robust measure of the kind of surrogate
       | intelligence those working in the general artificial intelligence
       | (AGI) field have decided to focus on, the definition is too weak
       | to describe or specify the behaviour even of an arthropod. This
       | is not only obvious from the issues already mentioned above, but
       | also from the fact that algorithms which realise the reward-
       | schemes proposed in (1) and (2) (for example, neural networks
       | optimised with reinforcement learning) fail to display the type
       | of generalisable adaptive behaviour to natural environments that
       | arthropods are capable of, for example when ants or termites
       | colonise a house._ "
       | 
        | Ok, I don't like the mathematical definitions of intelligence
        | either (although I might be persuadable, and they do have some
        | advantages over other definitions I've seen), but this refutation
        | seems to be a prime example of proof-by-assertion.
       | 
       | " _Brooks defines an AI agent, again, as an artefact that is able
       | 'to move around in dynamic environments, sensing the surroundings
       | to a degree sufficient to achieve the necessary maintenance of
        | life and reproduction'._ "
       | 
       | And this definition implies many things we know to be intelligent
        | (e.g. people) are not. So there's that.
       | 
       | " _There are three additional properties of logic systems of
       | importance for our argument here: 1. Their phase space is fixed.
       | 2. Their behaviour is ergodic with regard to their main
       | functional properties. 3. Their behavior is to a large extent
       | context-independent._ "
       | 
        | Aaaaand here we go...
        | 
        | " _As we learn from the standard mathematical theory of complex
        | systems [23], all such systems, including the systems of complex
        | systems resulting from their interaction, 1. have a variable
        | phase space, 2. are non-ergodic, and 3. are context-dependent._ "
       | 
       | Ok, to the extent that the first statement is true about "logic
       | systems", it is also true about any physically realizable,
       | material system. On the other hand, the "complex system", to that
        | same extent, is _not_ physically realizable. (Consider "a
       | variable phase space means that the variables which define the
       | elements of a complex system can change over time" or "a non-
       | ergodic system produces erratic distributions of its elements. No
       | matter how long the system is observed, no laws can be deduced
       | from observing its elements." and question _how much information_
        | is required for this in the authors' sense.)
       | 
       | And there we have the intrusion of the immortal soul into the
       | argument that artificial intelligence is impossible.
        
       | tehchromic wrote:
        | This is not likely to be a popular opinion with technologists,
        | as AI's potential has lit the technopopular imagination, but
        | this question has bothered me for a long time. I think strong
        | emergent AI suffers from philosophical problems that won't go
        | away, and to the extent that the conversation revolves around
        | evolution and consciousness rather than logic and intelligence,
        | we are having the right conversation.
       | 
       | I'll put my argument out there and let the flames come as they
       | will.
       | 
       | Strong AI is about as likely to emerge from our current state of
       | the art AI machinery as it is to emerge suddenly out of moon
       | rocks. That's to say the fear of machines becoming self-conscious
       | and posing an existential threat to us, especially replacing us
        | in the evolutionary sense, is completely unfounded.
       | 
       | This isn't to say that building machines capable of doing exactly
       | that isn't possible - we and all living things are proof that
       | it's possible - it's to say that achieving this level of
       | engineering is on par with intergalactic mass transit or Dyson
        | spheres - way out of our league for the foreseeable future.
        | And, even if we had the technology, it would be so entirely
        | foolish to undertake that no sentient species would do it.
       | 
        | That said, there's a substantial argument to make that we will
        | augment ourselves with our own machinery so thoroughly that we
        | will become unrecognizable and, in effect, accomplish the same
        | task through merging with the machine. This is likely, but not
        | at all like the singularity scenario in which all of humanity is
        | suddenly arrested and deposed by autonomous AI.
       | 
        | An interesting scenario in this vein: if a few powerful
        | individuals could wield autonomous systems, modify themselves and
        | simply wipe out all the competition, then in effect the rest of
        | us wouldn't know the difference. This outcome is actually, I
        | think, on the more likely side, albeit a good ways off in the
        | future.
       | 
        | Less likely, but still totally legitimate as a concern, is the
        | idea that AI could be very easily weaponized. This is a real
        | problem and is, I think, behind the more substantive warnings by
        | good thinkers on the topic. As with bioweapons, we might be
        | wiped out by a machine that's been intentionally programmed and
        | mechanically empowered to cause real harm. This kind of danger
        | could also be emergent, in that a machine might be capable of
        | deciding that it ought to take certain actions as well as have
        | the capacity to take them, and then, voila, mass murder.
       | 
        | However, it seems unlikely that such a mistake would be made, or
        | that a bad actor would be capable of committing such an
        | intentional crime. I think this is on par with nuclear MAD: even
        | total madman dictators hit pause on the push-the-button
        | instinct. And an AI MAD or similar would surely take as many
        | resources to produce as a nuclear arsenal. In other words, the
        | resources required to build such machinery are on the order of a
        | nation-state's, and perhaps more complicated to marshal than a
        | nuclear arsenal, so the attempt is probably more likely to be
        | stopped or to fail in process than to succeed.
       | 
        | So there are dangers from AI, but I would say they are lesser
        | than the accumulated danger of industrial society rendering the
        | planet uninhabitable, which should of course occupy our primary
        | concern these days.
       | 
        | The idea that the biological evolutionary 'machine', whose
        | motive for existence has accumulated over billions of years of
        | entropic adaptation, can be out-engineered or accidentally
        | replicated by modern computational AI is silly - the two aren't
        | in the same league, and it's hubris to suppose otherwise.
        | There's more intelligence in the toe of a ladybug than in all
        | the computing power ever made.
       | 
       | In sum the danger from emergent AI is overstated, however the
       | concern is most welcome to the extent that it informs wisdom and
       | care in consideration for our techno-industrial impact on the
       | biosphere.
        
       | a-dub wrote:
       | > But we neither know how to engineer the drive that is built
       | into all animate complex systems, nor do we know how to mimic
        | evolutionary pressure, which we do not understand and cannot
       | model (outside highly artificial conditions such as a Petri
       | dish). In fact, if we already knew how to emulate evolution, we
       | would in any case not need to do this in order to create
       | intelligent life, because the complexity level of intelligent
       | life is lower than that of evolution. This means that emulating
       | intelligence would be much easier than emulating evolution en
       | bloc. Chalmers is, therefore, wrong. We cannot engineer the
       | conditions for a spontaneous evolution of intelligence.
       | 
       | this is the thing i've always sort of loved about philosophy.
       | they just kinda make shit up, provide their own definitions that
       | are rooted in a bamboozling by use of flowery language, and then
       | once they've stated all their definitions with their conclusions
       | baked in, they hop, skip and jump down the path which now
       | obviously leads to the conclusion they started with.
       | 
       | it's kind of like a form of mathematics where they define their
       | own first principles in each argument with the express purpose of
       | trying to build the most beautiful path to their conclusions. it
       | really is a beautiful form of art, like architecture for ideas.
        
         | Rd6n6 wrote:
         | You can't pick your favourite bad argument and ridicule an
         | entire field. You are incidentally using ideas from several
         | different old, influential philosophies to even formulate
          | your comment.
         | 
         | Philosophy includes questions like "how do we decide whether
         | something is true or trustworthy," or "what constitutes a good
         | or a bad way to make a case for something." If you're going to
          | throw philosophy out, you can't question anything any more.
        
           | imbnwa wrote:
           | Science used to be called 'natural philosophy'
        
           | threatofrain wrote:
           | Philosophy may refer to the specific branch in academia and
           | its current practice, as opposed to _any_ philosophical
           | inquiry. Every field already pursues their own philosophical
           | inquiry, and yet philosophers and mathematicians are in
           | separate departments. Such is the current practice and
           | organization of academics.
           | 
           | If we were to consider mathematics and computer science as
           | part of philosophy, then we might say that as a mode of
           | inquiry, philosophy has had great success in achieving
           | multidisciplinary consensus and international impact. But if
           | we were to consider philosophy as a specific branch of
           | academic organization, then we might be disappointed at the
           | fruits emerging from that field.
        
             | a-dub wrote:
             | to bring it full circle and tear apart the original
             | argument above, one could argue that the relatively simple
             | laws of logic from philosophy give rise to all of digital
             | computing (like... what is directly expressed in digital
             | logic design). yet the emergent complexity of all
             | computerdom is far beyond the complexity of the basic rules
             | of logic coming from philosophy.
             | 
             | more to the point here, computer science and mathematics
             | are very similar to philosophy in that authors invent a set
             | of abstractions and then construct rules for how they
             | interact in a self-consistent manner.
        
               | threatofrain wrote:
               | Relating computer science and mathematics to philosophy
               | is already a fair and understandable argument. Yet, as
               | _separate divisions_ of academic professional labor, why
               | do the fruits they bear look so different in terms of
               | their ability to generate frameworks for multi-
               | disciplinary consensus and international impact?
        
           | a-dub wrote:
           | fair, it's the first step towards trying to construct
           | formalism around reasoning... but then it often jumps off
           | into the abstract based on synthetic premises... that's when
           | it makes the leap to art.
           | 
           | and who said anything about ridicule? art is important!
           | perceptive exercises and exploration of ideas strengthen our
           | skills for reasoning.
        
         | [deleted]
        
         | Atreiden wrote:
         | Funny, that's the exact reason I hate philosophy. And I say
         | this as someone with a BA in it.
         | 
          | I thought of it initially as a useful way to model the
          | abstract, the hypothetical, and the integrity of our own ideas
          | and perceptions.
         | 
         | But so many philosophers tried to use their arguments to prove
         | things about the world. Like a less powerful form of economics,
         | which itself is based on the "if we model X this way, Y"
         | mindset.
         | 
          | I like your conceptualization of philosophy as art. I'll
          | probably refer to it that way from here on.
        
           | mistermann wrote:
           | I think of it a bit like science vs _scientific thinking_
            | (that isn't constrained to actual science and logic) that
           | one encounters on the internet. The problem is not with
           | philosophy, it is with humans.
        
         | dandotway wrote:
         | The excerpt you quoted is perfectly meaningful. They are saying
         | an evolutionary process that produces intelligent life is more
         | complicated ("has more moving parts") than the intelligent life
         | forms thus produced. How could this not be true? An intelligent
         | life form may have hundreds of billions of neurons and
         | trillions of cells, but the evolution that produced said life
         | form involved untold zillions of complex life forms over
          | billions of years. There are over 10^30 microbes on planet
          | Earth right now, evolving in ways currently beyond the
          | understanding of any computational biologist.
         | 
         | Although computer scientists can use genetic algorithms
         | inspired by evolution to "breed" better backgammon algorithms,
         | this is quite a few orders of magnitude simpler than emulating
         | a true evolution of intelligent biological life.
         | 
         | The point is intelligent biological life forms are less
         | complicated than the "factory" that produced them.
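          | 
          | (For reference, the kind of genetic algorithm meant here is
          | tiny; a toy sketch in Python, evolving bit strings toward a
          | made-up fitness target:
          | 
          |     import random
          | 
          |     def fitness(bits):
          |         return sum(bits)  # toy target: all ones
          | 
          |     pop = [[random.randint(0, 1) for _ in range(20)]
          |            for _ in range(30)]
          |     for gen in range(50):
          |         pop.sort(key=fitness, reverse=True)
          |         survivors = pop[:10]           # selection
          |         children = []
          |         for _ in range(20):
          |             a, b = random.sample(survivors, 2)
          |             cut = random.randrange(20)
          |             child = a[:cut] + b[cut:]  # crossover
          |             i = random.randrange(20)
          |             child[i] ^= 1              # mutation
          |             children.append(child)
          |         pop = survivors + children
          |     print(fitness(pop[0]))  # ~20 after 50 generations
          | 
          | The gap between this and emulating open-ended biological
          | evolution is exactly the point being made.)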
        
           | MathYouF wrote:
            | The main thesis of Stephen Wolfram's life's work is the idea
            | that simple processes can produce greater complexity than
            | the processes themselves.
            | 
            | https://www.wolframalpha.com/examples/science-and-technology...
        
             | anthk wrote:
              | That thesis has been criticized as being "non-scientific".
              | 
              | It may look like an advertisement for Mathematica.
        
             | minihat wrote:
             | It seems problematic to disentangle the complexity of an
             | entity from the complexity of the process which produced
             | it. If we define complexity as the Kolmogorov complexity,
              | the two are equivalent
              | (https://en.wikipedia.org/wiki/Kolmogorov_complexity)
             | 
             | Rather, I interpret Wolfram's idea as:
             | 
             | Surprisingly complex patterns can be produced by
             | simple/concise rules.
             | 
             | In my interpretation, the ultimate example of this would be
             | the unfolding of everything that has ever happened as the
             | consequence of the laws of physics, and some initial
             | condition of the universe.
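              | 
              | Wolfram's stock example is Rule 30, an elementary cellular
              | automaton whose update rule fits on one line yet produces
              | an aperiodic, random-looking pattern; a minimal sketch
              | (Python, illustrative only):
              | 
              |     def rule30_step(cells):
              |         # New cell = left XOR (center OR right).
              |         n = len(cells)
              |         return [cells[(i - 1) % n]
              |                 ^ (cells[i] | cells[(i + 1) % n])
              |                 for i in range(n)]
              | 
              |     cells = [0] * 31
              |     cells[15] = 1  # single live cell in the middle
              |     for _ in range(16):
              |         print("".join(".#"[c] for c in cells))
              |         cells = rule30_step(cells)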
        
           | a-dub wrote:
           | > They are saying an evolutionary process that produces
           | intelligent life is more complicated ("has more moving
           | parts") than the intelligent life forms thus produced. How
           | could this not be true?
           | 
           | because we don't know enough about evolutionary processes nor
           | intelligent life to make statements like that, and "more
           | complicated" is completely ill-defined.
           | 
           | how do we know that evolution isn't simply a few basic rules,
           | a lot of randomness and a lot of time?
        
             | _vertigo wrote:
             | Yup, you nailed it. We have a lot of examples of complexity
             | arising from a very simple set of initial conditions and
             | rules. Why should evolution be any different?
        
               | a-dub wrote:
               | i suppose, giving the authors the benefit of the doubt,
               | one could make a statement like:
               | 
               | if one were to parameterize an entire line of evolution
               | over time, and one were to parameterize a single
               | intelligent being over time, then it is likely that the
               | number of bits required to describe that evolutionary
               | line (and the space of all evolutionary lines) is greater
               | than the number of bits required to completely describe a
               | single intelligent life form over time.
               | 
               | this still tells us nothing about the rules behind
               | evolution, how an intelligence actually works, how
               | evolution actually works and what would be necessary to
               | manifest an intelligence.
        
           | dnautics wrote:
            | It begs the question though. The argument is: assume you
            | can't make artificial intelligence without the stimuli of
            | the real world's processes; since a single human is less
            | complicated than the real world, you therefore can't make
            | artificial intelligence.
        
           | TrainedMonkey wrote:
           | I think the devil is in one important detail, namely - how do
           | we define complexity?
           | 
           | Let's define the problem as Evolution(inputs) = Intelligence.
           | The claim is that complexity(function) + complexity(inputs) >
           | complexity(outputs). Now to show that the parent claim is not
           | necessarily true (which is not the same as proving that it is
           | false), we just need to show that there exists a combination
           | of complexity function and a system that does not satisfy the
           | above constraints.
           | 
            | 1. Let's examine information compressibility as a complexity
            | function. There are a few examples where a simple set of
            | rules and inputs can produce basically an infinite stream of
            | incompressible information. Examples include Conway's Game
            | of Life, the double pendulum, fractals, all irrational
            | numbers, etc...
           | 
            | 2. Now to tie that back to evolution: the authors avoid
            | defining evolution or its inputs, which means they could be
            | quite simple yet produce mind-boggling complexity. Therefore
            | the argument that evolution must be more complex than
            | intelligent life is backwards (if you buy my complexity
            | definition anyhow :P).
           | 
            | 3. Of course, this kind of breaks down if we discretize
            | evolution, because at that point all of the existing life is
            | an input into the evolution. So complexity(evolution) +
            | complexity(life[t] + environment[t]) > complexity(life[t+1])
            | is obviously true for some t and t + 1. For example, if t is
            | right before a mass extinction event and t+1 is right after.
           | 
            | This is somewhat unrelated, but I am quite partial to the
            | theory that life in general, and intelligence in particular,
            | is driven by entropy. Or, maybe less confusingly (because
            | who the hell knows what entropy is), driven by the macro
            | tendency of everything toward lowest-energy states. Life in
            | this case is smart matter that bridges the activation-energy
            | gap to extract available energy gradients as fast as
            | possible. Here is the concept explained by people who put a
            | lot more thought into it:
            | https://www.quantamagazine.org/a-new-thermodynamics-theory-o...
        
             | mistermann wrote:
             | Would the notion of fractals not seem like a plausible
             | component of smart matter?
        
           | alanbernstein wrote:
           | > They are saying an evolutionary process that produces
           | intelligent life is more complicated ("has more moving
           | parts") than the intelligent life forms thus produced. How
           | could this not be true?
           | 
           | The whole field of emergent complexity exists to answer
           | questions about this. Questions which only exist because
           | there are many situations where "this" is evidently not true.
        
           | mannykannot wrote:
           | You are right that the quoted passage is not without meaning;
           | what is missing here (and by "here", I mean the whole paper)
           | is any remotely good argument from the relatively trivial
           | factual claim in this passage, to the conclusion that true
           | artificial intelligence is impossible.
           | 
           | The factual claim is about the _history_ of evolution: if we
           | take that history to include everything produced during that
           | history, then it is trivially true that the whole is greater
           | than any subset of the things it produced - but so what? It
           | is true for the creation of a microprocessor as well. There
           | is no argument here that rules out the creation of artificial
           | intelligence that does not also apply to the creation of
           | microprocessors.
        
       ___________________________________________________________________
       (page generated 2021-11-20 23:01 UTC)