[HN Gopher] Godel's theorem debunks the most important AI myth -...
       ___________________________________________________________________
        
       Godel's theorem debunks the most important AI myth - Roger Penrose
       [video]
        
       Author : Lockal
       Score  : 40 points
       Date   : 2025-03-02 18:31 UTC (4 hours ago)
        
 (HTM) web link (www.youtube.com)
 (TXT) w3m dump (www.youtube.com)
        
       | tmnvix wrote:
       | Is anyone aware of some other place where Penrose discusses AI
       | and consciousness? Unfortunately here, the interviewer seems well
       | out of their depth and repeatedly interrupts with non sequiturs.
        
         | dang wrote:
         | It's painful, but listening to Penrose is worth it and (in the
         | bits I watched) he somehow manages to politely stick to his
         | thread despite the interruptions.
        
         | codeulike wrote:
          | The Emperor's New Mind - Roger Penrose - published 1989
         | 
         | Shadows Of The Mind - Roger Penrose - published 1994
         | 
         | https://en.wikipedia.org/wiki/Penrose%E2%80%93Lucas_argument
        
         | pfortuny wrote:
          | The Emperor's New Mind is his work on this (not specifically on
          | LLMs, obviously).
        
       | mitthrowaway2 wrote:
       | For those of us without time to watch a video - what is the most
       | important AI myth?
        
         | exe34 wrote:
         | that carbon chauvinism isn't real.
        
         | qrios wrote:
         | The question is answered in the full title:
         | 
         | > "Godel's theorem debunks the most important AI myth. AI will
         | not be conscious"
         | 
          | Same statement from Penrose here with Lex Fridman:
         | "Consciousness is Not a Computation" [1].
         | 
         | [1] https://www.youtube.com/watch?v=hXgqik6HXc0
        
       | drivebyhooting wrote:
        | I feel like Penrose presupposes the human mind is non-computable.
       | 
       | Perhaps he and other true geniuses can understand things
       | transcendently. Not so for me. My thoughts are serialized and
       | obviously countable.
       | 
       | And in any case: any kind of theorem or idea communicated to
       | another mathematician needs to be serialized into language which
       | would make it computable. So I'm not convinced I could be
       | convinced without a computable proof.
       | 
       | And finally just like computable numbers are dense in the reals,
       | maybe computable thoughts are dense in transcendence.
        
         | pfortuny wrote:
         | I think (but may be wrong) that you are thinking
         | metamathematics is a part of mathematics, which (to my
         | knowledge) it is not.
        
         | cwillu wrote:
         | He explicitly believes that, yes.
        
         | ForTheKidz wrote:
         | This is accurate from his Emperor's New Mind. Penrose
         | essentially takes for granted that human brains can reason
         | about or produce results that are otherwise uncomputable. Of
         | course, you can reduce all (historical) human reasoning to a
         | computable heuristic as it is finite, but for some reason he
         | just doesn't see this.
         | 
         | His intent at the time was to open a physical explanation for
         | free will by taking the recourse to quantum nano-tubules
         | magnifying true randomness to the level of human cognition. As
         | much as I'm also skeptical that this actually moves the needle
         | on whether or not we have free will (...vs occasionally having
         | access to statistically-certain nondeterminism? Ok...) the
         | computable stuff was just in service of this end.
         | 
         | I strongly suspect he just hasn't grasped how powerful
         | heuristics are at overcoming general restrictions on
         | computation. Either that or this is an ideological commitment.
         | 
          | Kind of sad--Penrose tilings hold a special place in my heart.
        
           | drivebyhooting wrote:
            | If stories are to be believed, real geniuses can tap into
            | God's mind. (See Ramanujan.)
           | 
           | If so then it really comes down to believing something not
           | because you can prove it but because it is true.
           | 
           | I'm just a mediocre mathematician with rigor mortis. So I
           | won't be too hard on Penrose.
        
           | tbrownaw wrote:
           | > _His intent at the time was to open a physical explanation
           | for free will by taking the recourse to quantum nano-tubules
           | magnifying true randomness to the level of human cognition.
            | As much as I'm also skeptical that this actually moves the
           | needle on whether or not we have free will (...vs
           | occasionally having access to statistically-certain
           | nondeterminism? Ok...) the computable stuff was just in
           | service of this end._
           | 
           | Free will is a useful abstraction. Just like life and
           | continuity of self are.
           | 
            | > _I strongly suspect he just hasn't grasped how powerful
           | heuristics are at overcoming general restrictions on
           | computation._
           | 
           | Allowing approximations or "I don't know" is what's helpful.
           | The bpf verifier can work despite the halting problem being
           | unsolvable, not because it makes guesses (uses heuristics)
           | but because it's allowed to lump in "I don't know" with "no".
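            | 
            | A minimal sketch of that strategy in Python (the function and
            | instruction names are made up for illustration; the real bpf
            | verifier is far more involved):
            | 
            |     def obviously_terminates(program: list[str]) -> bool:
            |         # Conservative check: accept only programs we can
            |         # PROVE terminate. Straight-line code (no backward
            |         # jumps) always halts; everything else is rejected,
            |         # even programs that do in fact halt. Lumping
            |         # "I don't know" in with "no" sidesteps the halting
            |         # problem at the cost of false negatives.
            |         return not any(i.startswith("jump_back") for i in program)
            | 
            |     print(obviously_terminates(["load", "add", "store"]))  # True
            |     print(obviously_terminates(["load", "jump_back 0"]))   # False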
        
             | throwup238 wrote:
             | _> Free will is a useful abstraction. Just like life and
             | continuity of self are._
             | 
             | I think it's more useful to think of them as language games
             | (in the Wittgenstein sense) than abstractions.
        
           | Animats wrote:
           | > Penrose essentially takes for granted that human brains can
           | reason about or produce results that are otherwise
           | uncomputable.
           | 
           | That's Penrose's old criticism. We're past that. It's the
           | wrong point now.
           | 
           | Generative AI systems are quite creative. Better than the
           | average human at art. LLMs don't have trouble blithering
           | about advanced abstract concepts. It's concrete areas where
           | these systems have trouble, such as arithmetic. Common sense
           | is still tough. Hallucinations are a problem. Lying is a
           | problem. None of those areas are limited by computability.
           | It's grounding in the real world that's not working well.
           | 
           | (A legit question to ask today is this: We now know how much
           | compute it takes to get to the Turing test level of faking
           | intelligence. How do biological brains, with such a slow
           | clock rate, do it? That was part of the concept behind
           | "nanotubules". Something in there must be running fast,
           | right?)
        
             | lowbloodsugar wrote:
             | > It's concrete areas where these systems have trouble,
             | such as arithmetic. Common sense is still tough.
             | Hallucinations are a problem. Lying is a problem
             | 
             | Gestures broadly at humanity
        
             | Tuna-Fish wrote:
             | > Something in there must be running fast, right?
             | 
             | Nah. It just needs to be really wide. This is a very fuzzy
             | comparison, but a human brain has ~100 trillion synaptic
             | connections, which are the closest match we have to
             | "parameters" in AI models. The largest such models
             | currently have on the order of ~2 trillion parameters.
             | (edit to add: and this is a low end estimate of the
             | differences between them. There might be more stuff in
             | neurons that effectively acts as parameters, and should be
             | counted as such in a comparison.)
             | 
             | So AI models are still at least two orders of magnitude off
             | from humans in pure width. In contrast, they run much, much
             | faster.
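              | 
              | The raw arithmetic behind that (rough numbers from above;
              | the "at least" leans on synapses carrying more state than
              | a single parameter each):
              | 
              |     import math
              |     synapses = 100e12  # ~100 trillion connections
              |     params = 2e12      # ~2 trillion parameters
              |     print(synapses / params)              # 50.0
              |     print(math.log10(synapses / params))  # ~1.7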
        
         | sampo wrote:
          | > I feel like Penrose presupposes the human mind is non-
          | computable.
         | 
         | Yes. He has also written books about it.
         | 
         | https://en.wikipedia.org/wiki/Roger_Penrose#Consciousness
        
         | thrance wrote:
         | What is "understanding transcendently"? Just because Penrose is
         | an authority on some subjects in theoretical physics doesn't
          | mean he is a universal genius or that his ideas on
         | consciousness or AI hold any value.
         | 
          | We gotta stop making infallible superheroes/geniuses of
          | people.
         | 
          | In this particular case, Penrose is a convinced dualist and his
          | theories are unscientific. Dualism is a minority view in
          | philosophy, and there are very good reasons not to hold it,
          | which I would encourage anyone to seek out if they want to
          | better understand Penrose's position and where it came from.
        
         | saulpw wrote:
         | > My thoughts are serialized and obviously countable.
         | 
          | You might want to consider doing a bit of meditation... anyone
          | who describes their thoughts as 'serialized' and 'obviously
          | countable' has not spent much time actually looking at their
          | thoughts.
        
         | AlexCoventry wrote:
         | I watched half of the video. He keeps appealing to the idea
         | that Goedel applies to AI because AI doesn't understand what
         | it's doing. But I seriously doubt that we humans really know
         | what we're doing, either.
         | 
         | IIRC, his Goedel argument against AI is that someone could
         | construct a Goedel proposition for an intelligent machine which
         | that machine could reason its way through to hit a
         | contradiction. But, at least by default, humans don't base
         | their epistemology on such reasoning, and I don't see why a
         | conscious machine would either. It's not ideal, but frankly,
         | when most humans hit a contradiction, they usually just ignore
         | whichever side of the contradiction is most inconvenient for
         | them.
        
         | cowl wrote:
         | > any kind of theorem or idea communicated to another
         | mathematician needs to be serialized into language which would
         | make it computable.
         | 
         | This is a fallacy. Just because you need to serialize a concept
          | to communicate it doesn't mean the concept itself is computable.
         | This is established and well proven:
         | 
         | https://en.wikipedia.org/wiki/List_of_undecidable_problems
         | 
          | The fact that we can come up with these kinds of uncomputable
          | problems is a big point in support of Penrose's idea that
          | consciousness is not computable and goes way beyond
          | computability.
        
           | drivebyhooting wrote:
            | Deciding an undecidable problem is, well, undecidable. But
            | describing it is clearly not; otherwise we would not have
            | been able to write about it.
        
         | gizajob wrote:
         | From where do those serialised thoughts arise?
        
       | CityOfThrowaway wrote:
       | He sets up a definition where "real intelligence" requires
       | consciousness, then argues AI lacks consciousness, therefore AI
       | lacks real intelligence. This is somewhat circular.
       | 
       | The argument that consciousness can't be computable seems like a
       | stretch as well.
        
         | pfortuny wrote:
          | I do not see the "circularity"; the argument may lack
          | foundation, but that is a different objection.
        
         | zuhsetaqi wrote:
         | Where's the circle?
        
         | sampo wrote:
         | Penrose believes that consciousness originates from quantum
         | mechanics and the collapse of the wavefunction. Obviously you
         | couldn't (effectively) simulate that with a classical computer.
         | It's a very unconventional position, but it's not circular.
         | 
         | https://en.wikipedia.org/wiki/The_Emperor%27s_New_Mind
         | 
         | https://en.wikipedia.org/wiki/Shadows_of_the_Mind
        
         | James_K wrote:
          | Consciousness is not a result; it cannot be computed. It is a
          | process, and we don't know how it interacts with computation.
          | There are only two things I can really say about consciousness,
          | and both are speculation: I think it isn't observable, and I
          | think it is not a computation. For the first point, I can see
          | no mechanism by which consciousness could affect the world, so
          | there is no way to observe it. For the second, imagine a man in
          | a vast desert filled only with a grid of two-sided rocks, each
          | with a dark side and a light side, and he has a small book
          | which gives him instructions on how to flip these rocks. It
          | seems unlikely that the rocks are sentient, yet certain
          | configurations of rocks and books could reproduce the
          | computations of a human mind. When does the sentience happen?
          | If the man flips only a single rock according to those rules,
          | would the computer be conscious? I doubt it. Does the
          | consciousness exist between the flips of rock, while he walks
          | to the next stone? The idea that computation creates
          | consciousness seems plainly untenable to me.
        
           | prmph wrote:
           | Indeed, I also think consciousness cannot be reduced to
           | computation.
           | 
           | Here is one more thing to consider. All consciousness we can
           | currently observe is embodied; all humans have a body and
           | identity. We can interact with separate people corresponding
           | to separate consciousnesses.
           | 
            | But if computation is producing consciousness, how is its
            | identity determined? Is the identity of the consciousness
            | based on the set of chips doing the computation? Or is it
            | based on the algorithms used (i.e., would running the same
            | algorithm anywhere animate the same consciousness)?
           | 
            | In your example, if we say that consciousness somehow arises
            | from the computation the man performs, then a question
            | arises: what exactly is conscious in this situation? And what
            | are the boundaries of that consciousness? Is it the set of
            | rocks as a whole? Is it the computation itself? Does the
            | consciousness have a demarcation in space and time?
           | 
           | There are no satisfying answers to these questions if we
           | assume mere computation can produce consciousness.
        
       | aljarry wrote:
        | LLMs (our current "AI") don't use logical or mathematical rules
        | to reason, so I don't see how Godel's theorem would have any
        | meaning there. They are not rule-based programs that would have
        | to abide by non-computability; they are non-exact statistical
        | machines. Penrose even mentions that he hasn't studied them and
        | doesn't exactly know how they work, so I don't think there's
        | much substance here.
        
         | pelario wrote:
          | Despite appearances, they do: training, neurons, transformers
          | and all, ultimately it is a program running on a Turing
          | machine.
        
           | aljarry wrote:
           | But it is only a program computing numbers. The code itself
           | has nothing to do with the reasoning capabilities of the
           | model.
        
         | whilenot-dev wrote:
          | > LLMs (our current "AI") don't use logical or mathematical
          | rules to reason.
         | 
         | I'm not sure I can follow... what exactly is decoding/encoding
         | if not using logical and mathematical rules?
        
           | aljarry wrote:
            | Good point; I meant the reasoning is not encoded as logical
            | or mathematical rules. All the neural networks and related
            | parts rely on e.g. matrix multiplication, which works by
            | mathematical rules, but the models won't answer your
            | questions based on pre-recorded logical statements, like
            | "apple is red".
        
         | kadoban wrote:
          | Pick a model, a seed, and a temperature, fix some
          | floating-point annoyances, and the output is a deterministic
          | function of the input.
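          | 
          | A toy illustration of that determinism (the "model" below is a
          | stand-in, and argmax decoding plays the role of temperature 0):
          | 
          |     import random
          | 
          |     def toy_logits(tokens):
          |         # Stand-in for a real network: fixed "weights" are
          |         # emulated by seeding the RNG from the input alone.
          |         random.seed(sum(tokens))
          |         return [random.random() for _ in range(50)]
          | 
          |     def generate(prompt, steps=5):
          |         tokens = list(prompt)
          |         for _ in range(steps):
          |             logits = toy_logits(tokens)
          |             tokens.append(logits.index(max(logits)))  # argmax
          |         return tokens
          | 
          |     # Same input, same output, every time:
          |     assert generate([1, 2, 3]) == generate([1, 2, 3])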
        
           | layble wrote:
           | Maybe consciousness is just what lives in the floating-point
           | annoyances
        
           | aljarry wrote:
           | That's true with any neural network or ML model. Pick a few
           | points, use the same algorithm with the same hyperparameters
           | and random seed, and you'll end up with the same result.
           | Determinism doesn't mean that the "logic" or "reason" is an
           | effect of the algorithm doing the computations.
        
         | northern-lights wrote:
         | If it is running on a computer/Turing machine, then it is
         | effectively a rule-based program. There might be multiple steps
         | and layers of abstraction until you get to the rules/axioms,
          | but they exist. The fact that they are a statistical machine
          | intuitively proves this: "statistical" means they need to apply
          | the rules of statistics, and "machine" means they need to apply
          | the rules of a computing machine.
        
       | moefh wrote:
        | This argument by Penrose using Godel's theorem has been discussed
        | (or, depending on who you ask, refuted) before in various places;
        | it's very old. The first time I saw it was in Hofstadter's
        | "Godel, Escher, Bach", but a more accessible version is this
        | lecture[1] by Scott Aaronson. There's also an interview with
        | Aaronson by Lex Fridman where he talks about it some more[2].
       | 
       | Basically, Penrose's argument hinges on Godel's theorem showing
       | that a computer is unable to "see" that something is true without
       | being able to prove it (something he claims humans are able to
       | do).
       | 
       | To see how the argument makes no sense, one only has to note that
       | even if you believe humans can "see" truth, it's undeniable that
       | sometimes humans can also "see" things that are not true (i.e.,
       | sometimes people truly believe they're right when they're wrong).
       | 
        | In the end, if we strip away all talk about consciousness and
        | other stuff we "know" makes humans different from machines, and
        | confine the discussion entirely to what Godel's theorem can say
        | about this stuff, humans are no different from machines, and
        | we're left with very little of substance: both humans and
        | computers can say things that are true but unprovable (humans can
        | "see" unprovable truths, and LLMs can hallucinate), and both also
        | sometimes say things that are wrong (humans are sometimes wrong,
        | and LLMs hallucinate).
       | 
        | By the way, "LLMs hallucinate" is a modern take on this: you just
        | need a computer running a program that answers something that is
        | not computable (to make it interesting, think of a program that
        | randomly responds "halts" or "doesn't halt" when asked whether
        | some given Turing machine halts).
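        | 
        | That responder is a one-liner; it "answers" an uncomputable
        | question, it's just usually wrong (sketch):
        | 
        |     import random
        | 
        |     def fake_halting_oracle(machine_description, tape):
        |         # Ignores its input entirely; no computation involved.
        |         return random.choice(["halts", "doesn't halt"])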
       | 
       | (ETA: if you don't find my argument convincing, just read
       | Aaronson's notes, they're much better).
       | 
       | [1] https://www.scottaaronson.com/democritus/lec10.5.html
       | 
       | [2] https://youtu.be/nAMjv0NAESM?si=Hr5kwa7M4JuAdobI&t=2553
        
         | derriz wrote:
         | I think you're being overly dismissive of the argument.
         | Admittedly my recollection is hazy but here goes:
         | 
         | Computers are symbol manipulating machines and moreover are
         | restricted to a finite set of symbols (states) and a finite set
         | of rules for their transformation (programs).
         | 
         | When we attempt to formalize even a relatively basic branch of
         | human thinking, simple whole-number arithmetic, as a system of
         | finite symbols and rules, then Goedel's theorem kicks in. Such
         | a system can never be complete - i.e. there will always be
         | holes or gaps where true statements about whole-number
         | arithmetic cannot be reached using our symbols and rules, no
         | matter how we design the system.
         | 
         | We can of course plug any holes we find by adding more rules
         | but full coverage will always evade us.
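          | 
          | A sketch of why the hole-plugging never terminates, writing
          | Con(T) for the arithmetized statement "T is consistent":
          | 
          |     T_0 = PA,    T_{n+1} = T_n + Con(T_n)
          | 
          | By Godel's second theorem, each consistent T_n cannot prove
          | Con(T_n), so every stage of the tower leaves a fresh hole.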
         | 
         | The argument is that computers are subject to this same
         | limitation. I.e. no matter how we attempt to formalize human
         | thinking using a computer - i.e. as a system of symbols and
         | rules, there will be truths that the computer can simply never
         | reach.
        
           | moefh wrote:
           | > Computers are symbol manipulating machines and moreover are
           | restricted to a finite set of symbols (states) and a finite
           | set of rules for their transformation (programs).
           | 
           | > [...] there will be truths that the computer can simply
           | never reach.
           | 
           | It's true that if you give a computer a list of consistent
           | axioms and restrict it to only output what their logic rules
           | can produce, then there will be truths it will never write --
           | that's what Godel's Incompleteness Theorem proves.
           | 
           | But those are not the only kinds of programs you can run on a
           | computer. Computers can (and routinely do!) output
           | falsehoods. And they can be inconsistent -- and so Godel's
           | Theorem doesn't apply to them.
           | 
           | Note that nobody is saying that it's definitely the case that
           | computers and humans have the same capabilities -- it MIGHT
           | STILL be the case that humans can "see" truths that computers
           | will never be able to. But this argument involving Godel's
           | theorem simply doesn't work to show that.
        
         | gmuslera wrote:
          | I've read Hofstadter's "I Am a Strange Loop", which also goes
          | over those ideas. The point is how you define consciousness (he
          | does it in a more or less computable way, as a sort of
          | self-referential loop), so it may be within the reach of what
          | we are doing with AIs.
          | 
          | But in any case, it is about definitions, since we don't have
          | very strict ones for consciousness, intelligence and so on, and
          | about human perception and subjectivity (the Turing Test is not
          | so much about "real" consciousness as about whether an observer
          | can decide if they are talking with a computer or a human).
        
         | pwdisswordfishz wrote:
         | Any theory which purports to show that Roger Penrose is able to
         | "see" the truth of the consistency of mathematics has got to
         | explain Edward Nelson being able to "see" just the opposite.
        
       | thrance wrote:
        | Penrose is a dualist; he believes the mind is detached from the
        | material world.
       | 
        | He has been desperately seeking proof of quantum phenomena in
       | the brain, so he may have something to point to when asked how
       | this mind, supposedly external to the physical realm, can pilot
       | our bodies.
       | 
       | I am not a dualist, and I don't think what Penrose has to say
       | about AI or consciousness holds much value.
        
       | estebarb wrote:
        | I'm not sure it makes sense to apply Godel's theorem to AI.
       | Personally, I prefer to think about it in terms of basic
       | computability theory:
       | 
       | We think, that is a fact.
       | 
       | Therefore, there is a function capable of transforming
       | information into "thinked information", or what we usually call
        | reasoning. We know that function exists, because we ourselves are
        | an example of such a function.
       | 
       | Now, the question is: can we create a smaller function capable of
       | performing the same feat?
       | 
        | If we assume that that function is computable in the Turing sense
        | then, kinda yes, there are an infinite number of Turing machines
        | that, given enough time, will be able to produce the expected
        | results. Basically we need to find something between our own
        | brain and the Kolmogorov complexity limit. That lower bound is
        | not computable, but given that my cats understand when we are
        | discussing taking them to the vet... maybe we don't really need
        | a full-sized human brain for language understanding.
       | 
       | We can run Turing machines ourselves, so we are at least Turing
       | equivalent machines.
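        | 
        | (A Turing machine is simple enough to run by hand; here is a
        | minimal simulator sketch in Python, with a made-up machine that
        | flips bits until it reaches a blank cell:)
        | 
        |     def run_tm(rules, tape, state="S", head=0, fuel=1000):
        |         # tape maps position -> symbol; "_" means blank
        |         while state != "HALT" and fuel > 0:
        |             sym = tape.get(head, "_")
        |             write, move, state = rules[(state, sym)]
        |             tape[head] = write
        |             head += 1 if move == "R" else -1
        |             fuel -= 1
        |         return tape
        | 
        |     rules = {("S", "0"): ("1", "R", "S"),
        |              ("S", "1"): ("0", "R", "S"),
        |              ("S", "_"): ("_", "R", "HALT")}
        |     print(run_tm(rules, {0: "1", 1: "0", 2: "1"}))
        |     # {0: '0', 1: '1', 2: '0', 3: '_'}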
       | 
       | Now, the question is: are we at most just Turing machines or
       | something else? If we are something else, then our own CoT won't
        | be computable, no matter how much scale we throw at it. But if we
        | are, then it is just a matter of time until we can replicate
        | ourselves.
        
         | thrance wrote:
          | Penrose is a dualist; he does not believe that function can be
          | computed in our physical universe. He believes the mind comes
          | from another realm and "pilots" us through quantum phenomena
          | in the brain.
        
           | falcor84 wrote:
           | Interesting. Does that fit with the simulation hypothesis?
           | That the world's physics are simulated on one computer, but
           | us characters are simulated on different machines, with some
           | latency involved?
        
             | mcgee21 wrote:
              | It's all pop pseudoscience. Things exist. Anything that
             | exists has an identity. Physics exists and other things
             | (simulations, computing, etc.) that exist are subject to
             | those physics. To say that it happens the other way around
             | is poor logic and/or lacks falsifiability.
        
           | bbor wrote:
            | Which is--to use the latest philosophy lingo--dumb. To be
            | fair to Penrose, the "Godel's theorem about formal systems
            | proves that souls exist" argument is an extremely common
            | take; anyone following LLM discussions has likely seen it
            | rediscovered at least once or twice.
           | 
            | To pull from the relevant part of Hofstadter's incredible _I
            | Am a Strange Loop_ (a book that also happens to more
            | rigorously invoke Godel for cognitive science):
            | 
            |   And this is our central quandary. Either we believe in a
            |   nonmaterial soul that lives outside the laws of physics,
            |   which amounts to a nonscientific belief in magic, or we
            |   reject that idea, in which case the eternally beckoning
            |   question "What could ever make a mere physical pattern be
            |   me?"
            | 
            |   After all, a phrase like "physical system" or "physical
            |   substrate" brings to mind for most people... an intricate
            |   structure consisting of vast numbers of interlocked
            |   wheels, gears, rods, tubes, balls, pendula, and so forth,
            |   even if they are tiny, invisible, perfectly silent, and
            |   possibly even probabilistic. Such an array of interacting
            |   inanimate stuff seems to most people as unconscious and
            |   devoid of inner light as a flush toilet, an automobile
            |   transmission, a fancy Swiss watch (mechanical or
            |   electronic), a cog railway, an ocean liner, or an oil
            |   refinery. Such a system is not just probably unconscious,
            |   it is *necessarily* so, as they see it. This is the kind
            |   of single-level intuition so skillfully exploited by John
            |   Searle in his attempts to convince people that computers
            |   could never be conscious, no matter what abstract patterns
            |   might reside in them, and could never mean anything at all
            |   by whatever long chains of lexical items they might string
            |   together.
           | 
           | Highly recommend it for anyone who liked _Godel, Escher,
            | Bach_, but wants more explicit scientific theses! He
           | basically wrote it to clarify the more artsy/rhetorical
           | points made in the former book.
        
             | jfengel wrote:
             | It feels really weird to say that Roger Penrose is being
             | dumb.
             | 
             | It's accurate. But it feels really weird.
             | 
             | It's not uncommon for great scientists to be totally out of
             | their depth even in nearby fields, and not realize it. But
             | this isn't the hard part of either computability or
             | philosophy of mind.
        
         | northern-lights wrote:
         | > there is a function capable of transforming information into
         | "thinked information", or what we usually call reasoning. We
         | know that function exists, because we ourselves are an example
          | of such a function.
         | 
          | We mistakenly assume they are true, perhaps because we want
          | them to be true. But we have no proof that either of these
          | claims is true.
        
         | bwoj wrote:
         | It is a big mistake to think that most computability theory
         | applies to AI, including Godel's Theorem. People start off
         | wrong by talking about AI "algorithms." The term applies more
          | correctly to concepts like gradient descent. But the inference
          | of the resulting neural nets is not an algorithm. It is not a
         | defined sequence of operations that produces a defined result.
         | It is better described as a heuristic, a procedure that
         | approximates a correct result but provides no mathematical
         | guarantees.
         | 
          | Another aspect of ANNs that shows Godel doesn't apply is that
          | they are not formal systems. A formal system is a collection of
          | defined operations. The building blocks of ANNs could perhaps
          | be built into a formal system. Petri nets have been
          | demonstrated to be computationally equivalent to Turing
          | machines. But this is really an indictment of the
          | implementation. It's the same as your PC implementing a formal
          | system, its instruction set, to run a heuristic computation.
          | Formal systems can implement informal systems.
         | 
         | I don't think you have to look at humans very hard to see that
         | humans don't implement any kind of formal system and are not
         | equivalent to Turing machines.
        
         | btilly wrote:
         | With sufficient compute capacity, a complete physical
         | simulation of a human should be possible. This means that, even
         | though we are fallible, there is nothing that we do which can't
         | be simulated on a Turing machine.
        
           | chromanoid wrote:
           | May still only yield a philosophical zombie. You can simulate
           | gravity but never move something with its simulation.
        
         | skywhopper wrote:
         | Not every fact is computable. We are not Turing machines.
        
         | cowl wrote:
          | He starts with "consciousness is not computable". You cannot
          | just ignore that central argument without explaining why your
          | preference for thinking of it in terms of basic computability
          | theory makes more sense than his.
          | 
          | What's more, whatever you like to call the transforming of
          | information into thinked information cannot, by definition, be
          | a (mathematical) function, because that would require all
          | people to process the same information in the same way, and
          | this is plainly false.
        
       | irickt wrote:
        | Daniel Dennett thoroughly debunks Penrose's argument in Chapter
        | 15 of Darwin's Dangerous Idea. Quoting reviewers of a Penrose
        | paper ... "quite fallacious," "wrong," "lethal flaw" and
        | "inexplicable mistake," "invalid," "deeply flawed." "The AI
        | community [of 1995] was, not surprisingly, united in its
        | dismissal of Penrose's argument."
        
       | lowbloodsugar wrote:
       | If an elderly but distinguished scientist says that something is
       | possible, he is almost certainly right; but if he says that it is
       | impossible, he is very probably wrong.
       | 
       | - Arthur C Clarke
        
       | James_K wrote:
       | > The interviewer is barely treading water in the ocean of
       | Penrose's thought. He mistakes his spasmodic thrashing for
       | swimming.
       | 
       | The comments below this video are utterly insane. Roger Penrose
       | seems to have a fanatical cult attached to him.
        
       | whatshisface wrote:
       | If anyone thinks the human mind is computable, tell me the
       | location of even one particle.
        
         | PeterWhittaker wrote:
         | OK, try this for size, bearing in mind that it is a heuristic
         | argument.
         | 
         | No one can "know", with certainty, the location of any
         | particle. Or, to be slightly more accurate, the more we know of
         | its location, the less we know of its movement. This is
         | essentially Heisenberg/QM 101.
         | 
         | But we see the results of "computation" all around us, all the
         | time: Any time a chemical or physical reaction settles to an
         | observable result, whether observed by one of us, that is, a
         | human, or another physical entity, like a tree, a squirrel, a
         | star, etc. This is essentially a combination of Rovelli's
          | Relational QM and the viewing of QM through an
          | information-centric lens.
         | 
         | In other words, we can and do have solid reality at a macro
         | level without ever having detailed knowledge (whatever that
         | might mean) at a micro/nano/femto level.
         | 
         | Having said that, I read your comment as implying that "the
         | human mind" (in quotes because that is not a well defined
         | concept, at least not herein; if we can agree on an operational
         | definition, we may be able to go quite far) is somehow
         | disconnected from physical reality, that is, that you are
         | suggesting a dualist position, in which we have physics and
         | physical chemistry and everything we get from them, e.g.,
         | genetics, neurophysiology, etc., all based ultimately on QM,
         | and we have "consciousness" or "the mind" as somehow being
         | outside/above all of that.
         | 
         | I have no problem with that suggestion. I don't buy it, and am
         | mostly a reductionist at heart, so to speak, but I have no
         | problem with it.
         | 
         | What I'd like to see in support of that position would be
         | repeatable, testable statements as to how this "outside/above"
         | "thing" somehow interacts with the physical substrate of our
         | biological lives.
         | 
         | Preferably without reference to the numinous, the ephemeral, or
         | the magical.
         | 
         | Honestly, I really would like to see this. It would represent
         | one of the greatest advances in knowledge in human history.
        
       | like_any_other wrote:
       | I think all the debunkings of Penrose's argument are rather
       | overcomplicated, when there is a much simpler flaw:
       | 
       | Which operation can computers (including quantum computers) not
       | perform, that human neurons can? If there is no such operation,
       | then a human-brain-equivalent computer can be built.
        
       | overu589 wrote:
        | I compliment Penrose for his indifference to haters and harsh
        | skeptics.
       | 
        | Our minds and consciousness do not fundamentally use linear logic
        | to arrive at their conclusions; they use constructive and
       | destructive interference. Linear logic is simulated upon this
       | more primitive (and arguably superior) cognition.
       | 
        | It is true that any outcome of any process may be modeled in
        | serialized terms or computational postulations, but this is
        | different from the interference feedback loop used by intelligent
        | human consciousness.
       | 
       | Constructive and destructive interference is different and
       | ultimately superior to linear logic on many levels. Despite this,
        | the scalability of artificial systems may very well surpass
        | human capabilities on any given task. There may be an
       | arguable energy efficiency angle.
       | 
       | Constructive/destructive interference builds holographic
       | renderings which work sufficiently when lacking information. A
       | linear logic system would simulate the missing detail from
       | learned patterns.
       | 
        | Constructive/destructive interference does not require intensive
        | computation.
       | 
       | An additive / reduction strategy may change the terms of a
       | dilemma to support a compromised (or alternatively superior)
       | "human" outcome which a logic system simply could not "get" until
       | after training.
       | 
       | There is more, though these are a worthy start.
       | 
       | And consciousness is the inflection (feedback reverberation if
       | you like) upon the potential of existential being (some animate
       | matter in one's brain). The existential Universe (some part of
       | matter bound in the neuron, those micro-tubes perhaps) is
       | perturbed by your neural firings. The quantum domain is an echo
       | chamber. Your perspectives are not arranged states, they are
       | potentials interfering.
       | 
       | Also, "you all" get intelligence and "will" wrong. I'll pick that
       | fight on another day.
        
       | Chance-Device wrote:
       | I swear this was on the front page 2 minutes ago and now it's
       | halfway down page 2.
       | 
        | Anyway, I'm not really sure where Penrose is going with this. As
        | a summary, the incompleteness theorem is basically a mathematical
        | reformulation of the liar's paradox - let's state it here for
        | simplicity as "This statement is a lie", which is a bit easier
        | than talking about "All Cretans are liars", which is the way I
        | first heard it.
       | 
       | So what's the truth value of "This statement is a lie"? It
       | doesn't have one. If it's false, then it's true. But if it's
       | true, then it must be false. The reason for this paradox is that
       | it's a self-referential statement: it refers to its own truth
       | value in the construction of its own truth value, so it never
       | actually gets constructed in the first place.
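        | 
        | You can watch that failure to construct happen in code (a toy,
        | of course; truth values aren't really Python calls):
        | 
        |     def liar() -> bool:
        |         return not liar()  # "this statement is false"
        | 
        |     try:
        |         liar()
        |     except RecursionError:
        |         print("no fixed point: the evaluation never settles")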
       | 
       | You can formulate the same sort of idea mathematically using
       | sets, which is what Godel did.
       | 
       | Now, the thing about this is that as far as I am aware (and I'm
        | open to being corrected on this) this never actually happens in
       | reality in any physical system. It seems to be an artefact of
       | symbolic representation. We can construct a series of symbols
       | that reference themselves in this way, but not an actual system.
       | This is much the same way as I can write "5 + 5 = 11" but it
       | doesn't actually mean anything physically.
       | 
       | The closest thing we might get to would be something that
       | oscillates between two states.
       | 
        | We ourselves also don't have a good answer to this problem as
        | phrased. What is the truth value of "This statement is a lie"? I
        | have to say "I don't know" or "there isn't one", which is a bit
        | like cheating. Am I incapable of consciousness as a result? And
        | if I am indeed conscious because I _can_ give such an answer
        | instead of simply "True" or "False", well, I'm sure that an AI
        | can be made to do likewise.
       | 
       | So I really don't think this has anything to do with
       | intelligence, or consciousness, or any limits on AI.
        
         | dang wrote:
         | > I swear this was on the front page 2 minutes ago and now it's
         | halfway down page 2.
         | 
         | It set off the flamewar detector. I've turned that off now.
        
           | Chance-Device wrote:
           | Thanks!
        
         | funktour wrote:
         | (for the record, I think the Penrose take on Godel and
          | consciousness is mostly silly and/or confused)
         | 
         | I think your understanding of the incompleteness theorem is a
         | little, well, incomplete. The _proof_ of the theorem does
          | involve, essentially, figuring out how to write down "this
          | statement is not provable" and using liar-paradox-type
          | reasoning to show that it is neither provable nor disprovable.
         | 
         | But the incompleteness theorem itself is not the liar paradox.
         | Rather, it shows that any (consistent) system rich enough to
         | express arithmetic cannot prove or disprove all statements.
         | There are things in the gaps. Godel's proof gives one example
         | ("this statement is not provable") but there are others of very
         | different flavors. The standard one is consistency (e.g. Peano
          | arithmetic alone cannot prove the consistency of Peano
         | arithmetic, you need more, like much stronger induction; ZFC
         | cannot prove the consistency of ZFC, you need more, like a
         | large cardinal).
         | 
         | And this very much _does_ come up for real systems, in the
         | following way. If we could prove or disprove each statement in
         | PA, then we could also solve the halting problem! For the same
          | reason there's no general way to tell whether each statement
         | of PA has a proof, there's no general way to tell whether each
         | program will halt on a given input.
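          | 
          | A sketch of that reduction in Python-shaped pseudocode (the
          | helpers encode_halts, nth_string, is_pa_proof and negate are
          | hypothetical; the point is that no such complete search can
          | exist):
          | 
          |     from itertools import count
          | 
          |     def decide_halting(machine, tape):
          |         # "machine halts on tape" is a statement PA can express.
          |         s = encode_halts(machine, tape)
          |         for n in count():  # enumerate all candidate proofs
          |             p = nth_string(n)
          |             if is_pa_proof(p, s):
          |                 return True    # provably halts
          |             if is_pa_proof(p, negate(s)):
          |                 return False   # provably never halts
          |         # If PA could prove or disprove every statement, one
          |         # branch would always be found, and this loop would
          |         # decide the halting problem -- contradiction.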
        
       ___________________________________________________________________
       (page generated 2025-03-02 23:00 UTC)