[HN Gopher] Rice's Theorem: An interactive tutorial
___________________________________________________________________
Rice's Theorem: An interactive tutorial
Author : pgayed
Score : 123 points
Date : 2022-11-25 13:49 UTC (9 hours ago)
(HTM) web link (busy-beavers.tigyog.app)
(TXT) w3m dump (busy-beavers.tigyog.app)
| DarkmSparks wrote:
| Detecting infinite loops (halting) isn't the impossible problem
| Turing "proved", in my experience.
|
| It's just a lim/tends-to problem.
|
| Which depends entirely on how you define infinity.
| tgv wrote:
| It's not possible to write an algorithm that will detect
| infinite loops in all programs. But of course there are
| instances that can be detected, such as
|     for (;;) {}
| wizeman wrote:
| > It's not possible to write an algorithm that will detect
| infinite loops in all programs.
|
| Actually, it is possible, if the program is not allowed to
| consume infinite memory.
|
| That's what these algorithms do (where "value" is the state
| of the finite-state machine):
|
| https://en.wikipedia.org/wiki/Cycle_detection
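|
| A minimal sketch of the idea in JS (Floyd's tortoise-and-hare,
| with a hypothetical pure `step` function mapping one machine
| state to the next and a `halted` predicate marking terminal
| states - both placeholders, not any real API):
|     // Deterministic machine with finitely many states: either
|     // we reach a halted state, or the state sequence repeats,
|     // i.e. the program loops forever.
|     function halts(start, step, halted) {
|       let tortoise = start;
|       let hare = start;
|       while (true) {
|         if (halted(hare)) return true;
|         hare = step(hare);
|         if (halted(hare)) return true;
|         hare = step(hare);
|         tortoise = step(tortoise);
|         // In this sketch states are directly comparable values;
|         // a real tool would compare hashes of the full memory.
|         if (tortoise === hare) return false; // cycle: never halts
|       }
|     }
| It only ever keeps two copies of the state around, which is the
| constant-memory property of these algorithms.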
| DarkmSparks wrote:
| As I say, from my experience that depends on how, and whether,
| you define infinity.
|
| If infinite is formally defined then detecting if any
| function equals or tends to it is fairly trivial.
|
| quick and useful defs are "can exceed a time limit", or "can
| exceed the error bounds of floating point math"
|
| I'd guess you can do it formally by catching the 6 or 7
| definitions of infinite, but I've never come close to needing
| to go that far. For most cases, just programmatically
| constructing a graph of the logic and detecting infinite loops
| in it has proved more than sufficient.
| UncleMeat wrote:
| Good news. Turing defined a nonterminating Turing Machine
| in his paper so we don't need to worry about some weird
| nonstandard definition of nontermination.
|
| "Oh, computers have finite memory so actually they are
| finite state machines" is just needless pedantry that is
| not useful for the actual mathematical relevance of
| Turing's proof and its consequences. Yes, the field of
| static analysis produces practical (though not flawless)
| solutions to undecidable problems every day. This is not
| relevant here.
| DarkmSparks wrote:
| Nope, nothing to do with finite memory or accuracy. That's
| just an easy/practical way to define it if you have a
| halting problem to solve. Everything to do with
| https://en.m.wikipedia.org/wiki/1/2_%2B_1/4_%2B_1/8_%2B_1/16...
| halting exactly at infinity (and knowing it without
| summing infinitely).
| CaptainNegative wrote:
| If one is willing to be that vague in drawing
| connections, one can reduce nearly every problem to any
| other one. But Turing was a mathematician, and very
| precise with his definitions (albeit not quite as
| pedantic as some successors such as Aho/Ullman decades
| later).
|
| The interesting bit about Turing's result is that it can
| be adapted to a wide class of mathematical and real
| objects. Physical systems? Yep. Time and/or space bounded
| machines? You bet. Actually, at its core Turing's result
| is itself an adaptation of Godel's, who more broadly
| talks about the context of first-order mathematical
| theories.
|
| If one wants to bring these concepts to practice (where by
| practice I mean the realm of proving results in
| theoretical cs), it's a lot more effort than comparing
| claims to a simple sequence with an easy to derive limit.
| For an introductory example of the subtlety involved,
| while the two series (1/2 + 1/4 + 1/8 + ...) and (2 - 1/2
| - 1/4 - 1/8 - ...) both approach 1 in opposite
| directions, most uncomputable numbers cannot be
| approached from both sides simultaneously (by a
| computably enumerable process, anyway). For example,
| while Chaitin's number is a definite number, you cannot
| write down _any_ program whose sequence of outputs
| converges to it from above. They're all wrong. Comparing
| that to probably the most basic convergent infinite
| series in mathematics (that doesn't end in all zeros) is
| quite a disservice to the complexity and beauty of the
| field.
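|
| (For concreteness: the first series' partial sums 1/2, 3/4,
| 7/8, ... climb toward 1 from below, while the second's 3/2,
| 5/4, 9/8, ... descend toward 1 from above.)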
| DarkmSparks wrote:
| Or, completely ignoring Zeno's paradox leads you to
| believe halting problems are all impossible to solve when
| in practice they are generally trivial.
|
| But nice deflect, no idea why this particular topic is so
| rife with fud, but you keep sticking with your
| deadlocking programs, and I'll stick with instant reports
| that my code will not halt and we can both go our
| separate ways. thnx.
| UncleMeat wrote:
| Again, if you just wildly make up terminology that nobody
| else uses you'll come to different conclusions than other
| people - but you won't be talking about the same thing.
| insanitybit wrote:
| damn this Turing guy sounds like he really knew what he
| was talking about
| maweki wrote:
| The great thing about undecidable problems is that you
| can put an arbitrary amount of research in and you'll
| probably find another subclass of instances for which the
| problem is indeed decidable.
|
| Only an infinite number of subclasses of instances to go.
|
| That's proven job safety for computer science
| researchers.
| wizeman wrote:
| > Only an infinite number of subclasses of instances to
| go.
|
| There's only _one_ subclass of instances for which the
| problem is undecidable: those where the program being
| analyzed is allowed to consume infinite memory.
|
| Which is not the same thing as saying that the program is
| implemented in a Turing-complete language.
|
| Because you can write a program in a Turing-complete
| language and yet run it on a machine with finite memory
| (i.e. a computer).
| wizeman wrote:
| I'm sorry, I've only gone through the first few steps of the
| tutorial but the initial claims have already bothered me a lot. I
| have to be pedantic because this is the biggest pet peeve of
| mine:
|
| > Turing famously showed that computers can't decide whether your
| code halts.
|
| I would say this is actually false, even though almost everybody
| thinks it's true. At least, if you're interested in actually
| solving problems rather than being interested in what is mostly a
| mathematical/theoretical curiosity.
|
| The Halting Problem is famously only undecidable if the code
| you're analyzing is modeled on a "Turing machine" or something
| equally "powerful". The problem is, computers are not as
| powerful as Turing machines, and furthermore, Turing machines
| (or anything equivalent to them) neither exist nor can _ever_
| exist in practice, because it's impossible to build a "machine"
| with truly infinite memory.
|
| In fact, if the machine where the code is running has finite
| memory, then the Halting problem is actually very easily solved
| (well, in theory). And there already exist various known
| algorithms that detect cycles of values (where the "value" in
| this context is the state of the program):
|
| https://en.wikipedia.org/wiki/Cycle_detection
|
| As you can see, if you use one of these cycle detection
| algorithms, then the Halting Problem is _not_ undecidable. It's
| easily solvable. The problem is that, while these known
| algorithms are able to solve the problem in constant memory,
| there are no known algorithms to solve it in a time-efficient way
| yet. But again, it is decidable, not undecidable.
|
| Now, I understand how Turing machines as a model can be useful,
| as the simplifying assumption of never having to worry about a
| memory limit might allow you to more easily do types of analysis
| that could be more difficult to do otherwise. And sure, the way
| the Halting Problem was "solved" was very clever (and I actually
| suspect that's a major reason why it has reached its current
| legendary status).
|
| But this simplifying assumption should never have allowed a
| result like the Halting Problem, which is known to be wrong for
| the type of computing that we actually care about, to be
| elevated to the status that it currently has - and by that I
| mean, to mislead everyone into thinking that the result applies
| to actual computers.
|
| So again, I would say, be careful whenever you see a Turing
| machine being mentioned. I would say it would be better to view
| them as mostly a mathematical/theoretical curiosity. And yes,
| they can also be a powerful and useful tool _IF_ you understand
| their limitations. But you should never confuse this model with
| the real world.
|
| > Henry Rice proved a much more devastating result: "computers
| can't decide anything interesting about your code's input-
| output!"
|
| Since Rice's theorem (and many other computer science results!)
| depends on the Halting Problem being undecidable, while it is
| actually decidable for real-world machines, my understanding is
| that Rice's theorem is also false for real-world machines.
|
| But Rice's theorem (and sentences like the one the article uses)
| is absolutely another example of the huge number of cases where
| the Halting Problem has seemingly misled almost everyone into
| thinking that something can't be solved when it actually can be
| solved.
|
| I wonder how much more research could have advanced (like for
| example, in SMT solvers, formal methods, etc) if everyone
| actually believed that it's possible for an algorithm to decide
| whether your program has a bug in it.
|
| How many good people were discouraged into going into this area
| of research because of this widespread belief that this pursuit
| is proven to be unachievable in the general case?
|
| Now, I'm not a computer scientist nor a mathematician or
| logician. So I could be wrong. If so, I would be interested in
| knowing how I'm wrong, as I've never seen this issue being
| discussed in a thorough way.
|
| Edit: another (according to my understanding) completely
| misleading phrase in the tutorial exemplifying what I'm
| describing:
|
| > we learned Turing's proof that halts is actually impossible to
| implement: any implementation you write will have a bug!
|
| Now, to be clear I'm not blaming the author of the tutorial, as
| this is just one very tiny example of what actually is an
| extremely widespread belief across the whole industry.
| markusde wrote:
| While technically correct in several ways, this is a pretty
| naive take on the topic. A simple counterexample is trying to
| decide whether a program which computes the Collatz sequence
| will always halt (which is equivalent to proving the conjecture
| itself). Proving it always halts (or crashes) in some fixed
| amount of memory is insufficient for saying anything meaningful
| about the program: we oftentimes care about the mathematical
| object a computer is modelling, not just the computer itself.
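|
| For concreteness, the program in question is tiny - a sketch
| (using BigInt so the arithmetic is exact):
|     // Halts for every starting value n >= 1n if and only if
|     // the Collatz conjecture is true.
|     function collatz(n) {
|       while (n !== 1n) {
|         n = (n % 2n === 0n) ? n / 2n : 3n * n + 1n;
|       }
|     }
| Whether that loop terminates for every input is exactly the
| open conjecture.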
|
| Furthermore, it is true that computers have a finite number of
| states, but this number is intractably large for most problems
| in verification. There are several well-known cryptographic
| problems that take on the order of the lifetime of the universe
| to solve in bounded memory... if you're trying to write a
| program to verify one of these algorithms the boundedness of
| memory will probably not help you much.
| wizeman wrote:
| > Proving it always halts (or crashes) in some fixed amount
| of memory is insufficient for saying anything meaningful
| about the program: we oftentimes care about the mathematical
| object a computer is modelling, not just the computer itself.
|
| I'd argue the complete opposite. It's very meaningful for me
| to distinguish whether a program works correctly, crashes due
| to running out of memory or loops forever.
|
| In fact, this would be a huge, huge deal if you could solve
| it in an efficient way.
|
| > There are several well known cryptographic problems that
| take on the order of the lifetime of the universe to solve in
| bounded memory... if you're trying to write a program to
| verify one of these algorithms the boundedness of memory will
| probably not help you much.
|
| I think you missed one of my points:
|
| > while these known algorithms are able to solve the problem
| in constant memory, there are no known algorithms to solve it
| in a time-efficient way yet
|
| _Yet_ being the key word here. As far as I know it's not
| proven that it's impossible to build a time-efficient
| algorithm, only that (as far as I understand) it's unlikely
| unless P=NP. But we don't know whether P=NP.
|
| And there's another issue here as well: I'm not even sure if
| P=NP actually has any meaning in this context, because I
| assume issues related to P=NP are usually analyzed in the
| context of Turing machines.
|
| This actually ties into the other point I mentioned:
|
| > How many good people were discouraged into going into this
| area of research because of this widespread belief that this
| pursuit is proven to be unachievable in the general case?
| kaba0 wrote:
| Well, I may be writing stupid things in the following, but
| have a look at the Busy Beaver problem. Even for tiny tiny
| state counts it grows faster than any computable
| function -- so for a sufficiently complex property you
| would effectively have to iterate over this problem space.
| Even if P=NP, that is just... literally bigger
| than any function you could make up.
| wizeman wrote:
| > so for a sufficiently complex property you would
| effectively have to iterate over this problem space.
|
| Why?
|
| Why are you assuming that the only way to figure out if a
| program halts is by simulating running it step by step?
|
| And not by analyzing the description of the program?
| kaba0 wrote:
| You can analyze it, and for that you would use some
| system like logic. But it so happens that these pesky
| mathematicians are everywhere, because any sufficiently
| complex system will fall victim to Godel. And in fact,
| Math itself is limited and trivially overstepped by
| Turing machines; you are free to create any other
| reasoning system, but it will also fall short of Turing.
|
| So, will you believe that these people (I believe Godel
| was praised highly even by von Neumann!) weren't talking
| out of their asses? :D
| wizeman wrote:
| Godel proved things about the limits of math and logic.
|
| What does that have to do with how long a specific
| algorithm takes to run?
| kaba0 wrote:
| Well, if not by running them and not by analyzing them
| (any rule system, not just logic is affected), what is
| your proposed.. whatever to say whether they halt or not?
| Or any other non-trivial, semantic property?
| wizeman wrote:
| Sorry, I think I had misinterpreted your point.
|
| So you're saying that there are only 2 options:
|
| 1. You either simulate running the program step-by-step
| (which has an extreme time complexity problem).
|
| 2. Or, you try to analyze it within some system of logic.
| But if you do so, then there are things that you can't
| prove because logic itself can't prove certain
| statements.
|
| This is an interesting argument. But I think point 1 sort
| of disproves point 2.
|
| Point 1 means that you can always analyze such programs,
| it's just that it might take a really long time to
| achieve that.
|
| So I think point 2 is not valid in this specific context.
| Because you can always prove whatever you want (given
| that when you simulate running a program, you are
| necessarily doing it within a system of logic), it's just
| that we don't know how long that could take in theory (of
| course, in practice, it's _currently_ intractable).
|
| So if you can always prove whatever you want, that means
| Godel's results about the undecidability of certain
| statements don't apply when analyzing finite-state
| computation systems, only infinite ones.
| markusde wrote:
| > I'd argue the complete opposite. It's very meaningful for
| me to distinguish whether a program works correctly,
| crashes due to running out of memory or loops forever.
|
| Fair for many applications. I work with computers that
| write proofs.
|
| > Yet being the key word here. As far as I know it's not
| proven that it's impossible to build a time-efficient
| algorithm, only that (as far as I understand) it's unlikely
| unless P=NP. But there's another issue here as well: we
| don't know whether P=NP.
|
| There are worse complexity classes than NP, for example
| EXPTIME, which we know are not P by the time hierarchy
| theorem.
|
| > How many good people were discouraged into going into
| this area of research because of this widespread belief
| that this pursuit is proven to be unachievable in the
| general case?
|
| Well, not me! Formal methods and program synthesis are
| alive and well, but it's important for researchers in these
| areas to understand what parts of our work are decidable
| and what aren't. I should mention that writing efficient
| SMT solvers is an important aspect of this even when the
| theories are undecidable, and tailoring tools to real user
| patterns can really blunt the problem of undecidability in
| practical use cases.
| wizeman wrote:
| > There are worse complexity classes than NP, for example
| EXPTIME, which we know are not P by the time hierarchy
| theorem.
|
| I mentioned this in another comment: isn't EXPTIME only
| defined for Turing machines?
|
| Which again, just touches on all of my points.
|
| For all we know, if you've reached the conclusion that an
| algorithm's complexity is EXPTIME, this conclusion could
| be just as false as the Halting problem and Rice's
| theorem are for real-world computers.
|
| > Writing efficient SMT solvers is an important aspect of
| this even when the theories are undecidable,
|
| My point is that you shouldn't believe that the theories
| are undecidable, because those analyses were done using
| Turing machines as models.
|
| And the other point is that sure, there are many people
| doing formal methods, etc. But how many more would there
| be if most people believed the problem to be solvable?
| markusde wrote:
| I think I see what you're getting at. A few final points:
|
| - There's a subtle difference between "infinite" and
| "unbounded". Turing machines use at most a finite amount
| of memory after any finite amount of steps (the tape
| always ends in an infinite sequence of empty cells), so a
| machine which starts with one cell and allocates one new
| cell every step is equivalent to a TM.
|
| - Unbounded memory is always assumed for analyzing
| complexity. If not, all algorithms on all computers are
| O(1)... which is more or less the line of inquiry you
| seem to be going down. All algorithms being O(1) is not a
| useful measure of complexity.
|
| - We can model finite but unbounded memory IRL: if a
| program is allowed to allocate new machines on a network
| (and also pause until that new machine is online) the
| amount of memory can grow dynamically to the scale of the
| universe.
|
| - Undecidability is defined on Turing machines, so we
| definitely can say that some theories are undecidable. No
| stronger method of computation which can also be
| physically implemented is known to exist.
|
| edit: typo
| wizeman wrote:
| > - Unbounded memory is always assumed for analyzing
| complexity. If not, all algorithms on all computers are
| O(1)... which is more or less the line of inquiry you
| seem to be going down. All algorithms being O(1) is not a
| useful measure of complexity.
|
| Sure, but saying a problem is undecidable when it is in
| fact decidable is also not a useful measure of
| decidability to me.
|
| So I'm not 100% sure that those measures of complexity
| have the same real-world meaning, for similar reasons.
|
| > - We can model finite but unbounded memory IRL: if a
| program is allowed to allocate new machines on a network
| (and also pause until that new machine is online) the
| amount of memory can grow dynamically to the scale of the
| universe.
|
| Yes, but even in that case, the amount of memory would be
| finite. Because the universe is expanding faster than the
| speed of light, there's only a finite amount of energy
| that is accessible at any point of spacetime (as far as I
| understand).
|
| So even in theory you can't say the amount of memory can
| scale infinitely (not to mention in practice).
|
| > - Undecidability is defined on Turing machines, so we
| definitely can say that some theories are undecidable.
|
| The theories are undecidable only if you assume a model
| of computation with infinite memory. Which can be
| interesting from the mathematical point of view, but not
| for applying those results to real-world problems,
| because now you've made a grave mistake: you believe
| something is undecidable when it is in fact decidable.
|
| I mean, sure, I can also invent a new super-duper-amazing
| model of computation that is even more powerful than
| Turing machines and can perform incredible Oracle-like
| feats and, say, run all programs in constant-time
| (yeah!), but it would not be correct to say that all the
| results obtained in this model apply to the real world.
|
| If an SMT solver is trying to prove that my program has a
| bug, even though some theory says that the problem is
| undecidable if my program was running on a theoretical
| Turing machine, this is meaningless to me (and quite
| misleading) because the problem is actually decidable
| given that my program is going to run on a computer.
|
| > No stronger method of computation which can also be
| physically implemented is known to exist.
|
| You say that like Turing machines can be physically
| implemented.
| crdrost wrote:
| Caveat: the thing that makes it decidable is not the fact
| that it was going to run on a computer with finite
| memory; the SMT solver _will_ fail on many code chunks
| that can run on your computer. (And that's more or less
| the claim of the halting theorem: not that nothing useful
| can be done, but that anything done can only be
| heuristic.)
|
| What made it decidable was rather the fact that you did
| not use all of the power that your programming language
| makes available to you; you did not steer the program
| past the bounds where the heuristics work.
|
| This kind of becomes a little more obvious when you port
| Turing's proof of the halting theorem to a "normal"
| programming language. The thing you immediately notice is
| a type error, "does the analyzer halt when given itself
| as an input?"... well, it doesn't know what to do, it was
| expecting an argument of type `ParseTree x` for some x
| and you handed it an argument of type `∀x. ParseTree x
| -> Bool`...
|
| To get around this and build the liar paradox which makes
| the proof "go," is not too hard: but you are basically
| forced to produce a quine instead and those techniques
| are non-trivial. In our normal programming lives, you
| could apply a heuristic that assumes we never do such
| things, and life is good.
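|
| For readers who haven't seen it, the construction looks roughly
| like this sketch, with programs passed around as source strings;
| `halts(src, input)` is the claimed decider and THIS_SOURCE stands
| for the quine machinery that hands the program its own text
| (both are placeholders, which is the whole point):
|     function liar(src) {
|       if (halts(src, src)) {
|         while (true) {}  // predicted to halt? then loop forever
|       }
|       // predicted to loop forever? then return immediately
|     }
|     liar(THIS_SOURCE);
|     // Whatever halts(THIS_SOURCE, THIS_SOURCE) answers is wrong.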
|
| The focus on finite versus infinite memory is a huge
| distraction. If your halting detector applied to the
| above quine tried to not be a heuristic by salvaging the
| case where it runs out of memory while analyzing, you
| would get the result "this program will OOM, but if you
| run me in a machine with 1 meg _less_ memory I will stop
| an odd number of cycles earlier and the program will
| _not_ OOM, but will instead halt successfully telling you
| that the next cycle would OOM." Suddenly your tests that
| you build with this thing are all flaky, because it didn't
| have the humility to be a heuristic and say "I don't know,
| this is too complex for me," and in the real world
| you are running on Linux and you don't know how many megs
| you really have available. Better just to not play any of
| that game.
|
| So yeah the problem with Turing machines is not that they
| have infinite memory, infinite memory is in fact a
| standard assumption that most programmers in GCed
| languages make when programming most of their programs.
| This is why "your problem isn't the garbage collector,
| it's that you're swimming in garbage" and all that.
|
| Put another way, Turing machines suck even if you enhance
| them to have 5 parallel tapes (God why do they not have
| parallel tapes by default, should have at least one for
| output, one for main memory, one for a stack, and some
| more as scratchpads) of bytes (seriously why are they
| still not bytes in textbooks) and enforce that the tapes
| each only have 1 billion symbols and any attempt to go
| off the edge crashes the Turing machine. You are joyous
| because it now only has finite memory, I can run a cycle
| detection with 10^{1 billion} nodes in the graph or
| whatever, so cycle detection is in principle solved! If
| you are as pragmatic as you say you are, you see that
| this is no help, halting is still effectively impossible.
| wizeman wrote:
| > Caveat, the thing that makes it decidable is not the
| fact that it was going to run on a computer with finite
| memory, the SMT solver will fail on many code chunks that
| can run on your computer.
|
| And what does that say about the problem, that current
| SMT solvers fail?
|
| It only implies that it's a difficult problem to solve
| (i.e. that the time complexity may be large, possibly),
| but it doesn't prove that it necessarily has to be that
| way.
|
| > (And that's more or less the claim of the halting
| theorem, not that nothing useful can be done, but that
| anything done can only be heuristic.)
|
| I don't think the Halting problem proves anything about
| the complexity of the problem or whether you can only
| apply heuristics and not solve the problem in a
| principled way. In fact, cycle detection algorithms work
| deterministically, without any use of heuristics.
|
| The proof of the Halting problem only works because in
| Turing machines, programs are allowed to consume memory
| infinitely without ever running into a cycle. If you
| don't allow that, then the Halting theorem doesn't say
| anything.
|
| > This kind of becomes a little more obvious when you
| port Turing's proof of the halting theorem to a "normal"
| programming language
|
| You mean, Turing-complete languages. In which case I
| agree, normal programming languages also make the
| simplifying assumption that the computer has infinite
| memory.
|
| Which is actually OK!
|
| That these programs are written in such languages doesn't
| prevent an algorithm from determining whether an
| arbitrary program written in such a language will halt or
| not, if it were to be executed in a machine with a finite
| amount of memory.
|
| > you would get the result "this program will OOM, but if
| you run me in a machine with 1 meg less memory I will
| stop an odd number of cycles earlier and the program will
| not OOM, but will instead halt successfully telling you
| that the next cycle would OOM."
|
| Yes, exactly!
|
| > Suddenly your tests that you build with this thing are
| all flaky, because it didn't have the humility to be a
| heuristic and say "I don't know this is too complex for
| me," and in the real world you are running on Linux and
| you don't know how many megs you really have available.
| Better just to not play any of that game.
|
| Actually, I would argue that it would be immensely useful
| to be able to run such a tool and that it would tell me:
|
| "Hey, your program will OOM even if you run it with 1 TB
| of memory!"
|
| Or even, say, 16 GB of memory.
|
| Of course, with this value being configurable. And also,
| take into consideration that you'd be able to know
| exactly why that happens.
|
| > So yeah the problem with Turing machines is not that
| they have infinite memory, infinite memory is in fact a
| standard assumption that most programmers in GCed
| languages make when programming most of their programs.
|
| Yes, and I see nothing wrong with that. Although I would
| like a tool like described above which would determine
| whether my program is buggy.
|
| > Put another way, Turing machines suck even if you
| enhance them to have 5 parallel tapes (God why do they
| not have parallel tapes by default, should have at least
| one for output, one for main memory, one for a stack, and
| some more as scratchpads) of bytes (seriously why are
| they still not bytes in textbooks) and enforce that the
| tapes each only have 1 billion symbols and any attempt to
| go off the edge crashes the Turing machine
|
| Sure, but why would you do that? When analyzing cycle
| detection algorithms (which assume a finite amount of
| possible states) you don't do any of that crap :)
|
| > You are joyous because it now only has finite memory, I
| can run a cycle detection with 10^{1 billion} nodes in
| the graph or whatever, so cycle detection is in principle
| solved! If you are as pragmatic as you say you are, you
| see that this is no help, halting is still effectively
| impossible.
|
| Yes, it's solved! Awesome!
|
| Now let's focus on how to reduce the time complexity,
| which is the _actual_ important problem!
|
| And don't assume that cycle detection needs to run a
| program step by step through all of the states of the
| program.
| Khoth wrote:
| > Put another way, Turing machines suck even if you
| enhance them to have 5 parallel tapes (God why do they
| not have parallel tapes by default, should have at least
| one for output, one for main memory, one for a stack, and
| some more as scratchpads)
|
| You answered your question as you asked it. If you juice
| up your Turing machine like that, it will still suck so
| much that nobody will actually write programs for it, but
| it will be more complicated to describe and analyse.
| kaba0 wrote:
| I think he/she was just "sarcastically" extending Turing
| machines to show that their limits apply even in practice.
|
| But for those who may not know, you can emulate an n-tape
| Turing machine with a single-tape one; they are
| computationally equivalent (and in fact, there is no
| stronger model of computability).
| wizeman wrote:
| > You answered your question as you asked it. If you
| juice up your Turing machine like that, it will still
| suck so much that nobody will actually write programs for
| it, but it will be more complicated to describe and
| analyse.
|
| Of course. Why would you do that? You don't have to.
|
| You can write a program in a Turing-complete language and
| yet analyze how it will run on a machine with a finite
| amount of memory.
|
| For example, when you analyze cycle detection algorithms,
| which assume that the program being analyzed runs on a
| machine with a finite amount of states (even if it is
| implemented in a Turing-complete language), you don't do
| any of that.
|
| And the currently-known algorithms are very simple to
| analyze.
| UncleMeat wrote:
| > I'd argue the complete opposite. It's very meaningful for
| me to distinguish whether a program works correctly,
| crashes due to running out of memory or loops forever.
|
| Great. And the entire field of static analysis is available
| for you to do this. But when discussing the formal
| consequences of the halting problem it is just completely
| pedantic to show up and point out that actually you can run
| Coverity and it'll spit out some pretty useful results on
| most programs.
| wizeman wrote:
| > Great. And the entire field of static analysis is
| available for you to do this. But when discussing the
| formal consequences of the halting problem it is just
| completely pedantic to show up and point out that
| actually you can run Coverity and it'll spit out some
| pretty useful results on most programs.
|
| I'm glad that my points were so clear and obvious to you!
| mannykannot wrote:
| > This actually ties into the other point I mentioned:
|
| >> How many good people were discouraged into going into
| this area of research because of this widespread belief
| that this pursuit is proven to be unachievable in the
| general case?
|
| Few if any, as those who are good enough have no difficulty
| understanding _both_ the seminal importance of Turing's
| work _and_ the realities of actual computing devices.
| wizeman wrote:
| I agree a bit with your point, but still, given how
| little it is usually emphasized that the Halting problem
| does not apply to finite-state machines, I'm not sure that
| even very smart people don't mistakenly end up believing
| that it does.
| mannykannot wrote:
| You appear to have a low opinion of other people's
| cognitive abilities. In that light, perhaps you should
| consider the possibility that your position, which
| strenuously attempts to imply that there is a serious
| cognitive dissonance here, could complicate understanding
| these matters for someone who is approaching them for the
| first time.
| lisper wrote:
| > How many good people were discouraged into going into this
| area of research because of this widespread belief that this
| pursuit is proven to be unachievable in the general case?
|
| You need to read this book:
|
| https://www.amazon.com/Quantum-Computing-since-Democritus-Aa...
|
| (The title notwithstanding, this book is about a lot more than
| just quantum computing. In particular, it is about
| computational complexity theory, which is exactly where your
| line of inquiry leads. It is a very productive area of
| research.)
| gernb wrote:
| AFAIK cycle detection does not tell you if a program will halt,
| only that a program has a cycle.
| while(random() != 0) { wait(1second); }
|
| will this halt?
|
| You can use cycle detection to reject programs that have cycles
| as "may halt" but not "will halt"
| rocqua wrote:
| Turing machines do not have a true random(). In general the
| spaces of deterministic and probabilistic programs are very
| different.
|
| Now if you were to use a pseudo-random generator, and presume
| that this is the only function sampling from it, then all of
| a sudden the halting problem for this program is well-
| defined. It also becomes a very simple question about the
| pseudo-random generator. If the random() function is truly
| random, then you are talking about a machine more powerful
| than a Turing machine.
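|
| A sketch of that reduction, with a toy 16-bit LCG standing in
| for random() (not any particular library's generator):
|     // Does `while (rng() != 0) { ... }` halt for a given seed?
|     function loopHalts(seed) {
|       const next = s => (25173 * s + 13849) & 0xFFFF; // toy LCG
|       const seen = new Set();
|       let s = seed & 0xFFFF;
|       while (!seen.has(s)) {
|         seen.add(s);
|         s = next(s);
|         if (s === 0) return true;  // rng() returned 0: loop exits
|       }
|       return false; // state revisited without a 0: loops forever
|     }
| The halting question has become a property of the generator's
| finite cycle - the "very simple question about the pseudo-random
| generator" above.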
| gweinberg wrote:
| It is not a simple question to answer. If you want to know
| whether there is a binary string that hashes to a
| particular value, it may be that the only way to prove that
| one exists is to keep testing strings until you find one
| that hashes to the correct value. And if it turns out that
| no such string exists, that may be impossible to prove. You
| can in principle prove that no such string below some
| particular finite length exists, but it could well be
| impossible to prove that no such string of any length
| exists.
| umanwizard wrote:
| Actually all programs halt, because eventually a component in
| your computer will wear out and it will crash.
|
| But often mathematical models are more informative and
| interesting than the concrete physical objects they abstract.
| wizeman wrote:
| > But often mathematical models are more informative and
| interesting than the concrete physical objects they abstract.
|
| Yes, I agree.
|
| But a model of a machine with finite states is more
| informative and interesting for applying it to real-world
| computers than a model of a Turing machine.
|
| Why? Because the first model tells you the Halting problem is
| decidable and the second one tells you it's undecidable. And
| as I mentioned, for programs running on real-world computers
| the Halting problem _is decidable_.
| UncleMeat wrote:
| > But a model of a machine with finite states is more
| informative and interesting for applying it to real-world
| computers than a model of a Turing machine.
|
| I don't agree _at all_. I do static analysis for my career.
| Trying to model the actual set of available memory
| configurations of a physical machine is going to scale way
| way way worse for most meaningful problems than approaching
| this the usual way.
| wizeman wrote:
| > I don't agree at all. I do static analysis for my
| career. Trying to model the actual set of available
| memory configurations of a physical machine is going to
| scale way way way worse for most meaningful problems than
| approaching this the usual way.
|
| I don't really understand your argument.
|
| I didn't say anything about how you need to solve
| problems like those described by the Halting problem and
| Rice's theorem for finite-state machines.
|
| So I'm not constraining how you would have to solve this
| issue. You can model the memory configuration if you
| want. But you don't have to model it in your algorithm.
| It's just that your algorithm is allowed to assume that
| the amount of memory available for the program being
| analyzed is finite (if it wants to).
|
| But anyway, the complexity / efficiency of an algorithm
| that solves the problem is a different issue than
| the decidability of the problem, and it's the latter that
| many people get wrong.
| UncleMeat wrote:
| > So I'm not constraining how you would have to solve
| this issue. You can model the memory configuration if you
| want. But you don't have to model it in your algorithm.
| It's just that your algorithm is allowed to assume that
| the amount of memory available for the program being
| analyzed is finite (if it wants to).
|
| What I am saying is that in practice, for the huge
| majority of interesting problems, it is not useful to
| finitize the actual program; instead you abstract over
| a model of the program that assumes no meaningful resource
| constraints. Wandering off down a road of looking for
| efficient algorithms that take advantage of the _actual_
| finite nature of real programs will only harm you in the
| enormous majority of cases.
|
| > But anyway, the complexity / efficiency of an algorithm
| that solves the problem is a different issue that than
| the decidability of the problem, the latter of which many
| people get it wrong.
|
| Again, people only "get it wrong" if you are being
| maximally pedantic in a way that provides negative value
| to thinking about these problems.
| isaacfrond wrote:
| Predicting whether a given program (Turing or otherwise) will
| halt within t steps, _where t is given in binary_, is EXPTIME-
| complete.
|
| By the time hierarchy theorem, the class P (polynomial time
| algorithms) is a strict subset of EXPTIME.
|
| Accordingly, this proves that on your suggested finite machine,
| the halting problem cannot be solved by a polynomial time
| algorithm.
| wizeman wrote:
| > EXPTIME complete
|
| Isn't EXPTIME completeness only defined for Turing machines?
|
| Which again, just proves all of my points.
|
| For all we know, if you've reached the conclusion that a
| problem is EXPTIME-complete, this conclusion could be just
| as false as the Halting problem and Rice's theorem are for
| real-world computers.
| danbruc wrote:
| I think you are misunderstanding the point. Deciding
| whether a finite Turing machine - a.k.a. a real computer -
| halts is EXPTIME-complete, i.e. it would already take
| forever to decide even if you had access to a real Turing
| machine but you actually only have access to less powerful
| real computers.
| evouga wrote:
| There's a rather big gap between "exponential time" and
| "forever"!
|
| I routinely solve instances of problems in EXPTIME to
| solve practical problems with no difficulty...
| wizeman wrote:
| > Deciding whether a finite Turing machine - a.k.a. a
| real computer - halts is EXPTIME-complete
|
| Can you point me somewhere that proves this exact
| statement you're making? Or at least, explains why it is
| true.
|
| I'm interested in reading about it.
|
| I mean, assuming you are correct (and I have no reason to
| believe you aren't) it still doesn't guarantee that the
| problem is intractable (as algorithms for EXPTIME-complete
| problems can still run very quickly in practice), but yes, I
| think it would suggest that it is intractable (for real-world
| applications).
| danbruc wrote:
| Let's see, I just looked that up before. Let us for
| simplicity assume that in every step only one bit of
| state gets accessed; this will only be wrong by some
| constant factor. With that, running a Turing machine for
| at most n steps will ensure that it does not use more
| than n bits of memory, therefore solving the halting
| problem for a machine with n bits is at least as hard as
| answering the question whether a Turing machine halts
| within n steps. This is in EXPTIME [1][2] given that the
| input n is encoded in log(n) bits. So that seems a lot
| less bad than at first sight, but on the other hand
| this uses a terrible bound for the number of steps.
|
| [1] https://cs.stackexchange.com/questions/112684/halting-proble...
|
| [2] https://en.wikipedia.org/wiki/EXPTIME
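|
| A brute-force sketch of that decider (with hypothetical
| `step`/`halted` functions over a machine whose entire state
| fits in nBits bits):
|     // At most 2^nBits distinct states exist, so a machine that
|     // runs that many steps without halting has repeated a state
|     // and will loop forever.
|     function haltsFinite(start, step, halted, nBits) {
|       const bound = 2n ** BigInt(nBits); // exponential in nBits
|       let state = start;
|       for (let t = 0n; t < bound; t++) {
|         if (halted(state)) return true;
|         state = step(state);
|       }
|       return false; // pigeonhole: a state repeated, never halts
|     }
| The procedure is trivial; the cost is the 2^nBits bound, which
| is where the exponential blow-up comes from.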
| wizeman wrote:
| > Let us for simplicity assume that in every step only
| one bit of state gets accessed, this will only be wrong
| by some constant factor. With that running a Turing
| machine for at most n steps will ensure that it does not
| use more than n bit of memory, therefore solving the
| halting problem for a machine with n bit is at least as
| hard as answering the question whether a Turing machine
| halts within n steps.
|
| I'm following so far.
|
| > This is in EXPTIME [1][2] given that the input n is
| encoded in log(n) bit.
|
| Perhaps I'm reading this incorrectly, but these proofs
| seem to assume that you're simulating the machine running
| step by step?
|
| Is it proven that the only way to solve the bounded
| Halting problem is by running the program step by step?
|
| If not, it seems that this is only proving an upper
| complexity bound (i.e. that the problem is not more
| complex than EXPTIME), not a lower bound (i.e. that the
| problem cannot be solved with less complexity than EXPTIME).
| swid wrote:
| Here is a good article you might be able to understand
| that pretty much provides an answer to your question.
|
| https://www.quantamagazine.org/the-busy-beaver-game-illumina...
| wizeman wrote:
| No, it does not provide an answer. What does that have to
| do with analyzing a finite-state machine in EXPTIME-
| complete time?
|
| The busy beaver problem, again, assumes it is running on
| a Turing machine with infinite memory.
|
| Don't confuse a program defined using a finite number of
| states with a machine that can only have a finite number
| of states. It's just another example of the points I'm
| making about how a theoretical mathematical problem is an
| interesting curiosity and might even be an interesting
| problem to solve but it does not necessarily apply to the
| real world. And how everyone seems to think that this
| mathematical problem applies to programs running in real-
| world computers.
|
| Someone else mentioned the busy beaver problem and I've
| given a more complete answer there:
|
| https://news.ycombinator.com/item?id=33744326
| swid wrote:
| Both people are me. I think there is an answer in that
| article, but I can't unpack it for you. It explains why
| what you are asking about is so impractical, and can
| point you to topics to gain a better understanding.
|
| In the real world, a program will either halt, loop, or
| crash, but this doesn't help us if a 6-instruction
| program can take longer than the age of the universe to
| run while still terminating, no matter how fast you go
| through its states. 64 MB leaves a lot of room for unique
| states without a cycle.
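|
| (For scale: 64 MB is about 5 x 10^8 bits, so such a memory
| alone admits up to 2^(5 x 10^8) distinct states - compare the
| roughly 10^80 atoms in the observable universe.)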
| wizeman wrote:
| > In the real world, a program will either halt, loop, or
| crash, but this doesn't help us if a 6 instruction
| program can take longer than the age of the universe to
| run while still terminating. No matter how fast you go
| through its states. 64mb leaves a lot of room for unique
| states without a cycle.
|
| Why do you assume that you have to go through all the
| states of the program to determine whether it will run
| into a cycle?
| swid wrote:
| The foundation of cryptography is that there exist
| algorithms for which the only way to beat them is to
| iterate over a computationally infeasible number of
| states to be able to decrypt something.
|
| To not believe my assumption would require you to believe
| that such algorithms do not exist.
| wizeman wrote:
| Agreed.
|
| Since the assumption has not been proven, the foundation
| of cryptography is still on somewhat shaky ground, as
| none of those algorithms are proven to be unbeatable (the
| one-time pad being the exception, although it's not very
| useful in practice).
|
| It's probably also why weaknesses in cryptographic
| algorithms are still being discovered every now and then,
| unfortunately.
| kaba0 wrote:
| While you are right that our hardware with a finite amount of
| RAM is not a Turing machine, you should not forget that it is
| usually not just a CPU and a RAM module with no side effects.
| Does writing to an HDD extend your tape? Sending a POST request
| over the internet and storing data on another computer, hell,
| controlling some physical machine pouring concrete into a hole?
| These all extend the same tape, effectively turning the whole
| universe into one. And you might even go as far as to claim
| that the universe itself is not infinite - and you would be
| right again, there is a finite number of atoms - but I'm not
| really sure it is a good rebuke of the Halting problem that "we
| can brute force our way out of it".
|
| I think that's what makes computability theory interesting,
| especially if you add that our very Mathematics "break down" at
| a "trivially" small scale and that Turing machines will happily
| go on and on, literally being mathematically reasoned about,
| because we hit the edges of it sooner!
| wizeman wrote:
| > effectively turning the whole universe into one
|
| Not the whole universe, exactly. Just the parts that are
| possible to access :)
|
| > I'm not really sure it is a good rebuke of the Halting
| problem that "we can brute force our way out of it".
|
| Why do you think the only way to solve the Halting problem on
| a model of a machine with finite memory is to brute-force our
| way out of it?
|
| Another person mentioned the same thing, that it would take
| an exponential number of steps (in the number of bits in the
| state space) to simulate running a program.
|
| But why are you both assuming we can only solve the problem
| by simulating running the program and not by analyzing the
| program description?
| kaba0 wrote:
| Now you are just back at the Halting theorem, and thus
| Rice's theorem. The only question here is whether a
| computer is a Turing-machine, and frankly it is a weird
| hill to die on. With arbitrary side effects, you are back
| to "can the heat death of the universe be considered O(1)",
| which is, well an interesting thought, but that just
| demonstrates infinity's "size".
| wizeman wrote:
| > Now you are just back at the Halting theorem, and thus
| Rice's theorem.
|
| How so?
|
| We know the halting problem and the properties in Rice's
| theorem to be decidable for finite-state machines, so how
| does that prove anything about the time complexity of
| analyzing whether the program halts?
|
| > The only question here is whether a computer is a
| Turing-machine
|
| For me that's an absurd question, as it's impossible to
| build a machine as powerful as a Turing machine.
|
| > With arbitrary side effects, you are back to "can the
| heat death of the universe be considered O(1)", which is,
| well an interesting thought, but that just demonstrates
| infinity's "size".
|
| Sorry, I don't understand this argument :)
| kaba0 wrote:
| As mentioned by others, a Turing machine doesn't/can't
| use an infinite amount of memory; even if all it does is
| step right and write a 1, it will use n bits of memory in
| n steps, so I don't see how our "finite" universe would
| be a limiting factor here that would force our computers
| to be FSAs.
|
| My point regarding the heat death of the universe is that,
| ad absurdum, you can only use that amount of energy for
| your calculations; if every "Turing machine" step takes a
| fixed amount of energy, that is indeed a finite
| number, and thus the O notation applies to it, making
| everything O(1), QED. But... I don't see why the
| Turing machine model would break down as an analogy to
| existing computers, programs, hell, real life (as it is one
| way to represent everything computable).
| danbruc wrote:
| There are so many bits of state in a modern CPU that you are
| never going to solve the halting problem in practice even for a
| computer with a CPU only but no RAM or disk. And once you add
| some RAM or - heaven forbid - a disk with a couple of terabytes
| of capacity, you are far inside never never never never never
| ever territory.
| wizeman wrote:
| > There are so many bits of state in a modern CPU, you are
| never going to solve the halting problem in practice even for
| a computer with a CPU only but no RAM or disk. And once you
| add some RAM, or heaven forbid, a disk with a couple of
| terabyte of capacity, you are in never never never never
| never ever territory.
|
| Is that actually proven?
|
| Or is it possible that in fact, there might be an efficient
| algorithm for solving the halting problem in practice for
| computers, but it's just that we haven't discovered it yet?
|
| Just because the state space is big doesn't mean that there
| aren't shortcuts to figure out in a much more efficient way
| whether a program loops forever.
| rocqua wrote:
| It seems like we are effectively asking about the
| complexity of the halting problem of Turing machines with
| state space of N bits.
|
| It seems incredibly likely that this question has a
| complexity of O(exp(N)) which very quickly makes the
| halting problem infeasible to solve for modern computers.
| wizeman wrote:
| > the halting problem of Turing machines with state space
| of N bits
|
| That's an oxymoron, because Turing machines have a state
| space of infinite bits by definition. I understand you
| mean "deterministic finite-state machines" rather than
| "Turing machines".
|
| > It seems incredibly likely that this question has a
| complexity of O(exp(N)) which very quickly makes the
| halting problem infeasible to solve for modern computers.
|
| I mean, I agree that it's likely, but again, is this
| proven?
|
| I'm not aware that it is and therefore I think this is
| actually missing many of the points I mentioned.
|
| It could be false for all we know.
| rocqua wrote:
| > I understand you mean "deterministic finite-state
| machines" rather than "Turing machines".
|
| I meant "Turing machines with X states and a tape limited
| to Y bits, with X + Y = N". You can, of course, cast
| those as Deterministic Finite State Machines, but that
| will require incredibly many states.
| swid wrote:
| The busy beaver problem shows how many steps a program can
| take given a certain number of Turing machine states. We
| don't actually know the value of that number past five I
| think, but it already shows us brute force is out of the
| question.
|
| Could we solve it without brute force? Well, there are
| plenty of math problems we can't prove; we can write a
| program that only halts if a counterexample is found, and we
| would be unable to prove whether the program halts. We can
| also write a program that will halt if it can prove ZF set
| theory is false, but we already know it's impossible to prove
| that (it is only possible to prove it is false). So
| symbolically deciding if a program halts is also impossible
| in the general case.
| wizeman wrote:
| > we can write a program that only halts if counter
| example is found, and we would be unable to prove if the
| program halts.
|
| Yes, but again, you're missing my point: you _would_ be
| able to prove if the program halts.
|
| It's just that in this particular case it would not
| necessarily be useful, because it just might happen that
| to solve the mathematical problem it would require more
| memory than you have available.
|
| So if the halting detector says: "the program halted due
| to running out of memory", the results would be
| inconclusive for answering a difficult mathematical
| problem.
|
| But that's not what most programs do, in fact, the vast
| majority of programs _would_ benefit immensely from
| knowing whether they will exceed some finite memory
| limit.
|
| > We can also write program that will halt if it can
| prove ZF set theory is false, but we already know it's
| impossible to prove that (it only possible to prove it is
| false).
|
| The same point above applies.
|
| > So symbolically deciding if a program halts is also
| impossible in the general case.
|
| No, it's not. You completely missed my point. It is 100%
| decidable. It's only undecidable if you're imagining
| running this program on an imaginary machine which cannot
| physically be constructed.
| swid wrote:
| You asked if we could efficiently decide whether a program
| overflows a computer (without actually overflowing it), so I
| am showing that this is not possible in general due to the
| limitations of math - both of our current understanding, and
| because math itself has limits (Godel's incompleteness
| theorem).
|
| Attempting to overflow it is impractical even at CPU cache
| memory sizes, no hard drive needed. No matter how fast
| you go through states, there are too many possible states
| in the CPU, and too many states even a small program can
| iterate through while still eventually terminating.
| wizeman wrote:
| > You asked if we could solve if it overflows a computer
| efficiently (without overflowing it), so I am showing
| that is not possible generally due to limitations of
| math; both our current understanding but also because
| math also has limits. (Godel's incompleteness theorem.)
|
| What limitations of math exactly? What you're talking
| about is not related to efficiency of solving the Halting
| problem on a machine with a finite number of states.
|
| > Attempting to overflow it is impractical using cpu
| cache memory sizes, no hard drive needed. No matter how
| fast you go through states, there are too many possible
| states in the cpu, and too many states even a small
| program can iterate through while still eventually
| terminating.
|
| You are assuming that you have to simulate going through
| all the running states of a program to determine whether
| it halts or not.
|
| Why do you assume that you have to run a complete
| simulation of the program to analyze it?
| swid wrote:
| Because some problems we lack the ability to solve any
| other way. There are at least three problems which are
| famous, as described in the article I linked.
|
| See if you follow this one, for a real computer:
|
| Suppose I pick an n-bit key and encrypt a phrase with it.
| I will provide you the plaintext phrase and either a
| real or fake ciphertext. You have to determine if a
| program which loops through all the keys will decrypt the
| ciphertext into the phrase.
|
| Either you know weaknesses in the crypto system, or you
| have no choice but to iterate through each key to see if
| the output matches the phrase.
|
| There is a relationship with proof-of-work crypto as
| well. Even all the Bitcoin mining has trouble finding
| hashes with a certain number of leading zeroes; add a few
| more zeros and no one will ever find a working seed
| again. There is no way to speed up the process beyond
| iterating over the seeds.
|
| You can tie this to the halting problem by just saying to
| go back to key 0 once the key space is exhausted.
| wizeman wrote:
| > Either you know weaknesses in the crypto system, or you
| have no choice but to iterate though each key to see if
| the output matches the phrase.
|
| > There is a relationship with proof of work crypto as
| well. Even all the Bitcoin mining has trouble finding
| hashes with a certain number of leading zeroes, add a few
| more zeros and no one will ever find a working seed
| again.
|
| Yup, I know. But here's the part I think you're missing:
|
| None of those crypto systems you're mentioning are proven
| not to have weaknesses. In fact, weaknesses in hashing
| and encryption algorithms are encountered every now and
| then, even for those that have been considered state-of-
| the-art.
|
| So yes, I agree that in practice we don't know how to
| solve this problem efficiently yet. Which means it's
| _currently_ impractical, and would _currently_ require
| brute-forcing through all the states.
|
| But I haven't seen a proof that it's not possible to find
| a practical algorithm.
| swid wrote:
| There is no proof of that for our crypto systems. There
| is a proof that you can't prove ZF consistent, but you also
| probably can't prove it false. If you hide that question
| in the thing you iterate over, you can take it on faith
| you won't find a counterexample and say the program won't
| terminate; but you haven't, and in fact cannot, prove it.
|
| There are also lesser problems that have been around for a
| while which we haven't been able to prove, but might be
| provable... you can iterate over numbers looking for a
| counterexample to the Collatz conjecture... I think
| that's the best example of a simple program that is
| outside our current knowledge of math (better than the
| crypto one), but maybe someday we will have an answer.
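|
| For concreteness, a sketch of that search as a plain JS loop
| (just the standard Collatz iteration; the function name is
| made up):
|
|   // Whether this loop terminates for every positive BigInt x
|   // is exactly the Collatz conjecture.
|   function collatzReaches1(x) {
|     while (x !== 1n) {
|       x = x % 2n === 0n ? x / 2n : 3n * x + 1n;
|     }
|     return true;
|   }
|
|   // collatzReaches1(27n) returns true (after 111 steps), but
|   // no one can prove the loop terminates for every input.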
| wizeman wrote:
| Agreed!
| [deleted]
| svat wrote:
| This was very satisfying. The format works well, and makes sure
| you as reader are doing at least a minimum amount of thinking
| about the topics being presented. Even the pops and dings (if you
| have sound on) are fun :) This seems like a good format, and I
| see some other submissions from the site earlier:
| https://news.ycombinator.com/from?site=tigyog.app
|
| My only quibble with this one is with step 19 ("hangIf3") which
| relies on "infinity is not considered even" -- that is a
| distraction IMO, and if I understand correctly (have not thought
| about this carefully) the example could be replaced with
| something that involves an actual odd number.
| JadeNB wrote:
| > My only quibble with this one is with step 19 ("hangIf3")
| which relies on "infinity is not considered even" -- that is a
| distraction IMO, and if I understand correctly (have not
| thought about this carefully) the example could be replaced
| with something that involves an actual odd number.
|
| Indeed, you could just replace haltIf3or7 by haltIf3or5or7 (or
| whatever). (And, of course, whether or not infinity is
| _considered_ even cannot be deduced from the information given,
| so relying on this technicality here I'd argue is beyond
| distracting and into the realm of so underspecified that it
| might be wrong. Or we could raise the same objection mentioned
| earlier, that it's not specified what kind of object `n` is,
| and numeric types often have only finitely many inhabitants
| ....)
| jamesfisher wrote:
| Oh hi, author here! I tend to agree on `hangIf3`. I have a
| habit of mixing in other kinds of puzzles. Sometimes they're
| fun, but sometimes they're irrelevant to the main point ...
| I'll rethink this one.
|
| p.s. thanks for the compliments on the format! If you (or
| anyone else) wants to create tutorials in this format,
| https://tigyog.app/ is a full platform for that! WYSIWYG editor
| and all :-)
| psychoslave wrote:
| Could it use more readable identifiers like `hang*if*3`?
|
| This stays valid JS, avoids the mixed case found in usual
| code in the wild, and sticks to ASCII. To my mind the mixed
| case feels like avoidable visual clutter, all the more in a
| didactic work.
|
| Apart from that, it was nicely done all the way;
| congratulations, and thanks for the pleasant moment.
| alcover wrote:
| for (let i = 0; i <= i; i++); // hmm
|
| If _canBeRegex_ is smart enough to parse `return
| s.length === 3;`, it could also detect that the for-loop
| has no side effect and skip it...
| revskill wrote:
| Thanks! This is inspirational. Could I know which
| libraries/frameworks you use for the tutorial? Thanks.
| schoen wrote:
| The author said elsewhere in this thread that he uses his own
| invention, https://tigyog.app/.
| constantcrying wrote:
| I read through it, and I think the idea of having regular
| attention checks is quite interesting (personally I found
| them a bit too frequent). But there are also things like:
|
| >If you implement halts using canBeRegex, you've got a proof by
| contradiction that canBeRegex can't exist.
|
| This is plainly false. I can also implement halts using
| addition; that does not imply that addition cannot exist.
|
| The author is clearly writing for a more general audience and _I_
| know what he means. But I think that if you are writing for a
| more general audience you should be _more_ careful in your
| language, and I think the article is somewhat lacking there.
| Gehinnn wrote:
| This is not false.
|
| If you can implement halt using addition, addition cannot
| exist.
|
| Ex falso quodlibet - if you can prove a single false
| statement, every statement is true.
| tromp wrote:
| The author uses "implement halts using X" in the sense of
| reducing the halting problem to the problem of X
| (technically, a Turing reduction).
|
| Not in the sense of solving the halting problem with a
| function which has an occurrence of X.
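|
| Concretely, such a reduction would look something like this
| sketch (the string-wrapping details are a guess in the spirit
| of the article, not its actual code):
|
|   // Hypothetical reduction: halts in terms of canBeRegex.
|   function halts(code) {
|     const wrapped = `
|       function f(s) {
|         (${code})();            // run the program in question
|         return s.length === 3;  // behavior a regex can express
|       }`;
|     // If code halts, f just checks for length-3 strings, which
|     // a regex can do; if code hangs, f never returns on any
|     // input, so it is not equivalent to any regex.
|     return canBeRegex(wrapped);
|   }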
| constantcrying wrote:
| Here is a solution of the halting problem which is
| implemented using addition:
|
| function haltWithAdd(code) {
|   var a = 1 + 2;
|   return halt(code);
| }
|
| This clearly solves the halting problem (we assumed the
| existence of halt as the first step of our proof by
| contradiction). And it clearly uses addition (surely you can
| find where it does). By the logic (as written) of the article
| the implication is that '+' does not exist.
| akdas wrote:
| > This clearly solves the halting problem (we assumed the
| existence of halt as the first step of our proof by
| contradiction).
|
| Implementing the halting problem using addition does not
| assume the existence of halt. It assumes the existence of
| addition, and you use that to show that halt must also
| exist. And that's not possible to do.
|
| To implement halts using addition, you need to (as sketched
| after this list):
|
| - Take the input to halts, namely the source code to a
| program.
|
| - Convert that into an input to addition, namely two
| numbers.
|
| - Call into addition, which adds the numbers.
|
| - Use the result of that addition to infer whether the
| original program halts or not.
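|
| In code, those four steps would look something like this
| sketch (the encode/decode helpers are hypothetical; the last
| step is exactly the part that cannot be written):
|
|   function halts(code) {
|     // Take the source code of a program as input and convert
|     // it into an input to addition (hypothetical encoding).
|     const [a, b] = encodeAsNumbers(code);
|     // Call into addition, which adds the numbers.
|     const sum = a + b;
|     // Use the result to infer whether the program halts.
|     // There is no way to recover halting behavior from a sum,
|     // which is the step that fails: halts does not reduce to
|     // addition.
|     return decodeHaltingAnswer(sum);
|   }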
| constantcrying wrote:
| Why are you debating the validity of the argument?
|
| The code very clearly:
|
| - is an implementation of the halting problem. If halt
| existed, it would undoubtedly solve the halting problem
|
| - it uses '+'
|
| You are pretending I didn't know what the author meant.
| The point is that what the author _says_ is simply not
| true, if you take the statement as written. What he wants
| to show is that if canBeRegex is computable then halt is
| computable, but that isn't what he is saying.
| gweinberg wrote:
| You're not getting it. A "CanBeRegex" function which
| (always) correctly indicates whether a code block could be
| replaced by a regex MUST be able to solve the halting
| problem. The same is not true for addition.
| constantcrying wrote:
| You are not getting it. What you are saying _is not_ what
| the author says.
|
| I know what the author _wants_ to say, but it isn't what
| he actually says.
| Gehinnn wrote:
| If you assume that halt does exist to make your solution
| valid, then you can derive any statement (including that
| addition does not exist), because that assumption is false.
|
| In general: Let P be a statement. Let's assume that the
| Turing machine T solves the halting problem - let's call
| this assumption H.
|
| Now assume P is false. We have a contradiction - H says
| that T solves the halting problem, but we know that there
| is no Turing machine that can solve the halting problem!
| Thus P must be true!
|
| Notice that P is arbitrary.
|
| Plainly speaking: if you assume that the halting problem
| can be solved, I can formally prove to you that addition
| does not exist (i.e. there is no operation + such that it
| forms a group on Z).
|
| > By the logic (as written) of the article the implication
| is that '+' does not exist.
|
| This is actually true. With that strong assumption, + does
| not exist.
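|
| For what it's worth, the principle of explosion is easy to
| state formally; a tiny Lean sketch (SolvesHalting is just an
| abstract placeholder proposition, not a formalization of
| Turing machines):
|
|   -- From a hypothesis and its negation, any proposition P
|   -- follows, including "addition does not exist".
|   example (P SolvesHalting : Prop)
|       (h : SolvesHalting) (hno : ¬SolvesHalting) : P :=
|     absurd h hno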
| constantcrying wrote:
| I am at a total loss here. Do you not understand that _it
| is the point_ that my reasoning is invalid?
|
| Why are you pretending I do not understand the basics of
| logic while missing the actual argument by a mile?
| Gehinnn wrote:
| Your reasoning is valid, and I do have the impression
| that you should refresh your knowledge about logic.
| constantcrying wrote:
| As a reminder: a proof by contradiction works by assuming
| first the negation of a given statement. If from that you
| can deduce a falsehood you know that statement is true.
|
| In the article the author says: "If you implement halts
| using canBeRegex, you've got a proof by contradiction
| that canBeRegex can't exist."
|
| Which is plainly a false statement. What he means is the
| following statement: "If you show that if canBeRegex is
| computable then halts is computable, you've got a proof
| by contradiction that canBeRegex can't exist."
|
| Those two statements are clearly not equivalent. And only
| the second one is true.
| daveguy wrote:
| The answer to at least one of these is "Yes, it solves the
| halting problem."
|
| This may be a semantic issue, where the author is asking if it
| accurately predicts halting for a single input.
|
| But that is _not_ the definition of the "halting problem".
|
| The format is effective and interesting. But it does need some
| edits/clarifications.
___________________________________________________________________
(page generated 2022-11-25 23:00 UTC)