[HN Gopher] New math book rescues landmark topology proof
___________________________________________________________________
New math book rescues landmark topology proof
Author : theafh
Score : 246 points
Date : 2021-09-09 16:00 UTC (6 hours ago)
(HTM) web link (www.quantamagazine.org)
(TXT) w3m dump (www.quantamagazine.org)
| leephillips wrote:
| If I understand this article, it's about a 500-page book that is
| devoted to one proof of one theorem in topology. I find that
| amazing.
|
| And cheers to _Quanta Magazine_ for regularly publishing
| popularizations of research mathematics. I know many, at times,
| take issue with their simplifications and framing, but they're
| trying, where almost no one else with their reach covers these
| areas at all.
| liversage wrote:
| It's amazing how math can require so much work for what appears
| to be a single result. But I once spent an entire semester
| proving a single mathematical theorem, and to do that I had to work
| through an entire textbook, so I don't think it's that unusual.
| PartiallyTyped wrote:
| I am not a mathematician, though I do very applied math (ML).
| I took a course this semester that is intended for the Pure Math
| MSc, called Advanced Vector Spaces, having only done some
| linear algebra and calculus at the undergraduate level, plus some
| abstract algebra and some geometric algebra.
|
| I am consistently in awe of how well mathematicians have
| stacked layers of abstraction one on top of the other, and
| how many different ideas end up being very related to one
| another. Maybe I am romanticising it because I
| regret not going for pure math, but there is beauty in all
| those abstractions that fit together so nicely.
| kevinventullo wrote:
| As a former mathematician, I would say the stacked layers
| get a lot messier closer to the cutting edge.
| taeric wrote:
| I realize I have little chance of actually being able to
| understand all of it. Still, I'd gladly have bought it at a
| lower cost. :(
| mindcrime wrote:
| I wonder how many other results in maths require an entire
| book-length treatment (or multiple book-length treatments)? The
| one that jumps to mind is Godel's Incompleteness
| Theorem(s)[1]. I know there are at least a couple of complete
| books dealing exclusively with this result.
|
| [1]:
| https://en.wikipedia.org/wiki/G%C3%B6del%27s_incompleteness_...
| srcreigh wrote:
| I suspect someone could write a book (or an HN comment at
| least) about the popular confusion about Godel's 1st
| incompleteness theorem. :-)
|
| In fact we (my brother and I) agreed that it bears a striking
| resemblance to Turing's halting problem: They both seem to
| show what computers or math can't do, but in fact both
| theorems only reveal limitations of certain models of
| computers and/or math. We both have undergrad degrees in math
| fwiw.
|
| Both theorems use a clever logical trick. For Godel's
| theorem, it is an encoding of a sentence asserting its own
| unprovability (a cousin of "This sentence is false"). For
| the halting problem proof, Turing fixes the supposed halting
| decider, builds a program that asks the decider what it will
| answer about that very program, and then does the opposite,
| forcing the decider to give the wrong answer.
|
| Would the proof completely fall apart if the algorithm could
| update itself to handle the new information? The only thing
| stopping it is that in the formulation of the problem, the
| algorithm is assumed to not be able to change with new
| information (i.e. it's not an "online" algorithm). Similarly,
| what if the model of math had mechanisms for recognizing
| paradoxes, or circular logical requirements, etc. I don't
| think either of those theorems would work anymore.
|
| Anyway, I'm not saying the halting problem would be easy to
| compute with the right model. If we had an algorithm that
| solved the halting problem, we could solve tons of math
| problems easily. We could write code that enumerates a
| countably infinite sequence, checks each element for some
| condition, and then make inferences based on whether that
| code halts. For example, we could settle the Collatz
| conjecture by writing a loop that searches for a number whose
| sequence never reaches one (using the halting algorithm as a
| subroutine to test each candidate), and then asking whether
| that search loop ever halts.
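|
| To make that concrete, here is a minimal Python sketch of the
| reduction. halts() stands for the assumed halting decider; it
| cannot actually be implemented (that is the whole point), so
| the names and the stub below are purely illustrative:
|
|     def halts(f, *args):
|         # assumed decider: True iff f(*args) terminates.
|         # Turing's theorem says this cannot be written.
|         raise NotImplementedError
|
|     def collatz_reaches_one(n):
|         # runs forever iff n is a Collatz counterexample
|         while n != 1:
|             n = 3 * n + 1 if n % 2 else n // 2
|
|     def search_for_counterexample():
|         n = 1
|         while True:
|             n += 1
|             if not halts(collatz_reaches_one, n):
|                 return n   # found a counterexample, so stop
|
|     # If halts() existed, one call would settle the conjecture:
|     #     conjecture_true = not halts(search_for_counterexample)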
|
| My punchline is, I think it's beautiful that even the
| undecidable, uncomputable problems are only that way because
| we are lacking a better model. Or at least that's my current
| understanding of the situation :-)
| geofft wrote:
| Suppose the algorithm could update itself with new
| information. Then you can formulate it in one of two ways:
|
| 1. The process of updating the algorithm can, itself, be
| described by a Turing machine. Think of it like an emulator
| / debugger (since a universal Turing machine exists, i.e.,
| you can write an evaluator for a Turing machine as a Turing
| machine). The outer Turing machine can run the inner one
| partially, realize it needs changes, update it, and then
| continue running it and return its output.
|
| But then the outer Turing machine, which is self-contained,
| is itself subject to the halting problem, and you can
| construct the counterexample based on it. You've just made
| the machine bigger.
|
| 2. The process of updating the algorithm cannot be
| described by a Turing machine. Perhaps it involves some
| sort of higher form of intelligence which itself cannot be
| represented as a Turing machine (so, if you want to say "a
| human updates it," you're working on the assumption that
| the human brain cannot be implemented/emulated - however
| slowly - on a Turing machine, and the decisions of how to
| update the algorithm require this supra-Turing
| computational power).
|
| But then, as before, the actual machine is this combination
| of the extra intelligence and the Turing machine. You've
| made some stronger non-Turing machine to solve the problem,
| and the statement "No Turing machine can determine whether
| any arbitrary Turing machine halts" still holds true.
| You've introduced a new class of machines with additional
| power over Turing machines. And the halting problem now
| applies to these machines when evaluating whether a machine
| _of their own class_ halts, even though they can determine
| whether simpler machines halt. Your human-Turing mechas can
| solve the halting problem for Turing machines, but they
| cannot solve it for other human-Turing mechas. See
| https://en.wikipedia.org/wiki/Turing_jump
|
| In fact, this model, applied to Godel's theorem, was
| explored in Turing's own Ph.D. thesis: https://en.wikipedia
| .org/wiki/Systems_of_Logic_Based_on_Ordi...
| srcreigh wrote:
| Thanks for the response! I will definitely read about Turing
| jumps :)
|
| I'm not sure I understand your 1. Reading the Halting
| problem undecidability proof, it goes like this:
|
| Suppose machine H can solve the halting problem. We
| construct a special machine M which calls H(M) and does
| the opposite: M loops forever if H says it halts, and
| halts if H says it doesn't. So H(M) is wrong either way,
| hence H doesn't exist.
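|
| In code form (H is the assumed decider, so this is a sketch
| of the argument rather than something you could actually
| run):
|
|     def H(machine):
|         # assumed: returns True iff machine() halts.
|         # The theorem says no such function can be written.
|         raise NotImplementedError
|
|     def M():
|         if H(M):             # ask H about ourselves...
|             while True:      # ...and do the opposite: loop
|                 pass
|         # if H said "M does not halt", M halts right here
|
|     # Either answer H gives about M is wrong, so H can't exist.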
|
| On the other hand, if H could self-modify, we would have
| a sequence H_0,H_1,... of machines based on its
| modifications. I.e., M is written with H_i, but later we're
| calling H_j(M). No guarantee that our negation still
| works.
| geofft wrote:
| How do you generate the sequence H_0, H_1, ...?
|
| Specifically, is there a Turing machine G that can
| generate H_0, H_1, ..., onwards, perhaps taking in some
| input along the way?
|
| If so, then you can construct a self-contained Turing
| machine Z that takes a machine M as input, which works as
| follows: it calls G to generate H_0, runs H_0 for a bit
| on M, then calls G to generate H_1, runs H_1 for a bit on
| M, and so forth. This is a _single_ Turing machine which
| does not need external modification. Internally, it works
| by having an inner Turing machine that it modifies. But Z
| itself doesn't change.
|
| Then you can define M = "if Z(M): loop", call Z(M), and
| get your contradiction back.
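|
| A rough Python sketch of that Z, where G and run_for_steps
| are hypothetical helpers standing in for "generate the next
| H_i" and "run a machine with a step budget":
|
|     def Z(M):
|         i = 0
|         while True:
|             H_i = G(i)   # produce the i-th revised decider
|             verdict = run_for_steps(H_i, M, budget=2 ** i)
|             if verdict is not None:   # H_i answered in time
|                 return verdict
|             i += 1       # otherwise revise and try again
|
| Z itself never changes; all the "self-modification" happens
| to the inner machines it simulates, so the usual diagonal
| argument applies to Z unchanged.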
|
| If no, then you've built a more-powerful-than-a-Turing-
| machine machine of some sort.
| thethirdone wrote:
| > Would the proof completely fall apart if the algorithm
| could update itself to handle the new information? The only
| thing stopping it is that in the formulation of the
| problem, the algorithm is assumed to not be able to change
| with new information (ie it's not an "online" algorithm).
|
| Simply put, the halting problem means that you cannot make a
| program (a static piece of code) that can determine in a
| finite amount of time whether another program halts when run
| on some input. I.e., program P takes program H and input I
| and answers whether H run on I halts. I honestly don't know
| what part of the computational model could be changed to
| make that not true (besides oracles, or the Church-Turing
| thesis being wrong).
|
| Also, I don't think "online algorithm" means what you think
| it means. Online algorithms have no more computational
| power than offline ones; they just can get started with
| only partial information.
|
| > Similarly, what if the model of math had mechanisms for
| recognizing paradoxes, or circular logical requirements,
| etc. I don't think either of those theorems would work
| anymore.
|
| I don't know what a model of recognizing paradoxes would
| look like. The 1st incompleteness theorem is based on only
| a few simple axioms and reasonable rules of deduction.
| There isn't much wiggle room to make a mathematical model
| that isn't bound by it, but is useful.
| srcreigh wrote:
| > The halting problem means that you cannot make a
| program (a static piece of code) that can determine in a
| finite amount of time if another program halts operating
| on some input.
|
| Interesting. My first thought was, couldn't we just
| create programs that are designed to keep growing? Isn't
| that both completely possible and more powerful than the
| model of computing where programs are static?
|
| I realized it may not be entirely possible to build an
| actual machine which hosts endlessly growing code. There
| are probably laws of physics which put some type of limit
| on how big things we build can get.
| EamonnMR wrote:
| I like to think of the halting problem as similar to the
| inability to predict the future. In order to predict the
| future you'd need a model of the whole universe, including
| the model. Once you think of it like that you realize that
| you can't beat halting in the classical universe.
| jcranmer wrote:
| Godel's Incompleteness Theorem and the undecidability of
| the halting problem can in fact be proved using each other,
| so there is good reason to think that there is a deeper
| connection.
|
| What you suggest isn't actually possible, and to explain
| why, it's worth stepping back a little bit. Originally,
| there was a program to formalize mathematics that came to
| be known as naive set theory. This theory ran into a major
| roadblock when Russell proposed a paradox: does the set of
| all sets that do not contain themselves contain itself? The
| solution to this, as found in modern set theory, is to very
| carefully construct sets in such a way that no set can
| contain itself in the first place, and so the very question
| isn't expressible in the logic of modern set theory.
|
| The underlying point of both Godel and Turing is, as you
| say, constructing a similar statement akin to "this
| statement is false." But more specifically, what they
| actually did was to show that, once you meet some
| relatively low barriers, it is _impossible_ to make a
| system that omits that construct. In effect, what they do
| is encode (respectively) proofs and programs into integers,
| and then, using this encoding, show that you can always
| create a self-reference that blows up. Yes, if you change
| the encoding, the self-reference requires a potentially
| different number, but the point is that so long as there is
| a bijection to the natural numbers, the problematic
| self-reference must exist.
|
| And, as you may be able to reflect upon, everything we as
| humans can write or speak _can_ be reduced to a finite-
| length string of a finite-symbol alphabet, so everything we
| create must be bijectable with the natural numbers if
| infinite.
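|
| A toy illustration of such an encoding (this is just
| bijective base-k numeration, not Godel's actual numbering
| scheme):
|
|     ALPHABET = "abc"   # any finite alphabet works
|
|     def encode(s):
|         # "" -> 0, "a" -> 1, "b" -> 2, "c" -> 3, "aa" -> 4, ...
|         n = 0
|         for ch in s:
|             n = n * len(ALPHABET) + ALPHABET.index(ch) + 1
|         return n
|
|     def decode(n):
|         # inverse of encode, so the map is a bijection
|         s = ""
|         while n > 0:
|             n, r = divmod(n - 1, len(ALPHABET))
|             s = ALPHABET[r] + s
|         return s
|
| Swap in a richer alphabet (every symbol a proof or a program
| can use) and every finite text gets its own natural number.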
| kevinventullo wrote:
| I would say that most hallmark results in theory-heavy
| mathematics (number theory, algebraic geometry, topology,
| ...) could stretch to at least a book length treatment if you
| wanted a mostly self-contained proof.
|
| It sounds like the main difference with this theorem is that
| the writeup of the original proof was so complicated, terse,
| and error-prone, that very few experts even understood it.
| karatinversion wrote:
| I don't think the incompleteness theorems are nearly as wordy
| - this translation of Godel's original paper is under 40
| pages after subtracting the introduction (not part of the
| original).
| jcranmer wrote:
| The classification of finite simple groups [1] is apparently
| ~11-12 books long (when it's finished). Yes, several books
| for a single theorem.
|
| [1] https://en.wikipedia.org/wiki/Classification_of_finite_si
| mpl...
| Denvercoder9 wrote:
| I don't wanna know how many books the classification of
| finite complex groups is gonna take.
| lou1306 wrote:
| Godel's theorem is actually not _that_ complex. Sure, you may
| think of "Godel, Escher, Bach", but that book is stuffed
| with sooo much material that is only tangentially related to
| the theorem.
|
| Of course the _consequences_ of that theorem are legion, and
| they do warrant many books worth of discussion. But the proof
| itself can fit in a long-ish blog post, I think.
|
| Fermat's last theorem, on the other hand...
| pdpi wrote:
| GEB is a book about intelligence/consciousness. The
| incompleteness theorem is incidental -- it's only there to
| illustrate the notion of self-referentiality, and I'm not
| sure it does a particularly good job at it.
| CamperBob2 wrote:
| The most recent case similar to this one is Mochizuki's work
| on the ABC conjecture, which has still not reached the point
| where it's accepted by the majority of the community due to
| its complexity. There are lots of parallels between Mochizuki
| and Freedman. They both had to break a lot of new ground, and
| they both had trouble communicating their results to enough
| people to become part of the official canon of mathematics
| that can actually be taught to others.
|
| Freedman at least convinced a few key people, but AFAIK
| Mochizuki's work has never been understood by anyone not
| named Mochizuki. It may be complete BS, it may be
| earthshakingly brilliant, but either way it will evidently
| take someone's entire career to prove it.
| iainmerrick wrote:
| I believe the latest news on Mochizuki's work is that Peter
| Scholze and Jakob Stix identified a fatal flaw, and
| Mochizuki has not convincingly rebutted it. See e.g. the
| first item here:
| https://www.math.columbia.edu/~woit/wordpress/?p=12429
|
| Both the claimed proof and the disproof have now been
| published in journals, and neither has been retracted!
| That's a rather unusual situation to say the least. But I
| think people in the field generally agree that the proof is
| flawed and unfixable.
| [deleted]
| ntlang wrote:
| Experts have been skeptical for quite some time. You can
| see discussion on the exact point Scholze (PS) brings up
| in the following blog post (the post and many comments
| are by prominent number theorists)
|
| https://www.galoisrepresentations.com/2017/12/17/the-abc-
| con...
|
| One quote from the post I find worth repeating:
|
| > This post is not about making epistemological claims
| about the truth or otherwise of Mochizuki's arguments. To
| take an extreme example, if Mochizuki had carved his
| argument on slate in Linear A and then dropped it into
| the Mariana Trench, then there would be little doubt that
| asking about the veracity of the argument would be beside
| the point. The reality, however, is that this description
| is not so far from the truth.
| dls2016 wrote:
| My buddy had "beat a Fields medalist at a foot race" on his CV
| for a while after completing the Pier to Peak half marathon
| faster than (a ~60 year old) Prof Freedman.
|
| I'd run to Goleta beach playground with my kids and would see
| Freedman wander down from Station Q and bust out a few chin ups
| on the monkey bars.
|
| I hope I'm in that kind of shape!
| korbonits wrote:
| What about this guy's book?
|
| https://math.uchicago.edu/~dannyc/books/4dpoincare/4dpoincar...
| kurthr wrote:
| "I probably didn't treat the exposition of the written material
| as carefully as I should have," said Freedman.
|
| That's a pretty sad (if self aware) statement for someone who
| gets paid $1M/yr to find arrangements which might make quantum
| decoherence effects lower for computation ... or not. The editors
certainly won't make that much, but at least there's an afterword
| by Freedman.
| hxksleo wrote:
| I'd be interested in whether proofs like these will be formalized
| in proof assistants that can be checked with computer code, so
| that it removes doubt of error. It's something in math I'd like
| to see become more prevalent.
| thaumasiotes wrote:
| > I'd be interested in whether proofs like these will be
| formalized in proof assistants that can be checked with
| computer code, so that it removes doubt of error.
|
| This does not remove doubt of error.
| openasocket wrote:
| It's possible, but difficult. Proofs written by humans tend to
| not include a bunch of relevant details and assumptions. They
| consist of lines like "We have some property/object/logical
| statement X, then by theorem Y we have Z", but often don't
| state how exactly they are using theorem Y to get result Z.
| There's often some additional algebraic or logical
| manipulation, or some simplification of terms, or implicitly
| using some other theorem or lemma that seems obvious to the
| writer.
|
| Getting a computer to automatically find proofs of statements
| is difficult (impossible in general), and I wouldn't be
| surprised if converting a standard human-written proof into a
| formal proof system is just as hard.
| zozbot234 wrote:
| It can be done; this is exactly what homotopy type theory is
| for.
| mahogany wrote:
| Something at this level is still far away, but it should be
| possible in theory. Last time I checked, there was still a lot
| of work to be done with formalizing geometric objects in Lean
| (the only proof assistant I have experience with). We are more
| in the stage of formalizing proofs from undergraduate books,
| whereas the proof of the 4-dimensional Poincare conjecture is
| multiple orders of magnitude more difficult.
|
| From my understanding and limited experience, formalizing
| complex objects is quite difficult. A long proof where the
| objects are integers or algebraic relations might be easier
| than even _defining_ a manifold. For example, with Lean, it was
| a lot of work to even define a perfectoid space -- which, to be
| fair, is a complicated object -- much less say anything
| interesting about it.
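|
| For a toy sense of the gap (my example, not from the
| article): a fact about the natural numbers is a one-liner in
| Lean,
|
|     example : 2 + 2 = 4 := rfl
|
| while a statement like "this Casson handle is homeomorphic to
| the standard open handle" cannot even be _written down_ until
| topological spaces, manifolds, handles and homeomorphisms
| have all been formalized first.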
|
| If someone has more experience with other proof assistants, I'd
| be interested to hear about how far away a proof like
| Freedman's would be. My understanding is that each one (coq,
| agda, lean, etc) has certain drawbacks or benefits and can
| describe some concepts more easily than others, but "high-
| level" proofs are few and far between for all. For example, Coq
| has a verification of the Feit-Thompson theorem, which is really
| cool, but I don't think there are many complex objects in
| there. As far as I know, it's groups, linear algebra, generator
| and relation computations -- all of which are pretty well
| handled by computers. On the other hand, to even begin a
| verification of Fermat's Last Theorem, you would have to first
| build up the entire scaffolding of modern (well, second half of
| the 20th century at least) algebraic number theory, quite a
| feat in itself.
| roywiggins wrote:
| The problem is, once you have a proof that everyone agrees is
| correct, what's the incentive to painstakingly translate it
| into something a computer also agrees is correct?
|
| Yes, you might turn up some "bugs" in the proof, but proof bugs
| can almost always be ironed out. It's a very rare proof that is
| accepted and then falls apart later. If proofs were as "buggy"
| as computer programs -- falling apart all the time -- then there'd
| be a much bigger appetite for proof checkers. But as it is,
| mathematical peer review seems to work awfully well.
| mahogany wrote:
| I agree that it's rare that a proof is accepted and falls
| apart later, but it does happen occasionally.
|
| Somewhat famously, Voevodsky turned his attention to
| mathematical foundations, including proof assistants, at
| least in part because he was spooked that a part in one of
| his big papers was later shown to be incorrect.
| (Interestingly, someone much less known pointed this out in a
| preprint but it was, from my memory, largely ignored for some
| time -- who's going to believe a nobody versus a Fields
| Medalist?) There's a talk that Voevodsky gave at (I think)
| IAS, where he mentions this.
|
| ---
|
| Edit: okay, here are the slides: https://www.math.ias.edu/~vl
| adimir/Site3/Univalent_Foundatio...
|
| The whole talk is interesting, but slides 8-13 talk about the
| topic at hand. In particular:
|
| > This story got me scared. Starting from 1993 multiple
| groups of mathematicians studied the "Cohomological Theory"
| paper at seminars and used it in their work and none of them
| noticed the mistake.
|
| > And it clearly was not an accident. A technical argument by
| a trusted author, which is hard to check and looks similar to
| arguments known to be correct, is hardly ever checked in
| detail.
|
| ...
|
| > It soon became clear that the only real long-term solution
| to the problems that I encountered is to start using
| computers in the verification of mathematical reasoning.
| zozbot234 wrote:
| There's plenty of incentive to do that actually, since a
| translation of a proof that had not been formalized before
| yields a much clearer proof than anything a human could write
| on her own. This is doubly true when some "bugs" are found
| and "ironed out" in the process - even seemingly trivial bugs
| can trip up the reader and obscure interesting sr structure.
| In fact, a truly novel formalization is pretty much
| publishable work in and of itself. The problem is not
| incentives; it's more of a lack of willingness to work in an
| area that's very fragmented still and makes it way too hard
| to reuse existing results.
| spekcular wrote:
| This has been a long time coming. There is an infamous
| MathOverflow thread about the proof (also linked in the article),
| with the following comment summarizing the state of affairs ~10
| years ago:
|
| > There is no other evidence. In fact there is absolutely no
| evidence what so ever. I have never met a mathematician who could
| convince me that he or she understood Freedman`s proof. I
| attempted to read that monstrosity of a paper a number of times
| by myself and quite a few times with a group of other
| mathematicians. We never were able to finish checking all of the
| details. Such seminars always ended before we could make it
| through even half of his paper. No other expositions on the
| subject seem to be any better. It is truely an odd state of
| affairs that after all of these years no one has managed to write
| a clear exposition of this so called proof, and that no one seems
| to question the claim that there ever was a proof. I remember
| thinking as a young mathematician either this "proof" is sheer
| nonsense or someone will eventually write out a clear and
| detailed explanation. As of April of 2011 I have understood that
| the so called proof is full of errors and they can not be fixed.
| I mentioned this to several mathematicians during the summer of
| 2011 and I believe these conversations are directly linked to the
| dialogue seen here on math overflow.
|
| Also on that thread, Brendan Guilfoyle raised the issue of
| writing a complete book-length treatment of the proof. I found
| the following response an interesting commentary on incentives in
| academia.
|
| > Promotion committee: "What have you done in the last 5 years?"
|
| > B.G.: "I've been writing a book about a paper published in
| 1982."
|
| > Promotion committee: "How is that groundbreaking overachieving
| hyped research?"
|
| > B.G.: "It is not, but I am preserving the knowledge of the
| elders."
|
| > Promotion committee: "Who told you that was a good idea?"
|
| > B.G.: "A guy on the internet, and 24 people liked his
| suggestion."
|
| > Promotion committee: "We will call you, don't call us. Good
| bye."
| Koshkin wrote:
| > _what so ever_
|
| I've always thought it was connected.
| auntienomen wrote:
| That comment on incentives is funny, but I'm not sure it
| captures the situation here: The mathematicians who wrote The
| Disc Embedding Theorem have been quite successful. All are on
| the tenure ladder (or tenured already) and publishing novel
| research in a field they revitalized.
|
| I can see how sinking time into this is risky, but that's
| generally true of choosing a field of research. I think the
| authors deserve credit for doing it well. Likewise, Peter
| Teichner should get a little credit for helping to create the
| space for them to do so.
| spekcular wrote:
| I agree, actually. I think a book published by a major
| academic press is a plus for any tenure/promotion committee.
| Further, this is definitely not just "expository" work; as
| the article notes, they had a bunch of gaps/errors to fix. I
| expect the importance of those contributions would be
| reflected in strong recommendation letters for promotion.
| OGWhales wrote:
| > (also linked in the article)
|
| Would never have seen that without your telling me. It was well
| hidden until you moused over it.
| agf wrote:
| Thread in question:
| https://mathoverflow.net/questions/87674/independent-evidenc...
| cschmidt wrote:
| I do find it amazing that mathematicians seem to have almost
| infinite time to work on problems, and so much freedom to choose
| what they're working on. How great that a big group of people can
| collectively decide to spend years creating a book like this.
| Koshkin wrote:
| What is amazing to me is that most people think they do not
| have such freedom.
| maxFlow wrote:
| Makes you wonder what's the most impactful thing you could build
| given the possibility of seven years of isolation and single-
| minded focus.
|
| Would be interesting to know at which point Freedman decided
| to commit full time to his research -- based on what, an
| intuition? And how did he decide to stop at 7 years and not 4, 9
| or 10?
|
| This is advanced stuff for me but I'll still be getting a copy of
| _The Disc Embedding Theorem_ to support this kind of niche
| research. Findings like these could be stepping stones towards
| bigger discoveries in the future.
| martincmartin wrote:
| So who gets credit for proving it? Freedman, the original author
| from 1981; or the authors of this book?
|
| If Freedman's proof has so many gaps, should it be considered
| more of a proof outline, or a motivation for believing a
| conjecture, and the current book be considered the actual proof?
|
| Or does Freedman get the glory? In that case, there's not much
| incentive for people to do what these book authors did. Which I
| suppose is why it took 30 years, and is only happening now
| because they "were captivated by the beauty of Freedman's proof
| and wanted to give it new life."
| auntienomen wrote:
| It's not a strict exclusive-or choice. Credit will accrue to
| both Freedman and the authors of this book, because both
| advanced the state of the subject. And eventually perhaps
| further credit to someone who publishes a machine-verifiable
| proof, or finds a simpler way to understand the theorem.
| rPlayer6554 wrote:
| According to a sister comment, the paper contains a meta-proof
| that shows that many of the gaps do not matter in showing the
| equivalence.[0] I don't know if this is true or not; I'm just
| connecting the discussions.
|
| [0] https://news.ycombinator.com/item?id=28473319
| pfortuny wrote:
| Well, thank God lots of scientists (mathematicians in this
| case) understand that transmission of knowledge is as important
| as novelty (or more so). And that is their incentive.
| ChrisKnott wrote:
| This distinction has been called an "open _exposition_
| problem ", versus an "open research problem".
|
| Once Freedman's proof was accepted, it was no longer an open
| research problem; on the publication of this book, it will
| (hopefully) no longer be an open exposition problem.
|
| The phrase was coined by Timothy Chow in his paper A
| Beginner's Guide To Forcing [0];
|
| > _" All mathematicians are familiar with the concept of an
| open research problem. I propose the less familiar concept of
| an open exposition problem. Solving an open exposition
| problem means explaining a mathematical subject in a way that
| renders it totally perspicuous. Every step should be
| motivated and clear; ideally, students should feel that they
| could have arrived at the results themselves."_
|
| [0] https://arxiv.org/abs/0712.1320
| eigenket wrote:
| Freedman gets the credit for the bits of the proof that are
| his; the authors of this book get the credit for the novel
| stuff they add.
| phaedrus wrote:
| The part I find most interesting is that (as I understand the
| article), Freedman's proof relies on a meta-proof that the details
| not worked out in the base proof could be ignored:
|
| "But there were places where he couldn't quite complete the
| picture -- as if he were creating a portrait and there were some
| aspects of his subject's face he couldn't see. His last move,
| then, was to prove that those gaps in his picture -- the places
| he couldn't see -- didn't matter from the standpoint of the type
| of equivalence he was after. That is, the gaps in the picture
| could not possibly prevent the Casson handle from being
| homeomorphic to the disc, no matter what they contained."
| dodobirdlord wrote:
| A common approach to proving a broad claim when there's no
| straightforward proof is to figure out all of the entities that
| have the potential to produce a counterexample, break them up
| into a finite list of categories, and go through category by
| category proving, often with very different approaches
| depending on the category, that each category cannot contain a
| counterexample.
|
| A major theorem proved in this style was the classification of
| finite simple groups. Not only was it done in this style, it
| frequently serves as a major component in other proofs of this
| style, since theorems about other kinds of groups can sometimes
| be re-expressed as claims about finite simple groups, and then
| checked category by category.
|
| https://en.m.wikipedia.org/wiki/Classification_of_finite_sim...
|
| EDIT: This is also part of why sometimes the first proof of a
| theorem will be very long, but someone will later use the
| insight of the first proof to produce a much shorter proof. The
| first proof will basically be a proof that the general claim
| breaks down into a (possibly very long) list of separately
| provable lemmas, followed by a proof of each of the lemmas.
| These may not overlap very much, leading to proofs that are
| very long and take a lot of background to understand. Once the
| result is known to be true, it may be worth the effort for the
| original author or someone else to refactor the "proved it
| can't be disproved" proof into a much shorter direct proof.
| danabo wrote:
| The book in question is called "The Disc Embedding Theorem"
| (https://global.oup.com/academic/product/the-disc-embedding-t...)
| lostmsu wrote:
| Math is the most formal science there is. Why don't serious math
| journals require programmatically checked proofs for all
| publications, and instead rely on what essentially is a code
| review for a very large change request with extremely convoluted
| code to be vetted bug free by only a handful of experts?
|
| P.S. Reviews are still needed to check novelty etc.
| andyjohnson0 wrote:
| (Meta but I don't think the parent comment deserved to be
| downvoted. Maybe it's arguably wrong but it prompted replies
| that increased my understanding of the practice of maths.)
| lostmsu wrote:
| Thanks for this remark.
|
| It is amusing to see this behavior, considering the comment
| is a _question_, to which the responses seem to favor the
| answer "no".
|
| The only part of the comment that is actually a statement is
| the claim that peer review in mathematics is as bad as, or
| even worse than, reviewing large code changes in terms of
| assessing correctness. And nobody argues against that.
| nerdponx wrote:
| Is it possible to even check such a proof programmatically? Do
| we have that kind of technology? Do you also have to review the
| proof checking code to make sure you didn't make any errors in
| transcription from the original paper?
| lostmsu wrote:
| > Do you also have to review the proof checking code to make
| sure you didn't make any errors in transcription from the
| original paper?
|
| The point of the checked proof is that your final theorem is
| almost definitely correct (you did not make mistakes in your
| deductions, unless the proof checker has a serious bug) if
| you can convince the computer it derives from your axioms.
|
| You can convince humans without your proof being correct,
| though, because they might not spot an error.
| broken_symlink wrote:
| There is a language called Lean which is popular among
| mathematicians. Here is an article:
| https://www.quantamagazine.org/building-the-mathematical-
| lib...
|
| A more recent article as well:
| https://www.quantamagazine.org/lean-computer-program-
| confirm...
| nerdponx wrote:
| My question was specifically referring to these huge 500-page
| proofs that only a handful of people have even attempted to
| understand. What is the upper limit of what our current
| state-of-the-art proof assistants can handle?
| klyrs wrote:
| > ...what essentially is a code review for a very large change
| request with extremely convoluted code to be vetted bug free by
| only a handful of experts?
|
| I kinda feel like this question answers itself. On the other
| hand, I can't imagine this quagmire lasting another century,
| but the machine-assisted proof ecosystem hasn't reached
| adolescence yet.
| jhgb wrote:
| > Math is the most formal science there is.
|
| That's assuming you subscribe to viewing math as a science.
|
| > Why don't serious math journals require programmatically
| checked proofs for all publications
|
| Presumably it has something to do with why formalized
| mathematics is rare: it's hard. Even relatively simple concepts
| become quite hard to deal with when fully formalized. I suspect
| that mathematics on this level is simply not amenable to the
| treatment that you envision (at least today).
| jacobolus wrote:
| > _Why don 't serious math journals require programmatically
| checked proofs_
|
| ... Because that would take 2+ orders of magnitude more work,
| and nobody has the time/attention budget for it.
|
| This is sort of like asking: why did you, a single person, write
| this 10-page short story instead of making a Hollywood-style
| feature film?
| lostmsu wrote:
| Would it though? Are proof checkers much more complicated
| than LaTeX?
|
| If mathematicians are in certain ways similar to developers,
| I'd expect them to hate fighting with the layout of their
| papers, and love fighting with a proof checker for the
| formulas.
| [deleted]
| jacobolus wrote:
| Maybe half (?) of the basic undergraduate math curriculum
| has been formally verified, after decades of effort by
| large numbers of researchers. The proof under discussion
| here is significantly more complicated than anything that
| has ever been formally verified, as far as I understand.
| Disclaimer: I don't know much about formal verification of
| mathematics, only what I have read from time to time on the
| internet.
| klyrs wrote:
| > Are proof checkers much more complicated than LaTeX?
|
| In terms of lines of code, no. In terms of the learning
| curve, yes -- LaTeX does a very good job of getting out of
| the way and letting a mathematician just write in English,
| whereas proof assistants require rigid structure that doesn't
| remotely resemble how (most) mathematicians think. In terms
| of runtime, oh my god, get out of town.
| lostmsu wrote:
| > In terms of runtime, oh my god, get out of town.
|
| I think you're mixing two different things here. Runtime
| is large for proof assistants, e.g. programs that can
| actually generate pieces of proof for you. Specifically
| the generation part. Verification of a complete proof,
| where all the steps are provided like you would in a
| paper, should not, AFAIU, take a long time.
| adgjlsfhk1 wrote:
| The problem is that when people write proofs, they
| don't go axiom by axiom. You might evaluate an integral
| in one step, or make an argument that some sequence is
| obviously asymptotically less than a function. Pretty
| much every step in a human proof requires a computer
| proof generation step to verify.
| lostmsu wrote:
| There might be a chicken-and-egg problem. If the knowledge
| of theoretical mathematics were organized like a framework
| with good documentation, what you mention would be solved
| by simply referring to the corresponding part of the
| framework.
|
| But to get that, more people would need to be involved in
| transcribing proofs into a formal language. Ideally,
| everyone. This is a higher standard, and somebody needs
| to ask for it.
| klyrs wrote:
| Fair point. I'm coming from the perspective that
| mathematicians don't like to fill in pesky details that
| experts in their respective fields can figure out in an
| hour or ten.
| qsort wrote:
| For the same reason Google does not formally prove the
| correctness of their code. It's technically feasible but
| impossibly complex to actually achieve.
|
| Fully formalizing a proof with a tool like Coq or Isabelle is
| in and of itself a nontrivial problem.
| lostmsu wrote:
| There are multiple ways in which Google Mail can still be
| usable while behaving incorrectly. But an incorrect theorem
| is not only useless, it is dangerous.
| turminal wrote:
| Because programmatically checked proofs don't work the way you
| think they do.
|
| And even programmatically checked proofs are still programs. And
| programs still need code review.
| lostmsu wrote:
| I understand that you might need a code review for the proof
| to look nice, but the point of a checked proof is that you
| don't need a separate code review for correctness. If it
| checks (sans internal checker issues), it is correct.
| CamperBob2 wrote:
| The slides that mahogany linked to [1] may give you some
| insight into this question. They're pretty interesting.
|
| [1]
| https://www.math.ias.edu/~vladimir/Site3/Univalent_Foundatio...
| eigenket wrote:
| Getting a proof, even one that is amenable to computer
| verification, into a form where you can actually verify it is
| a monumental task; it can easily take multiple experts
| multiple years.
|
| Adding to this problem, the intersection of mathematicians
| who understand computer-aided verification and those who
| understand a particular piece of research mathematics is
| often empty, and even if there are a handful of people who
| could do the necessary work, they're often busy doing more
| interesting stuff.
|
| Finally, I don't think a big proof that has been accepted by
| the community has ever been found to be false by programmatic
| methods. I think it's unlikely that this will happen for a long
| time, for the simple reason that to feed a proof into a
| computer requires someone to dissect the proof to its logical
| bare bones in the first place, and the person doing the
| dissection is likely to notice any errors before they get round
| to feeding the problem into the computer.
| ouid wrote:
| > I don't think a big proof that has been accepted by the
| community has ever been found to be false by programmatic
| methods.
|
| Right, because that's not how it works. Errors aren't
| discovered after the proof has been translated into machine
| language, because the translation is essentially the whole of
| the verification: as soon as the code is written and accepted
| by the checker, the proof is verified.
| Necessarily, then, any error discovery has to happen before
| that.
| roywiggins wrote:
| > Why don't serious math journals require programmatically
| checked proofs for all publications
|
| Because mathematics has more or less worked fine for decades
| without it, and most mathematicians aren't programmers and
| aren't interested in it. Proofs are for understanding, not for
| technical correctness (though obviously the latter is very
| important). A good proof convinces someone that it's true;
| convincing a _computer_ that it's true is probably
| tangentially useful, but not very interesting to most
| mathematicians. They mostly want to be expanding the frontiers
| of mathematical knowledge, not painstakingly encoding proofs
| that they already believe (and, crucially, that their _peers
| also already believe_ ) into a computer system.
|
| That's not to say proof checkers aren't interesting in their
| own right, but there's lots of really quite good reasons why
| mathematics goes without them almost all of the time. Formal
| methods bring a lot to computing, where our programs fall apart
| constantly, but mathematical proofs almost always _don't_ fall
| apart. And in computing, _almost all_ programs don't have
| formal proofs of correctness! If any area needs them, it's in
| computing, not mathematics. You might as well ask why every
| compiler doesn't come with a proof of correctness: because it's
| already hard enough to build a compiler, let alone the
| monstrosity that would be a formal proof.
|
| The added benefit to a computer-checked proof is usually small
| to nil, and it's a _ton_ of work. Like, way more work than just
| writing the proof.
| webmaven wrote:
| _> A good proof convinces someone that it's true; convincing
| a computer that it's true is probably tangentially useful,
| but not very interesting to most mathematicians. They mostly
| want to be expanding the frontiers of mathematical knowledge,
| not painstakingly encoding proofs that they already believe
| into a computer system._
|
| I guess the question is which of "convincing a human" or
| "convincing a computer" is considered the higher bar.
|
| In any case, uncovering "what you believe that isn't so" is
| pretty important.
| roywiggins wrote:
| Right, but the ordinary methods of doing mathematics seem
| to be so good that the vast effort required to encode
| proofs well enough for a computer to check vastly dwarfs
| the benefit, at least for now.
|
| There's obviously been a lot of work done on proof
| assistants and maybe it will become easy and natural to
| formulate most proofs in a way a computer can check, but
| it's not anywhere close to that yet. The book explicating
| the proof discussed in the article is already 500 pages
| long; translating that into something a computer could
| check just isn't at all feasible yet.
| lostmsu wrote:
| This sounds like a very dangerous attitude, considering that
| making any tiny mistake will cause you to be able to prove
| any statement. Of all people, mathematicians should understand
| that.
| roywiggins wrote:
| Empirically, proofs with bugs get ironed out into proofs
| with fewer bugs, and the results almost always end up
| standing. Why mathematical proof seems to be as good as it
| is at avoiding proving false statements is a matter of
| contention, but practically speaking it _works_. There's
| very little incentive to computer-check proofs because, as
| a practical matter, mathematics is not plagued with false
| "theorems."
|
| Computing, on the other hand, is plagued with broken
| programs, and formal methods are still a very specialized
| area, even though the potential benefits are much greater
| in computing than mathematics.
| lostmsu wrote:
| > mathematics is not plagued with false "theorems."
|
| But you don't know that.
|
| There certainly were precedents:
| https://mathoverflow.net/questions/291158/proofs-shown-
| to-be...
| roywiggins wrote:
| The answer given there by Manuel Eberl is more or less
| what I was trying to get at (and he is far more
| knowledgeable about it than I am). Lots of mathematicians
| find that the existing level of rigor in the field is
| enough.
|
| Obviously the people who like proof assistants disagree,
| and they may be right. It could be that modern theorems
| are just too complicated to be adequately evaluated by
| unassisted humans. But mathematicians have been getting
| along just fine without them for a long time, so "why
| aren't we requiring computer-checked proofs of
| everything" has a very straightforward answer:
| mathematicians at large aren't convinced of the benefit
| yet.
| Koshkin wrote:
| More often than not mathematicians look for proofs of things
| they (and others) are already convinced are true (or even
| already know for a fact). Automatic proof checking is just
| for that: making sure the proof is correct.
| HenryKissinger wrote:
| > Math is the most formal science there is.
|
| Can you prove that?
| robbedpeter wrote:
| Formality is technically the degree of mathiness applied to a
| domain, since formal logic and any other framework of
| formalization decomposes to mathematics.
|
| So... it's a tautology. Math is the most mathy math. Biology
| or psychology are less mathy maths.
|
| https://en.wikipedia.org/wiki/G%C3%B6del%27s_incompleteness_.
| ..
|
| Having provability holes in the formal system doesn't imply
| the existence of other systems that could be decomposed from
| currently known or unknown symbols. In this universe,
| information at the level of string theory seems to describe
| the limits, so math, as she is wrote, is as formal as it
| gets.
|
| The map is not the territory, so math, as a map, will always
| be less "formal" than reality itself. It's pretty firmly in
| second place, though.
| ttctciyf wrote:
| But if you only have maps and no territory, can you still
| be said to be doing cartography? :)
| doublerabbit wrote:
| I really wish I could get math to stick. I just finished my
| National 5 maths (rough equivalent of a US High School Diploma)
| night-class today and all I ever seem to understand is how, but
| not why. I'm the one asking "why is that?" And today was recapping
| on trinomials and simplifying fractions. Simple, I expect, to
| anyone with a mathematical mind, but to me, it's just an insane
| implosion which leaves me exhausted. I have notes, but even
| still.
|
| What is the secret behind it? How do people get so good with
| Maths? Is it truly study, study & study? I hadn't even heard of
| the term "coefficient" until today, let alone something
| like this.
| curtisf wrote:
| High school math does not look much like research math.
|
| High school math is highly concerned with teaching 'algorithms'
| that compute answers, like long multiplication, long division,
| the quadratic formula, completing the square, synthetic division,
| u-sub integration, ...
|
| Most mathematicians don't spend their time calculating things. Rather,
| they're more interested in _generally_ characterizing how
| objects behave (and _proving_ that that characterization is
| actually correct)
|
| For example, the definition of "continuous" you learn in pre-
| calc is probably something like: if you compute a function's
| limit, you get the same thing as computing the function.
|
| The topologist's definition is that preimages of open sets
| under the function are open.
|
| These are basically unrecognizable as being the same thing, but
| in fact they are. And the second doesn't actually reference
| calculation or even anything identifiable as numbers!
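|
| Written out side by side (standard textbook phrasings, not a
| quote from anywhere above), in LaTeX notation:
|
|     % pre-calc version, at every point a of the domain:
|     \lim_{x \to a} f(x) = f(a)
|
|     % topologist's version, for f : X \to Y between
|     % topological spaces:
|     f \text{ is continuous} \iff
|       \forall U \subseteq Y \text{ open},\ f^{-1}(U)
|       \text{ is open in } X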
|
| I believe this is a significant cause of not understanding
| "why" despite understanding "how". As an example, I remember in
| pre-calculus we spent at least an entire unit on partial-
| fraction-decomposition. Partial fraction decomposition is a
| strange algebraic manipulation that is basically only important
| to integrating rational functions -- something you don't do
| until at least a year later, and which is basically pointless
| to do by hand because computers are better at it than
| humans.
|
| They also require very different kinds of intuition. Building
| intuition is hard, but important for people studying math.
| Working through many different examples and problems and
| applications can help you build intuition.
|
| But I don't really have any advice for how to make math stick
| better; I just want to share that the mathematics you learn
| in high school is not representative of the entire field.
|
| If you ever want to get a taste of "higher math" without having
| lots of prereqs, point-set topology and group theory are two
| (very different!) fields that are also fairly accessible.
| Pyramus wrote:
| Very good example re continuity!
|
| I fully agree that (high) school math is mostly calculus and
| very different from university/research math. I would
| recommend the parent start exploring what math actually is,
| which fields there are and where they come from, what some
| famous open problems are and what beauty means to
| mathematicians.
|
| Good popular maths books are e.g. The Music of the Primes by
| Marcus du Sautoy [1] or Fermat's Last Theorem by Simon Singh
| [2].
|
| It's also important to note that there are no shortcuts - it
| takes at least 1-2 years of studying maths to start to
| understand the why.
|
| [1] https://en.wikipedia.org/wiki/The_Music_of_the_Primes
|
| [2]
| https://en.wikipedia.org/wiki/Fermat%27s_Last_Theorem_(book)
| gjm11 wrote:
| Just for fun, an example of an application of partial
| fraction decomposition that _doesn't_ involve taking
| integrals.
|
| Consider the Fibonacci numbers 0, 1, 1, 2, 3, 5, 8, ... (with
| the convention that 0 and 1 are the 0th and 1st). Here's one
| way to get an explicit formula for them.
|
| First, we package them up into what's called a "generating
| function": 0 x^0 + 1 x^1 + 1 x^2 + 2 x^3 + 3 x^4 + 5 x^5 +
| ... or, more concisely, the sum of F_n x^n over all
| nonnegative integers n. Call this thing F(x).
|
| What happens if we add up F(x) and x F(x)? Well, x F(x) is
| just the same sum but with all the numbers shifted over by 1.
| It's the sum of x F_n x^n, or the sum of F_n x^(n+1), or the
| sum of F_(n-1) x^n, with the proviso that there isn't an x^0
| term. So when we add this to the sum of F_n x^n, we get the
| sum of [F_(n-1) + F_n] x^n, with again the proviso that the
| x^0 term is just F_0 x^0 = 0.
|
| But, aha!, F_(n-1)+F_n is just F_(n+1), at least when n >= 1.
| So this thing equals the sum of F_(n+1) x^n, _except_ that
| once again we need to be careful about n=0; more precisely
| what we have is the sum of F_(n+1) x^n, minus 1. (Because F_1
| is 1, but the actual x^0 term is 0.)
|
| And the sum of F_(n+1) x^n equals the sum of F_n x^(n-1), or
| F(x)/x.
|
| In other words, we have F(x) + x F(x) = F(x)/x - 1, or
| equivalently (1 - x - x^2) F(x) = x, or equivalently F(x) = x
| / (1 - x - x^2).
|
| This is already pretty neat -- who would have thought that
| that big sum, involving the Fibonacci numbers which are
| defined in terms of a sort of recursive-adding process, would
| come out to something so simple?
|
| Now comes the coup de grace, which is the bit that uses
| partial fractions. We can write that denominator as (1-ax)
| (1-bx) for some numbers a,b. We'll figure out what a,b
| actually are in a moment -- that's just solving a quadratic
| equation. For now, looking at the coefficients of x and x^2
| we note that a+b=1 and ab=-1. And now we can do the partial-
| fractions thing. We know that we'll have x/(1-x-x^2) =
| p/(1-ax)+q/(1-bx) for some p,q. That means that x =
| p(1-bx)+q(1-ax), so p+q=0 and pb+aq=-1, hence p=1/(a-b). That is,
| F(x) = 1/(a-b) [1/(1-ax) - 1/(1-bx)].
|
| Why bother doing this? Because the fractions we _now_ have
| are easy to expand in powers of x. We have 1/(1-ax) = 1 + ax
| + a^2x^2 + ... and 1/(1-bx) = 1 + bx + b^2x^2 + ... So F_n,
| the coefficient of x^n in F(x), is just 1/(a-b) [a^n - b^n].
|
| Finally, we ought to work out what a,b actually are. It turns
| out that a = [1+sqrt(5)]/2 and b = [1-sqrt(5)]/2; that is, a
| is the famous "golden ratio" sometimes called phi, and b =
| 1-a = -1/a. So a-b is just sqrt(5).
|
| So: the nth Fibonacci number is exactly (phi^n - (-1/phi)^n)
| / sqrt(5). Isn't that nice?
|
| (Since |b|<1, the term b^n/sqrt(5) is always smaller than 1/2
| in absolute value, which means that F_n is also always the
| _closest integer_ to phi^n/sqrt(5).)
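|
| (A quick numerical sanity check in Python, in case you don't
| feel like re-deriving it -- double precision is fine for
| small n; use exact arithmetic for large n:
|
|     from math import sqrt
|
|     phi = (1 + sqrt(5)) / 2
|
|     def fib(n):              # F_0 = 0, F_1 = 1, F_2 = 1, ...
|         a, b = 0, 1
|         for _ in range(n):
|             a, b = b, a + b
|         return a
|
|     for n in range(30):
|         assert fib(n) == round(phi ** n / sqrt(5))
|
| which should run without tripping the assert.)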
|
| For the avoidance of doubt, none of this has anything at all
| to do with the disc embedding theorem. I just thought it was
| worth clarifying that partial fractions aren't _just_ a one-
| trick pony used only for integrating rational functions;
| there are other situations in which representing something in
| terms of partial fractions gives useful insights into its
| behaviour.
| leephillips wrote:
| Very nice. This is similar to Donald Knuth's treatment in
§1.2.8 of vol. I of _The Art of Computer Programming_
| (Fundamental Algorithms).
| mathgenius wrote:
| It's not just about memorizing stuff... I think you need to find
| a different teacher, if you really are keen to keep going.
___________________________________________________________________
(page generated 2021-09-09 23:00 UTC)