[HN Gopher] Fermat's Last Theorem - how it's going
___________________________________________________________________
Fermat's Last Theorem - how it's going
Author : verbify
Score : 227 points
Date : 2024-12-12 14:30 UTC (8 hours ago)
(HTM) web link (xenaproject.wordpress.com)
(TXT) w3m dump (xenaproject.wordpress.com)
| TechPlasma wrote:
| 4
| dboreham wrote:
| 42 surely?
| olddustytrail wrote:
| I can only count to 4.
| munchler wrote:
| > Lean did that irritating thing which it sometimes does: it
| complained about the human presentation of an argument in the
| standard literature, and on closer inspection it turned out that
| the human argument left something to be desired.
|
| Tongue-in-cheek irritation aside, this is actually awesome. I
| think Lean (and other theorem provers) are going to end up being
| important tools in math going forward.
| FredPret wrote:
| Compilers have this very same habit!
| munchler wrote:
| Yes. In fact, Lean is a compiler and type checkers are
| theorem provers (by the Curry-Howard correspondence). Proofs
| are programs!
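The Curry–Howard correspondence mentioned above can be seen directly in Lean, where a proof term really is a program (a small illustrative sketch; the theorem names are made up):

```lean
-- Under Curry–Howard, a proof of `A → B → A` is just a function
-- that takes two arguments and returns the first (the K combinator).
theorem k_combinator (A B : Prop) : A → B → A :=
  fun a _ => a

-- Modus ponens is function application: applying a proof of `A → B`
-- to a proof of `A` yields a proof of `B`.
theorem modus_ponens (A B : Prop) (h : A → B) (a : A) : B := h a
```

Type-checking these terms is exactly what "the type checker is a theorem prover" means.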
| dboreham wrote:
| And Mathematics is Computer Science. Took me most of a
| lifetime to realize this.
| pkoird wrote:
| I'd argue for the converse: Computer Science is
| Mathematics
| practal wrote:
| Let's just say there is a really big overlap between the
| two. And each one can learn from the point of view of the
| other.
| gedpeck wrote:
| There are statements provably true about the natural
| numbers that can't be proven in first order PA. Are such
| statements part of computer science? If so, how?
| jerf wrote:
| That question contains so many false or unnecessary
| assumptions that it would take far longer to unpack them
| than it took you to type them, so I will limit myself to
| the observation that we do not even _remotely_ confine
| ourselves to first order anything in computer science,
| nor should we.
| gedpeck wrote:
| What false assumption did I make? I just pointed out a
| fact and asked a question. How many false assumptions
| could I have made with just one statement of fact and two
| questions? If you don't have a valid answer to the
| question then don't respond.
|
| I'm a mathematician and not a computer scientist. The
| first order PA axioms are recursively enumerable. Hence
| it's clearly something of interest to computer
| scientists. The second order PA axioms aren't so...are
| they part of computer science? What do computer
| scientists think about proofs in second order PA? There
| are no computable models of ZFC, so wouldn't it be the
| case that, while computer scientists deal with ZFC,
| ZFC isn't part of computer science? What is your
| definition of computer science? Physicists deal with a

| vast amount of mathematics but math isn't physics. In the
| same way mathematics isn't computer science.
|
| Overall I think most mathematicians would not consider
| mathematics as part of computer science.
| immibis wrote:
| Why should computer science be limited to things that are
| expressible in first-order PA?
| Vt71fcAqt7 wrote:
| >it would take far longer to unpack them than it took you
| to type them
|
| Why is this a barometer for whether to answer a question
| or unpack assumptions?
| pennomi wrote:
| I'm happy to see so much work going into fixing human errors in
| mathematics in a rigorous way.
| seanwilson wrote:
| > This story really highlights, to me, the poor job which humans
| do of documenting modern mathematics. There appear to be so many
| things which are "known to the experts" but not correctly
| documented. The experts are in agreement that the important ideas
| are robust enough to withstand knocks like this, but the details
| of what is actually going on might not actually be where you
| expect them to be. For me, this is just one of many reasons why
| humans might want to consider getting mathematics written down
| properly, i.e. in a formal system, where the chances of error are
| orders of magnitude smaller. However most mathematicians are not
| formalists, and for those people I need to justify my work in a
| different way. For those mathematicians, I argue that teaching
| machines our arguments is a crucial step towards getting machines
| to do it themselves. Until then, we seemed to be doomed to fix up
| human errors manually.
|
| It makes me think about how you get UI/UX/web designers who make
| (informal, imprecise) mockups and prototypes of their design
| ideas and interaction flows, they then hand these off to a
| developer to do the coding (formalise it, explain it precisely to
| the machine), and on the way the developer will inevitably find
| problems/holes in the designs (like an interaction scenario or
| code path not considered in the design, which could uncover a
| major design flaw), which the developer and/or the designer will
| have to patch somehow.
|
| As in, design and development are different roles and require
| different mindsets, and most designers have a very strong
| resistance against working and thinking like a developer.
| coliveira wrote:
| > consider getting mathematics written down properly, i.e. in a
| formal system
|
| This was already tried, and failed (Hilbert). In the aftermath
| of the failure we learned that mathematics cannot be completely
| formalized. So this points to a fundamental problem with using
| AI to do math.
| hansvm wrote:
| TFA isn't about AI; it's about taking the same arguments
| mathematicians are already writing down semi-formally (with a
| hand-wavy claim that they could be made completely formal)
| and instead/also writing them down formally.
|
| In cases where the Hilbert machine applies, the mathematician
| writing down the semi-formal argument already has to state
| the new axioms, justify them, and reason about how they
| apply, and the proposal would just take that "reason about
| how they apply" step and write it in a computer-verifiable
| way.
| ur-whale wrote:
| > This was already tried, and failed (Hilbert)
|
| Who likely underestimated how hard it would be.
|
| And who didn't have a 2024 computer on hand.
| jmpetroske wrote:
| The reason Hilbert's program failed is that it's mathematically
| impossible (Gödel's incompleteness theorems).
| ur-whale wrote:
| > because it's mathematically impossible
|
| I guess the Lean devs can all pack up and go home then.
| Also: how could they possibly miss Gödel's theorem? How
| careless of them.
| jmpetroske wrote:
| Look, you're taking my response out of context. I'm just
| stating that the reason Hilbert failed is not that he
| didn't have a computer. Lean devs aren't trying to do
| what Hilbert was.
| lambdaone wrote:
| While it's true that mathematics cannot ever be _completely_
| formalized, per Gödel's incompleteness theorems, a huge
| amount of it obviously can, as is demonstrated by the fact
| that we can write it down in a reasonably rigorous way in a
| combination of conventional mathematical notation and plain
| English.
|
| Nor is this in any way "a fundamental problem with using AI
| to do math".
| tialaramex wrote:
| The other end of the Church-Turing intuition is that if we
| couldn't possibly do it then the machines can't do it
| either. What they're doing is the same: it's not only not
| magically _less powerful_, it's also not magically
| _more powerful_.
|
| You seem to be imagining that Gödel is just a small problem
| you can carve off and ignore, but it applies to everything
| powerful enough to do _arithmetic_.
|
| Of course we can have the AI muddle along with intuition,
| but that's not actually an improvement, our mathematicians
| can already use intuition and they've got a lot more
| experience.
| youoy wrote:
| It's a fundamental problem with using AI to do "intuitive"
| math, but not a fundamental problem with AI to do formal
| math.
|
| As you have stated, a lot of mathematics can be formalized.
| For me, the problem for AI with math is going to be the
| generative part. Validating a proof or trying to come up
| with a proof for a given statement may be within reach.
| Coming up with meaningful/interesting statements to prove
| is another story entirely.
| lisper wrote:
| There are two problems with this.
|
| First, writing something down in English is very different
| from formalizing it. Natural language interacts with human
| brains in all kinds of complicated ways that we do not
| fully understand, so just because we can make a compelling-
| sounding argument in English doesn't mean that argument
| doesn't have holes somewhere.
|
| Second, the incompleteness theorems apply only to given
| formal systems, not to formality in general. Given a
| system, you can produce a Gödel sentence _for that system_.
| But any Gödel sentence for a given system has a proof in
| some other, more powerful system. This is trivially true
| because you can always add the Gödel sentence for a system
| as an axiom.
|
| This is a very common misconception about math even among
| mathematicians. Math produces only conditional truths, not
| absolute ones. All formal reasoning has to start with a set
| of axioms and deduction rules. Some sets of axioms and
| rules turn out to be more useful and interesting than
| others, but none of them are True in a Platonic sense, not
| even the Peano axioms. Natural numbers just happen to be a
| good model of certain physical phenomena (and less-good
| models of other physical phenomena). Irrational numbers and
| complex numbers and quaternions etc. etc. turn out to be
| good models of other physical phenomena, and other
| mathematical constructs turn out not to be good models of
| anything physical but rather just exhibit interesting and
| useful behavior in their own right (elliptic curves come to
| mind). None of these things are True. At best they are
| Useful or Interesting. But all of it is formalizable.
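The "add the Gödel sentence as an axiom" move above can be sketched in Lean. This is only a schematic illustration with made-up names: `G` is a stand-in for the Gödel sentence of some base system, not a real arithmetization of Gödel's construction.

```lean
-- Hypothetical stand-in for the Gödel sentence of a base system,
-- which that system can state but not prove.
axiom G : Prop

-- The extension step: adopt G itself as a new axiom.
axiom G_as_axiom : G

-- In the extended (strictly stronger) system, G is now trivially provable.
theorem G_provable_in_extension : G := G_as_axiom
```

The point is just that incompleteness is relative to a fixed system; nothing stops a formalization tool from working in a stronger one.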
| zozbot234 wrote:
| Hilbert's program assumed math to be completely decidable,
| i.e. that for every possible statement, a decision procedure
| could tell whether it was true or false. We know now that no
| such procedure can possibly exist. But merely writing down
| all _known_ math in a formal system is completely doable. (Of
| course, formalization systems can be used to state any
| additional axioms that may be required of any given argument
| or development, so the original concerns about incompleteness
| are just not very relevant.)
| bee_rider wrote:
| This hurts my engineer-brain. Just a reminder that the gap
| between us and mathematicians is about the same as the gap
| between us and everybody else (edit: in terms of math skills),
| just in the other direction, haha.
|
| Oh well, hopefully when they get the machines to solve math,
| they'll still want them to run a couple percent faster every
| year.
| Etheryte wrote:
| This is an unhealthy way to milk your ego. "Gap between us and
| everybody else"? Just because your job makes more money than
| some other people doesn't make you smarter or better in any
| way.
| analog31 wrote:
| While true, there are some clues such as the correlation
| between someone's occupation and their IQ.
| bee_rider wrote:
| I just meant in the context of math skills, but I definitely
| acknowledge that it was poor phrasing on my part.
| dogboat wrote:
| On the other hand, get them to try and debug code. Then
| you might realize the ridiculous amount of unwritten lore
| that software engineering has. OTOH software engineering
| has courses, but you need to work with people to get good,
| not just read books. Like math.
| TechDebtDevin wrote:
| I don't even understand how you interpreted that to have
| anything to do with being "smarter or making more money". Get
| some more coffee dude.
| dogboat wrote:
| Maybe less coffee :)
| ur-whale wrote:
| > about the same as the gap between us and everybody else
|
| My bet: there is a very long list of skills for which the gap
| between you and a person randomly picked out the whole
| population is very large, and not in the direction you seem to
| think.
| bee_rider wrote:
| That's true. It was poor phrasing on my part. But I do
| realize I'm really bad at a ton of things. I just meant in
| the math context.
| boothby wrote:
| Just a gentle reminder from somebody with a foot in each camp:
| computers were invented by mathematicians to accelerate
| computation. The field of Computer Science is a child of the
| field of Mathematics. Computer Engineering is its grandchild.
|
| And yes, the desire for formalism has been stronger in the
| descendants than the ancestors for some time, but we're seeing
| that gap close in real time and it's truly exciting.
|
| And let's keep pushing the frontier of computation! We aren't
| just making better ad-delivery devices.
| enugu wrote:
| Don't think you should be intimidated just by reading the
| article itself. It _is_ using several domain specific terms.
| But you would encounter that in many other contexts - for
| instance a group discussing an intricate board game you are
| seeing for the first time.
|
| However, unlike board games, where the concepts can be
| explained to you in a few minutes (usually it becomes clear
| only when you play), a lot of mathematics, especially
| algebraic geometry and number theory, can have many layers of
| dependencies which take years to absorb.
|
| It would be interesting to compare it to understanding a large
| open source project like the Linux kernel, well enough to
| contribute. I would say it is not as conceptually deep as the
| mathematics of the article (while still having a few great
| ideas). But understanding the source would require
| familiarization with 'tedious details', which, incidentally,
| is what this article is also about.
|
| So the issue, stated this way, is not so much raw talent as
| time and effort. This leads to the topic of motivation -
| finding something great in an idea can lead to investment in
| the subject. For those more talented, the journey might be
| easier.
|
| Alan Kay's maxim is crucial: a change of perspective is worth
| 80 IQ points. A long sequence of technical steps can be
| compressed into a single good intuitive definition/idea, the
| way the complexity of navigating a path becomes clear from a
| higher viewpoint.
|
| Grothendieck, considered by many to be the best mathematician
| of the past century, made the point that there were several
| colleagues who were more proficient at symbolic manipulation.
| But, he was able to look at things with a more foundational
| perspective and could make great contributions.
|
| Here's a good essay by Thurston, "On Proof and Progress in
| Mathematics": https://arxiv.org/pdf/math/9404236
|
| He discusses this problem of being intimidated by jargon.
| lincolnq wrote:
| If you're interested in this stuff at all, check out some code.
|
| Example:
| https://github.com/ImperialCollegeLondon/FLT/blob/main/FLT/M...
|
| Also check out the blueprint, which describes the overall
| structure of the code:
|
| https://imperialcollegelondon.github.io/FLT/blueprint/
|
| I'm very much an outside observer, but it is super interesting to
| see what Lean code looks like and how people contribute to it.
| Great thing is that there's no need for unit tests (or, in
| some sense, the final proof statement _is_ the unit test) :P
| staunton wrote:
| Most (larger) Lean projects still have "unit tests". Those
| might be, e.g., trivial examples and counter examples to some
| definition, to make sure it isn't vacuous.
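A tiny Lean illustration of the kind of "unit test" described here: `example` blocks that check a definition on concrete instances (the definition and names are invented for illustration):

```lean
-- A deliberately simple definition...
def isEven (n : Nat) : Bool := n % 2 == 0

-- ...plus `example`s acting as unit tests, confirming the definition
-- is neither vacuous nor inverted. Both check by definitional reduction.
example : isEven 4 = true  := rfl
example : isEven 3 = false := rfl
```

If `isEven` were defined wrongly (say, with `!=`), these examples would fail to type-check, which is exactly the "test failure" signal.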
| schoen wrote:
| That's such a nice way to think about unit tests! (In this
| context and others.)
| ur-whale wrote:
| This article makes me very happy.
|
| One of the many reasons I love math is the feeling I've always
| had that with it, we're building skyscraper-sized mental
| structures that are built on provably indestructible foundations.
|
| With the advent of "modern" proofs, which involve large teams
| of mathematicians who produce proofs that are hundreds of
| pages long and only understandable by a very, very tiny sliver
| of humanity (with the extreme case of Shinichi Mochizuki [1],
| where N=1), I honestly felt that math had lost its way.
|
| Examples: the four color theorem, Fermat's last theorem, the
| classification of finite simple groups... all of them with
| gigantic proofs, out of reach of most casual observers.
|
| And lo and behold, someone found shaky stuff in a long proof,
| what a surprise.
|
| But it looks like some mathematicians feel the same way I do
| and have decided to do something about it by relying on
| computers. Way to go, guys!
|
| [1] https://en.wikipedia.org/wiki/Shinichi_Mochizuki
| Retr0id wrote:
| This makes me wonder if there'll ever be a significant bug
| discovered in Lean itself, breaking past formalisation work.
| pkoird wrote:
| Ahh, so maybe we should first focus on formalizing Lean itself?
| moomin wrote:
| For the problem to affect proofs, it would have to be in the
| type checker, and the type system isn't really that complex.
| The good news is that every single user of Lean checks this
| works every day. Finding a flaw that literally everyone relies
| upon and doesn't notice is pretty implausible.
| munchler wrote:
| Lean has a small "kernel" that has been independently checked
| multiple times, and the rest of Lean depends on the kernel. A
| soundness bug is still possible, but pretty unlikely at this
| point.
|
| https://lean-lang.org/lean4/doc/faq.html
| vouaobrasil wrote:
| > The experts are in agreement that the important ideas are
| robust enough to withstand knocks like this, but the details of
| what is actually going on might not actually be where you expect
| them to be.
|
| Past researcher in pure math here. The big problem is that
| mathematicians are notorious for not providing self-contained
| proofs of anything because there is no incentive to do so and
| authors sometimes even seem proud to "skip the details". What
| actually ends up happening is that if you want a rigorous proof
| that can be followed theoretically by every logical step, you
| actually need an expert to fill in a bunch of gaps that simply
| can't easily be found in the literature. It's only when such a
| person writes a book explaining everything that it might be
| possible, and sometimes not even then.
|
| The truth is, a lot of modern math is on shaky ground when it
| comes to what has actually been written down.
| uffjedn wrote:
| Studied math a long time ago, and one of my profs was proud
| of not going into the details. He said, "Once you've done
| something 100 times, you can go and say 'as easily observable'
| and move on."
| brobdingnagians wrote:
| I loved that some of the proof solutions in the back of
| mathematical logic book said, "Observe that ..." as the start
| of the proof. Our little study group definitely did not see
| how what followed was an observation we would have made.
| moomin wrote:
| Baker's little book of number theory made me work to get
| through every page...
| inglor_cz wrote:
| I was an algebra major at the turn of the century and I hated
| that attitude.
|
| For the prof, yeah, easily observable. What about the
| students who try to absorb that particular article? You
| already have to balance the main topic in your brain, and you
| get these extra distractions on top of it.
| hyperthesis wrote:
| There's a story that when someone wrote up a famous
| mathematician's work (Euler?), he found many errors, some
| quite serious. But all the theorems were true anyway.
|
| Sounds like Tao's third stage, of informed intuition.
| pfdietz wrote:
| Back in Euler's time there was a lot of informality. The rigor
| of the second half of the 19th century was still some time in
| the future.
| impendia wrote:
| Speaking as a current researcher in pure math -- you're right,
| but I don't think this is easily resolved.
|
| Math research papers are written for other specialists in the
| field. Sometimes too few details are provided; indeed I
| commonly gripe about this when asked to do peer review; but to
| truly provide _all_ the details would make papers far, far
| longer.
|
| Here is an elementary example, which could be worked out by
| anyone with a good background in high school math. Prove the
| following statement: there exist constants C, X > 0 such that
| for every real number x > X, we have
|
| log(x^2 + 1) + sqrt(x) + x/exp(sqrt(4x + 3)) < Cx.
|
| i.e. the left side is big-O of the right.
|
| This flavor of statement comes up all the time in analytic
| number theory (my subject), would be regarded as obvious to any
| specialist, and in a paper would be invariably stated without
| proof.
|
| To come up with a complete, rigorous proof would be _long_ ,
| and not interesting. No professional would want to read one.
|
| I agree this attitude comes with some tradeoffs, but they do
| seem manageable.
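Not a proof, but a quick numerical sanity check of the example inequality above, with illustrative constants C = 1, X = 10 chosen here for convenience (any specialist's proof would of course argue asymptotically, not by sampling):

```python
import math

# Left-hand side of the example inequality from the comment above.
def lhs(x: float) -> float:
    return math.log(x * x + 1) + math.sqrt(x) + x / math.exp(math.sqrt(4 * x + 3))

# Spot-check lhs(x) < C*x for C = 1 at several points x > X = 10.
# (Samples stay below ~1e5 so math.exp doesn't overflow a float.)
C = 1.0
samples = [11, 100, 1e4, 1e5]
assert all(lhs(x) < C * x for x in samples)
```

Each term is individually o(x): the log term grows logarithmically, sqrt(x)/x vanishes, and the last term decays like exp(-2*sqrt(x)), which is why no professional would bother writing the epsilon-delta details out.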
| vouaobrasil wrote:
| > but to truly provide all the details would make papers far,
| far longer.
|
| I think what we need in math is that eventually, a collection
| of frequently cited and closely related papers should be
| rewritten into a much longer book format. Of course some
| people do that, but they hardly get much credit for it. It
| should be much more incentivized, and a core part of grant
| proposals.
| ykonstant wrote:
| The example you gave, however, is obvious to every graduate
| student in any field that touches analysis or asymptotics.
| That is not the real problem; the real problem is proof by
| assertion of proof: "Lemma 4.12 is derived by standard
| techniques as in [3]; so with that lemma in hand, the theorem
| follows by applying the arguments of Doctorberg [5] to the
| standard tower of Ermegerds."
|
| Too many papers follow this pattern, especially for the more
| tedious parts. The two problems are that it makes the lives
| of students and postdocs torture, and that the experts tend
| to agree without sufficient scrutiny of the details (and the
| two problems are intrinsically connected: the surviving
| students are "trained" to accept those kinds of leaps and
| proliferate the practice).
|
| Frustratingly, I am often told that this pattern is
| necessary because otherwise papers will be enormous---like I
| am some kind of blithering idiot who doesn't know how easily
| papers can explode in length. Of course we cannot let papers
| explode in length, but there are ways to tame the length of
| papers without resorting to nonsense like the above. For
| example, the above snippet, abstracted from an actual paper,
| can be converted to a proof _verifiable by postdocs_ in two
| short paragraphs with _some effort_ (that I went through).
|
| The real motive behind those objections is that authors would
| need to take significantly more time to write a proper paper,
| and even worse, we would need _actual editors_ (gasp!) from
| journals to perform non-trivial work.
| impendia wrote:
| I strongly agree with you, in the sense that many papers
| provide far fewer details than they should, and reading
| them is considered something of a hazing ritual for
| students and postdocs. (I understand that this is more
| common in specialties other than my own.)
|
| The blog post seems to be asserting a rather extreme point
| of view, in that even the example I gave is (arguably!)
| unacceptable to present without any proof. That's what I'm
| providing a counterpoint to.
| ykonstant wrote:
| > (I understand that this is more common in specialties
| other than my own.)
|
| True, analytic number theory does have a much better
| standard of proof, if we disregard some legacy left from
| Bourgain's early work.
| zellyn wrote:
| I think Kevin Buzzard et al are aiming for a future where
| big, complicated proofs not accompanied by code are
| considered suspicious.
|
| I wonder if being able to drill all the way down on the
| proof will alleviate much of the torture you mention.
| joe_the_user wrote:
| _The two problems are that it makes the lives of students
| and postdocs torture, and that the experts tend to agree
| without sufficient scrutiny of the details (and the two
| problems are intrinsically connected: the surviving
| students are "trained" to accept those kinds of leaps and
| proliferate the practice)._
|
| I think this practice happens in many specialized fields.
| In math, the main problem is that the publications become
| inaccessible. But when you have the same sort of thing in a
| field where the formulations rest on assumptions that aren't
| formalizable but merely "plausible" (philosophy or economics,
| say), you have these assumptions introduced into the
| discussion invisibly.
| adrian_b wrote:
| Using vague terms like "obvious" or "standard techniques"
| is doubtless wrong, but I would not see any problem in a
| paper basing its conclusions on demonstrations from a
| source listed in the bibliography, except that in many
| cases those who read the paper are unable to obtain access
| to the works from the bibliography.
|
| Even worse is when the bibliography contains abbreviated
| references, from which it is impossible to determine the
| title of the quoted book or research paper or the name of
| the publication in which it was included, and googling does
| not find any plausible match for those abbreviations.
|
| In such cases it becomes impossible to verify the content
| of the paper that is read.
| zozbot234 wrote:
| > there exist constants C, X > 0 such that for every real
| number x > X, we have
|
| >> log(x^2 + 1) + sqrt(x) + x/exp(sqrt(4x + 3)) < Cx.
|
| These problems are only "uninteresting" to the extent that
| they can be "proven" by automated computation. So the
| interesting part of the problem is to write some completely
| formalized equivalent to a CAS (computer algebra system -
| think Mathematica or Maple, although these are _not_ at all
| free from errors or bugs!) that might be able to dispatch
| these questions easily. Formalization systems can already
| simplify expressions in rings or fields (i.e. do routine
| school-level algebra), and answering some of these questions
| about limits or asymptotics is roughly comparable.
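The claim that formalization systems already do routine school-level algebra can be seen with Mathlib's `ring` tactic (a minimal sketch, assuming a Lean project with Mathlib available):

```lean
import Mathlib.Tactic.Ring

-- Routine identities in commutative rings are discharged automatically,
-- with a fully checked formal proof produced behind the scenes.
example (a b : Int) : (a + b)^2 = a^2 + 2*a*b + b^2 := by ring
```

Extending this style of automation from ring identities to asymptotic estimates is the open engineering problem being described.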
| impendia wrote:
| The problems are uninteresting in the sense that the
| audience for these papers doesn't find them interesting.
|
| In particular, I'm not making any sort of logical claim --
| rather, I know many of the people who read and write these
| papers, and I have a pretty good idea of their taste in
| mathematical writing.
| zozbot234 wrote:
| Well, you did claim that the problem could be worked out
| with simply high school math. It would certainly be
| 'interesting' if such problems could _not_ be answered by
| automated means.
|
| (For example, I think many mathematicians would find the
| IMO problems interesting, even though the statements are
| specifically chosen to be understandable to high-school
| students.) The problem of _how_ to write an automated
| procedure that might answer problems similar to the one
| you stated, and what kinds of "hints" might then be
| needed to make the procedure work for any given instance
| of the problem, is also interesting for similar reasons.
| brap wrote:
| >Here is an elementary example, which could be worked out by
| anyone with a good background in high school math
|
| WTF
|
| >i.e. the left-side is big-O of the right.
|
| Oh.
| dogboat wrote:
| I didn't learn big O at high school though.
| rstuart4133 wrote:
| > I don't think this is easily resolved.
|
| It is technically resolvable. Just insist on Lean (or some
| other formal verifier) proofs for everything.
|
| It looks to me like math is heading that way, but a lot of
| mathematicians will have to die before it's the accepted
| practice.
|
| Mathematicians who are already doing formal proofs are
| discovering they have the same properties as shared computer
| code. Those properties have led to most of the code executed
| on the planet being open source, despite the fact you can't
| make money directly from it.
|
| Code is easy to share, easy to collaborate on, and in the
| case of formal proofs, easy to trust. It is tedious to write,
| but collaboration is so easy that publishing your half-done
| work will often prompt others to do some of the tedious
| stuff. Code isn't self-documenting unless it's very well
| written, but even horrible code is far better at documenting
| what it does than what you are describing.
| boothby wrote:
| This reminds me of a fun experience I had in grad school. I was
| working on writing some fast code to compute something I can no
| longer explain, to help my advisor in his computational approach
| to the Birch and Swinnerton-Dyer conjecture. I gave a talk at a
| number theory seminar a few towns over, and was asked if I was
| doing this in hopes of reinforcing the evidence behind the
| conjecture. I said with a grin, "well, no, I'd much rather find a
| counterexample." The crowd went _wild_; I've never made a group
| of experts so angry as I did that day.
|
| Well, I never was much of a number theorist. I never did come to
| understand the basic definitions behind the BSD conjecture.
| Number theory is so old, so deep, that writing a PhD on the topic
| is the step one takes to become a novice. Where I say that I
| didn't understand the definitions, I certainly knew them and
| understood the notation. But there's a depth of intuition that I
| never arrived at. So the uproar of experts, angry that I had the
| audacity to hope for a counterexample, left me more curious than
| shaken: what do they see, that they cannot yet put words to?
|
| I am delighted by these advances in formalism. It makes the field
| feel infinitely more approachable, as I was a programmer long
| before I called myself a mathematician, and programming is still
| my "native tongue." To the engineers despairing at this story,
| take it from me: this article shows that our anxiety at the
| perceived lack of formalism is justified, but we must remember
| that anxiety is a feeling -- and the proper response to that
| feeling is curiosity, not avoidance.
| bell-cot wrote:
| > The crowd went wild; I've never made a group of experts so
| angry as...
|
| Also not a number theorist...but I'd bet those so-called
| experts had invested far, far too many of their man-years in
| that unproven conjecture. All of which effort and edifice would
| collapse into the dumpster if some snot-nosed little upstart
| like you, using crude _computation_, achieved overnight fame
| by finding a counter-example.
|
| (If I could give my many-decades-ago younger self some advice
| for math grad school, one bit of that would be: For any non-
| trivial "Prove X" assignment, start by spending at least 1/4 of
| my time budget looking for counter-examples. For academic
| assignments, that's 99% likely to fail. But the insight you'll
| get into the problem by trying will be more worth it. And the
| other 1% of the time you'll look like a genius. And - as soon
| as you attempt _real_ math research, those odds shift
| enormously, in favor of the counterexample-first approach.)
| isotypic wrote:
| > All of which effort and edifice would collapse into the
| dumpster
|
| Except it wouldn't, because the work towards the BSD would
| still be right and applicable to other problems. If someone
| proved the Riemann hypothesis false, all of our math (and
| there is a lot of it) surrounding the problem isn't
| immediately made worthless. The same is true for any
| mathematical conjecture.
|
| I don't doubt the rest of your comment might have played a
| role, however.
| tux3 wrote:
| Then again it might just be a misunderstanding. And the so-
| called experts, in aggregate, turn out to be right more often
| than not.
|
| Say I show up to a physics conference and proclaim that I
| hope my computational effort will disprove some major
| physical law. Well, you better have a good delivery, or the
| joke might not land well!
|
| Sometimes people take things a little too literally after
| attending hours of very dry talks at serious seminars. I
| wouldn't read too much into it.
| mostly_a_lurker wrote:
| > Also not a number theorist...but I'd bet those so-called
| experts had invested far, far too many of their man-years in
| that unproven conjecture. All of which effort and edifice
| would collapse into the dumpster if some snot-nosed little
| upstart like you, using crude computation, achieved overnight
| fame by finding a counter-example.
|
| Are you maybe confusing math academia for psychology or
| social sciences? There is no replication crisis in math, no
| house of cards of self-proclaimed experts riding on bullshit.
| Mathematicians are _actually experts_ at a deep and extremely
| rigorous technical field -- many of them are even experts at
| computational approaches to problems! -- and when outsiders
| and upstarts resolve old conjectures, mathematicians
| generally react by celebrating them and showering them with
| fame, job offers and gushing articles in Quanta.
| baruz wrote:
| > no house of cards
|
| As I understand TFA, from a formalist's perspective, this
| is not necessarily the case. People were building on
| swathes of mathematics that seem proven and make intuitive
| sense, but needed formal buttressing.
|
| > _actually experts_ at a deep and rigorous technical field
|
| Seeing as the person you're addressing was a mathematics
| graduate student, I'm sure they know this.
| bell-cot wrote:
| Yep. Here's an easy-looking one, that lasted just under 2
| centuries (quoting Wikipedia) -
|
| > In number theory, Euler's conjecture is a disproved
| conjecture related to Fermat's Last Theorem. It was
| proposed by Leonhard Euler in 1769. It states that for
| all integers n and k greater than 1, if the sum of n many
| kth powers of positive integers is itself a kth power,
| then n is greater than or equal to k...
|
| > ...
|
| > Euler's conjecture was disproven by L. J. Lander and T.
| R. Parkin in 1966 when, through a direct computer search
| on a CDC 6600, they found a counterexample for k = 5.[3]
| This was published in a paper comprising just two
| sentences.[3]
|
| > [3] - Lander, L. J.; Parkin, T. R. (1966).
| "Counterexample to Euler's conjecture on sums of like
| powers". Bull. Amer. Math. Soc. ...
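|
| The Lander-Parkin counterexample itself is small enough to check
| directly; a quick sketch in Python (mine, not from the paper):

```python
# Lander & Parkin (1966): a sum of four 5th powers that is itself
# a 5th power, disproving Euler's conjecture for k = 5 (here n = 4 < k).
terms = [27, 84, 110, 133]
total = sum(t ** 5 for t in terms)

assert total == 144 ** 5  # 27^5 + 84^5 + 110^5 + 133^5 = 144^5
print(total)  # → 61917364224
```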
| mostly_a_lurker wrote:
| > Seeing as the person you're addressing was a
| mathematics graduate student, I'm sure they know this.
|
| The OP (u/boothby) was not the person I was addressing
| (u/bell-cot).
| baruz wrote:
| Does this not imply that /u/bell-cot had been a graduate
| student in mathematics?
|
| > If I could give my many-decades-ago younger self some
| advice for math grad school
| brendanyounger wrote:
| To put a little color on the BSD conjecture, it states that
| the rank (0, 1, 2, 3, etc.) of the group of rational points on
| an elliptic curve equals the order of vanishing of the curve's
| L-function at s = 1, with the leading Taylor coefficient there
| given by a product of arithmetic invariants, in particular the
| size of the Tate-Shafarevich group.
|
| No one knows how to compute the size of that group in general
| (in fact no one has proved that it's finite!). Computing the
| rank of a curve via non-analytic means is more akin to a
| bespoke proof than a straightforward computation (see Noam
| Elkies' work).
|
| So saying you're going to disprove BSD with blind computation
| is rather naive unless you're sitting on several career-
| defining proofs and not sharing them.
| williamstein wrote:
| If the BSD rank conjecture were false, then the simplest
| counterexample might be an elliptic curve with algebraic
| rank 4 and analytic rank 2. This could be established for a
| specific curve by rigorously numerically computing the
| second derivative of the L-series at 1 to some number of
| digits and getting something nonzero (which is possible
| because elliptic curves are modular - see work of
| Dokchitser). This is a straightforward thing to do
| computations about and there are large tables of rank 4
| curves. This is also exactly the problem I suggested to the
| OP in grad school. :-)
|
| In number theory doing these sorts of "obvious
| computational investigations" is well worth doing and led
| to many of the papers I have written. I remember doing one
| in grad school and being shocked when we found a really
| interesting example in minutes, which led to a paper.
| boothby wrote:
| > All of which effort and edifice would collapse into the
| dumpster if some snot-nosed little upstart like you, using
| crude computation, achieved overnight fame by finding a
| counter-example.
|
| Not _at all_. In fact, if I had found a counterexample, it
| would cause a _flurry_ of new research to quantify exactly
| how wrong the BSD conjecture is. Such a finding would
| actually be a boon to their career! That's why my response
| is curiosity, and not to sneer at them for protecting their
| extremely secure tenured careers.
|
| Edit 1: And if you think you've found a counterexample to a
| long-standing conjecture with a computation, you'd better be
| _damned sure_ that your computation is correct before opening
| your mouth in public. And that takes a _ton_ of work in the
| case of the BSD conjecture, because you've almost certainly
| identified a bug in the extremely complex code underlying
| that computation. If I ever thought I was holding onto such a
| counterexample, I'd approach a human calculator like Ralph
| Greenberg as my first step (after internal checks: re-running
| the code on another computer to rule out cosmic bit flips,
| and perhaps running more naive, unoptimized implementations).
|
| Edit 2: This attitude pervades my software development
| career, and I've brought it to my foray into superconducting
| circuit design: a bug report brings _joy_ to my life, and I
| aim to shower the reporter with praise (which may involve
| chocolate). There is nothing more satisfying than being
| proven wrong, because it helps us collectively move toward
| greater truths.
| setopt wrote:
| > my foray into superconducting circuit design
|
| Curious, what do you work on? (I also research
| superconductivity.)
| graycat wrote:
| This thread seems to be about good writing for math.
|
| Okay, for some decades, I've read, written, taught, applied, and
| published, in total, quite a lot of math. Got a Ph.D. in applied
| math.
|
| Yes, there are problems in writing math, that is, some math is
| poorly written.
|
| But, some math is quite nicely written. (1) Of course, at least
| define every symbol before using it. (2) It helps to motivate
| some math before presenting it. (3) Sometimes intuitive
| statements can help.
|
| For more, carefully reading some well-written math can help
| learning how to write math well:
|
| Paul R. Halmos, Finite-Dimensional Vector Spaces, Second
| Edition, D. Van Nostrand Company, Inc., Princeton, New Jersey,
| 1958.
|
| R. Creighton Buck, Advanced Calculus, McGraw-Hill, New York,
| 1956.
|
| Tom M. Apostol, Mathematical Analysis: Second Edition, ISBN
| 0-201-00288-4, Addison-Wesley, Reading, Massachusetts, 1974.
|
| H. L. Royden, Real Analysis: Second Edition, Macmillan, New
| York, 1971.
|
| Walter Rudin, Real and Complex Analysis, ISBN 07-054232-5,
| McGraw-Hill, New York, 1966.
|
| Leo Breiman, Probability, ISBN 0-89871-296-3, SIAM,
| Philadelphia, 1992.
|
| Jacques Neveu, Mathematical Foundations of the Calculus of
| Probability, Holden-Day, San Francisco, 1965.
| jhanschoo wrote:
| This isn't just about good writing for math; the post author
| was formalizing FLT as it is developed in the literature, and
| along the way they discovered that a lemma underpinning a whole
| subfield is untrue as stated. They nevertheless have confidence
| that the subfield is largely salvageable, on the grounds that
| if it were fundamentally broken, someone would already have
| found negative results.
|
| But now they had to find a suitable replacement to underpin the
| field.
| ykonstant wrote:
| I am extremely disappointed at the replies from (some) experts.
| As a mathematician who has been worrying about the state of the
| literature for some time, I expected trouble like this---and
| expect considerably more, especially from the number theory
| literature between the 60s and the 90s. I also wonder how well
| 4-manifold theory is faring.
|
| Much worse, this nonchalant attitude is being taught to PhD
| students and postdocs both explicitly and implicitly: if you are
| worried too much, maybe you are not smart enough to understand
| the arguments/your mathematical sense is not good enough to
| perceive the essence of the work. If you explain too much, your
| readers will think you think they are dumb; erase this page from
| your paper (actual referee feedback).
|
| Also, like Loeffler in the comments, I don't trust the "people
| have been using crystalline cohomology forever without trouble"
| argument. The basics are correct, yes, as far as I can tell
| (because I verified them myself, bearing in mind of course that I
| am very fallible).
|
| But precisely because of that, large swathes of the theory _will_
| be correct. Errors _will_ be rare and circumstantial, and that is
| part of the problem! It makes them very easy to creep into a line
| of work and go unnoticed for a long time, especially if the
| expert community of the area is tiny---as is the case in most
| sub-areas of research math.
| cobbal wrote:
| One of the best things about proof assistants is that they're
| not convinced by how "obvious" you think something is. It's
| much harder for a human to resist proof by social forces such
| as intimidation, confidence, and "it's well known".
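|
| A toy illustration of the point, in Lean 4 (my example, not from
| the thread): the checker accepts only what it can verify, no
| matter how confidently a claim is asserted.

```lean
-- "Obvious" is not an argument: Lean wants a proof term it can check.
example : 2 + 2 = 4 := rfl      -- accepted: both sides compute to 4

-- Uncommenting the next line fails to compile; no appeal to
-- authority, confidence, or "it's well known" will get it accepted.
-- example : 2 + 2 = 5 := rfl
```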
| zyklu5 wrote:
| Re: that dig at 4-manifolds
|
| Are you aware of the book The Disc Embedding Theorem
| (https://academic.oup.com/book/43693), based on 12 lectures
| Freedman gave roughly a decade ago?
| modeless wrote:
| > it was absolutely clear to both me and Antoine that the proofs
| of the main results were of course going to be fixable, even if
| an intermediate lemma was false, because crystalline cohomology
| has been used so much since the 1970s that if there were a
| problem with it, it would have come to light a long time ago.
|
| I've always wondered if this intuition was really true. Would it
| really be so impossible for a whole branch of mathematics to be
| developed based on a flawed proof and turn out to be simply
| false?
| staunton wrote:
| This has happened before; see, e.g., the biography of Vladimir
| Voevodsky. Spoiler: the world kept spinning.
| moomin wrote:
| Russell's paradox did the same thing. They had to go back to
| the drawing board.
| moomin wrote:
| I remember when I was a student a friend of mine telling me this
| guy was giving a seminar and he'd just completed day one and
| everyone was really excited he was going to prove FLT. Of course,
| the guy in question was Andrew Wiles. He then spent months
| patching up problems they found prior to publication and finally
| the whole thing got published. It was a hugely exciting thing
| when you were studying mathematics.
|
| All of which is a long way of saying the line "the old-fashioned
| 1990s proof" makes me feel _really_ old.
| rateofclimb wrote:
| As a CS undergrad at Berkeley in the 90's I took an upper
| division math class in which we worked through the "old school"
| proof which was brand new and exciting then. Pretty much
| everyone else in the class was a math grad student. I don't
| think I understood more than 20% of the material! :)
| williamstein wrote:
| I took that class with you! It was amazing. Here are my
| notes: https://wstein.org/books/ribet-stein/
| dogboat wrote:
| There was a great documentary TV shown about this story.
| baruz wrote:
| This is the talk by Dr de Frutos-Fernandez that Dr Buzzard
| mentions at the end: https://m.youtube.com/watch?v=QCRLwC5JQw0
| tunesmith wrote:
| Haha, that author is a funny writer. Very weird experience for
| that to be so readable, when I didn't understand probably half
| the content.
|
| By the way, I found an excellent word for when a proof is found
| to be faulty, one esoteric enough that it has less risk of being
| misinterpreted to mean the conclusion is proven false:
| 'vitiated'. The conclusion might still be true; it just needs a
| new or repaired proof. The initial proof is 'vitiated'. I like
| how the word sounds, too.
| euroderf wrote:
| Perhaps more delightful to the ears to hear that a proof has
| been disemboweled.
| MrMcCall wrote:
| One of my favorite Horizon episodes is the FLT one that features
| Prof. Andrew Wiles' development of his proof (I have watched it
| many times). Of course, it is grounded in Fermat's margin note
| about his having a wonderful proof that couldn't fit in said
| margin. At the end of the documentary, the various mathematicians
| in it note that AW's proof certainly wasn't what PdF had in mind
| because AW's proof is thoroughly modern.
|
| So I have wondered about PdF's possible thinking.
|
| Now, the degenerate case of n=2 is just the Pythagorean Theorem
| c^2 = a^2 + b^2
|
| and we now know from AW's proof that the cubic and beyond fail.
|
| Now, the PT is successful because the square b^2 can be
| "squished" over the corner of a^2, leaving a perfect square c^2.
| [Let's let the a^n part be the bigger of the two.]
| 5^2 = 4^2 + 3^2 25 = 16 + 9 25 = 16 + (4 +
| 4 + 1)
|
| Each of the 4's go on the sides, and the 1 goes on the corner,
| leaving a pure 5x5 square left over.
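|
| The "squish" picture is just the identity (a+k)^2 = a^2 + 2ak +
| k^2: the two side strips plus the corner form a gnomon of area
| 2ak + k^2, and a Pythagorean triple appears exactly when that
| gnomon is a perfect square b^2. A minimal Python sketch of the
| idea (function name is mine):

```python
import math

def gnomon_triples(limit):
    """Find (a, b, c) with a^2 + b^2 = c^2 by growing a square of
    side a to side c = a + k and checking whether the added gnomon
    (two a-by-k strips plus a k-by-k corner) is itself a square."""
    triples = []
    for a in range(1, limit):
        for k in range(1, limit):
            gnomon = 2 * a * k + k * k   # side strips + corner
            b = math.isqrt(gnomon)
            if b * b == gnomon:
                triples.append((a, b, a + k))
    return triples

# The 3-4-5 triple from the comment: 25 = 16 + (4 + 4 + 1)
print((4, 3, 5) in gnomon_triples(10))  # → True
```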
|
| Now, for cubes, we now know that a cube cannot be "squished" onto
| another cube's corner in such a way that makes a bigger cube.
|
| I'm not up for breaking out my (diminishing) algebra right now
| (as it's a brain off day), but that b^3 cube would need to break
| down into three flat sides, three linear edges, and the corner.
|
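| The decomposition does work out algebraically: (a+k)^3 - a^3 =
| 3a^2k (three flat slabs) + 3ak^2 (three edge bars) + k^3 (the
| corner), and Fermat's cubic case says this shell is never itself
| a cube. A brute-force sketch (mine, and of course not a proof)
| finds no small counterexamples:

```python
def shell(a, k):
    # Volume added when a cube of side a grows to side a + k:
    # three face slabs + three edge bars + one corner cube.
    return 3 * a * a * k + 3 * a * k * k + k ** 3

# Sanity check: the identity really is (a+k)^3 - a^3.
assert shell(4, 1) == 5 ** 3 - 4 ** 3

def cubic_counterexamples(limit):
    """Search for a^3 + b^3 = c^3 by testing whether any shell is
    a perfect cube (FLT for n = 3 says none ever is)."""
    found = []
    for a in range(1, limit):
        for k in range(1, limit):
            s = shell(a, k)
            b = round(s ** (1 / 3))
            # Check neighbors of the float cube root to dodge rounding.
            if any(t ** 3 == s for t in (b - 1, b, b + 1)):
                found.append((a, b, a + k))
    return found

print(cubic_counterexamples(200))  # → []
```
|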
| This fits my physical intuition of the problem and seems to me to
| be a possible way that PdF might have visualized the cubic form.
| Now, I have zero mathematical intuition about such things (or
| necessary math skills either), but the physical nature of the
| problem, plus the fact that we now know that the cubic and beyond
| don't work, leads me to wonder if this is an interesting
| approach. It also makes me curious to know why it isn't, if that
| is indeed the case (which I assume is probable).
|
| As to the n=4 and beyond, I would just hand-wave them away and
| say something like, "Well, of course they are more fractally
| impossible", which by that I mean that a real mathematician would
| have to say exactly why the n=3 case failing means that the n>=4
| case would also not work. (My guess here is that there just
| becomes less and less possibility of matching up higher-
| dimensional versions of a two-term addition.)
|
| Anyway, just a thought I had some years ago. I even built a lego
| model of the bits of cube spreading out along the corner of the
| a^3 cube.
|
| I would enjoy learning why I'm so utterly wrong about this, but I
| found it a fun mental exercise some years ago that's been
| swimming around in my brain.
|
| Thanks in advance if there are any takers around here. :-)
|
| [And, my favorite Horizon episode is 'Freak Wave'.]
| sam_goody wrote:
| Richard Feynman, while still a student at Princeton, found an
| error in some well-known proof and set himself a rule to
| double-check every theorem he would use.
|
| I don't remember the details of the story (I read "Surely You're
| Joking, Mr. Feynman!" years ago), and remember being amazed by
| how much time that policy must have cost him.
|
| But now I wonder whether he hit dozens or hundreds of errors
| over the years.
| jebarker wrote:
| Formalization of maths seems like an overwhelmingly huge task. A
| bit like Cyc but harder. Is there some kind of chain reaction
| likely to happen once a critical mass of math has been formalized
| so that it becomes easier to do more?
___________________________________________________________________
(page generated 2024-12-12 23:00 UTC)