[HN Gopher] The cultural divide between mathematics and AI
___________________________________________________________________
The cultural divide between mathematics and AI
Author : rfurmani
Score : 270 points
Date : 2025-03-12 16:07 UTC (1 day ago)
(HTM) web link (sugaku.net)
(TXT) w3m dump (sugaku.net)
| mistrial9 wrote:
| > Throughout the conference, I noticed a subtle pressure on
| presenters to incorporate AI themes into their talks, regardless
| of relevance.
|
| This is well-studied and not unique to AI, to the English-speaking
| USA, or even to Western traditions. Here is what I mean: a book
| called Diffusion of Innovations by Rogers lays out a history of
| technology introduction: if the results are tallied in
| population, money, or other prosperity, the civilizations and
| their language groups that have systematic ways to explore and
| apply new technology are "winners" in the global context.
|
| AI is a powerful lever. The meta-conversation here might be
| around concepts of cancer, imbalance, and chairs on the deck of
| the Titanic... but this is getting off-topic for maths.
| golol wrote:
  | I think another way to look at this is that subtly considering
  | AI in your AI-unrelated research is just respecting the bitter
  | lesson. You need to at least consider how a data-driven
  | approach might work for your problem. It could totally wipe
  | you out, making your approach pointless. That's the bitter
  | lesson.
| golol wrote:
| Nice article. I didn't read every section in detail, but I think
| it makes a good point: AI researchers maybe focus too much on the
| thought of creating new mathematics, while being able to
| reproduce, index, or formalize existing mathematics is really the
| key goal imo. This will then also lead to new mathematics. I
| think the more you advance in mathematical maturity the bigger
| the "brush" becomes with which you make your strokes. As an
| undergrad a stroke can be a single argument in a proof, or a
| simple Lemma. As a professor it can be a good guess for a well-
| posedness strategy for a PDE. I think AI will help humans find
| new mathematics with much bigger brush strokes. If you need to
| generalize a specific inequality on the whole space to Lipschitz
| domains, perhaps AI will give you a dozen pages, perhaps even of
| formalized Lean, in a single stroke. If you are a scientist and
| consider an ODE model, perhaps AI can give you formally verified
| error and convergence bounds using your specific constants. You
| switch to a probabilistic setting? Do not worry. All of these are
| examples of not very deep but tedious and non-trivial
| mathematical busywork that can take days or weeks. The
| mathematical ability necessary to do this has in my opinion
| already been demonstrated by o3 in rare cases. It cannot piece
| things together yet, though. But GPT-4 could not piece together
| proofs for undergrad homework problems while o3 now can. So I
| believe improvement is quite possible.
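|
| To give a feel for what that busywork looks like formalized, here
| is a toy Lean 4 lemma (a made-up illustration using only the core
| library; the kind of output I mean would be thousands of such
| steps chained together):
|
|     -- Adding the same constant to both sides preserves <=.
|     -- Lean checks this mechanically, with no human review.
|     theorem add_le_add_right' {a b : Nat} (h : a <= b) (c : Nat) :
|         a + c <= b + c :=
|       Nat.add_le_add_right h c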
| esafak wrote:
| AI is young, and at the center of the industry spotlight, so it
| attracts a lot of people who are not in it to understand
| anything. It's like when the whole world got on the Internet, and
| the culture suddenly shifted. It's a good thing; you just have to
| dress up your work in the right language, and you can get
| funding, like when Richard Bellman coined the term "dynamic
| programming" to make it palatable to the Secretary of Defense,
| Charles Wilson.
| deadbabe wrote:
| AI has been around since at least the 1970s.
| tromp wrote:
    | Or 1950 if you consider the Turing Test, or 1912 if you
| consider Torres Quevedo's machine El Ajedrecista that plays
| rook endings. The illusion of AI dates back to 1770's The
| Turk.
| abstractbill wrote:
| Yes, and all of these dates would be considered "young" by
| most mathematicians!
| int_19h wrote:
| "[The Analytical Engine] might act upon other things
| besides number, were objects found whose mutual fundamental
| relations could be expressed by those of the abstract
| science of operations, and which should be also susceptible
| of adaptations to the action of the operating notation and
| mechanism of the engine...Supposing, for instance, that the
| fundamental relations of pitched sounds in the science of
| harmony and of musical composition were susceptible of such
| expression and adaptations, the engine might compose
| elaborate and scientific pieces of music of any degree of
| complexity or extent." - Ada Lovelace, 1842
| bluefirebrand wrote:
| Not in any way that is relevant to the conversation about AI
| that has exploded this decade
| nicf wrote:
| I'm a former research mathematician who worked for a little while
| in AI research, and this article matched up very well with my own
| experience with this particular cultural divide. Since I've spent
| a lot more time in the math world than the AI world, it's very
| natural for me to see this divide from the mathematicians'
| perspective, and I definitely agree that a lot of the people I've
| talked to on the other side of this divide don't seem to quite
| get what it is that mathematicians want from math: that the
| primary aim isn't really to find out _whether_ a result is true
| but _why_ it's true.
|
| To be honest, it's hard for me not to get kind of emotional about
| this. Obviously I don't know what's going to happen, but I can
| imagine a future where some future model is better at proving
| theorems than any human mathematician, like the situation, say,
| chess has been in for some time now. In that future, I would
| still care a lot about learning why theorems are true --- the
| process of answering those questions is one of the things I find
| the most beautiful and fulfilling in the world --- and it makes
| me really sad to hear people talk about math being "solved", as
| though all we're doing is checking theorems off of a to-do list.
| I often find the conversation pretty demoralizing, especially
| because I think a lot of the people I have it with would probably
| really enjoy the thing mathematics actually is much more than the
| thing they seem to think it is.
| jasonhong wrote:
| Interestingly, the main article mentions Bill Thurston's paper
| "On Proof and Progress in Mathematics"
| (https://www.math.toronto.edu/mccann/199/thurston.pdf), but
| doesn't mention a quote from that paper that captures the
| essence of what you wrote:
|
| > "The rapid advance of computers has helped dramatize this
| point, because computers and people are very different. For
| instance, when Appel and Haken completed a proof of the 4-color
| map theorem using a massive automatic computation, it evoked
| much controversy. I interpret the controversy as having little
| to do with doubt people had as to the veracity of the theorem
| or the correctness of the proof. Rather, it reflected a
| continuing desire for human understanding of a proof, in
| addition to knowledge that the theorem is true."
|
  | Incidentally, I've also seen a similar problem when reviewing HCI
| and computer systems papers. Ok sure, this deep learning neural
| net worked better, but what did we as a community actually
| learn that others can build on?
| nicf wrote:
| The Four Color Theorem is a great example! I think this story
    | is often misrepresented as one where mathematicians _didn't
    | believe_ the computer-aided proof. Thurston gets the story
    | right: I think basically everyone in the field took it as
    | resolving the _truth_ of the Four Color Theorem --- although
    | I don't think this was really in serious doubt --- but in an
    | incredibly unsatisfying way. They wanted to know what
    | underlying pattern in planar graphs forces them all to be
    | 4-colorable, and "well, we reduced the question to these tens
    | of thousands of possible counterexamples and they all turned
    | out to be 4-colorable" leaves a lot to be desired as an
    | answer to that question. (This is especially true because the
    | _Five_ Color Theorem does have a very beautiful proof. I
    | teach at a math enrichment program for high schoolers on
    | weekends, and the result was simple enough that we could get
    | all the way through it in class.)
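    |
    | (For the curious: the heart of that Five Color proof is
    | elementary. Euler's formula
    |
    |     v - e + f = 2   =>   e <= 3v - 6
    |
    | forces some vertex to have degree at most 5, and induction on
    | the number of vertices, with a Kempe-chain color swap around
    | that low-degree vertex, finishes the argument.)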
| rssoconnor wrote:
| Is the proof of the Four Colour Theorem really that
| unsatisfying?
|
| The Four Colour Theorem is true because there exists a
| finite set of unavoidable yet reducible configurations.
| QED.
|
| To verify this computational fact one uses a (very)
| glorified pocket calculator.
| godelski wrote:
| I think you're missing the core component. We care
| __WHY__ the theorem is true. To be honest, the __IF__
| part matters a lot less.
|
| The thing is that the underlying reasoning (the logic) is
| what provides real insights. This is how we recognize
| other problems that are similar or even identical. The
| steps in between are just as important, and often more
| important.
|
| I'll give an example from physics. (If you're unsatisfied
| with this one, pick another physics fact and I'll do my
        | best. _Any_ will do.) Here's a "fact"[0]: atoms with an
        | even number of electrons are more stable than those with
        | an odd number. We knew this in the 1910s, and this is a
        | fact that directly led to the Pauli Exclusion Principle,
        | which led us to better understand chemical bonds. Asking
        | why Pauli Exclusion happens furthers our understanding
        | and leads us to a better understanding of the atomic
        | model. It goes on and on like this.
        |
        | It has always been about the why. The why is what leads
        | us to new information. The why is what leads to
        | generalization. The why is what leads to causality and
        | predictive models. The why is what makes the fact useful
        | in the first place.
        |
        | [0] Quotes are because truth is very very hard to derive.
        | https://hermiene.net/essays-trans/relativity_of_wrong.html
| rssoconnor wrote:
          | I do think the _why_ of the Four Colour Theorem being
          | true is captured by my statement. The reason _why_ it is
          | true is that there exists some finite unavoidable and
          | reducible set of configurations.
|
| I'm fairly sure that people are only getting hung up on
| the size of this finite set, for no good reason. I
| suspect that if the size of this finite set were 2,
          | instead of 633, and you could draw these unavoidable
          | configurations on the chalkboard, and easily illustrate
| that both of them are reducible, everyone would be saying
| "ah yes, the four colour theorem, such an elegant proof!"
|
| Yet, whether the finite set were of size 2 or size 633,
| the fundamental insight would be identical: there exists
| some finite unavoidable and reducible set of
| configurations.
| gsf_emergency_2 wrote:
| If it were size 2, we could more easily make sure that
| the answer is definitely mind-blowing.
|
            | Have programmers given up on wanting their minds blown
            | by unbelievable simplicity?
| godelski wrote:
| I really doubt this. I mean mathematicians spent decades
| trying to answer if the number 2 exists. People spend a
| lot of time on what seems fairly mundane and frankly, the
            | results are quite beneficial. What's incredible or mind-
            | blowing is really just about your perspective; it is
            | really just about your choice to wonder more.
| https://www.youtube.com/shorts/lcQAWEqPmeg
| roenxi wrote:
| > I'm fairly sure that people are only getting hung up on
| the size of this finite set, for no good reason.
|
| I think that is exactly correct, except for the "no good
| reason" part. There aren't many (any?) practical
            | situations where the 4-colour theorem's provability
| matters. So the major reason for studying it is coming up
| with a pattern that can be used in future work.
|
| Having a pattern with a small set (single digit numbers)
| means that it can be stored in the human brain. 633
            | objects can't be. That limits the proof's usefulness.
| myst wrote:
              | Nature does not care whether it fits in our brains.
| dartos wrote:
| That's why we use math to describe nature in a way that
| fits in our brains.
|
| That's the whole point of math.
| username332211 wrote:
| > So the major reason for studying it is coming up with a
| pattern that can be used in future work.
|
              | Surely, reducing the infinite ways in which polygons can
| be placed on a plane to a finite set, no matter how
| large, must involve some pattern useful for future work?
| dartos wrote:
                | But why stop at "some" pattern when you can find the
                | most general pattern?
| seanhunter wrote:
| > The reason why it is true is because there exists some
| finite unavoidable and reducible set of configurations.
|
| OK but respectfully that's just restating the problem in
| an alternative form. We don't get any insight from it.
| Why does there exist this limit? What is it about _this_
| problem that makes _this_ particular structure happen?
| gsf_emergency_2 wrote:
| Mind-blowing result from a different _attempt_ to prove
| four color theorem
|
| https://blog.tanyakhovanova.com/2024/11/foams-made-out-
| of-fe...
| gsf_emergency_2 wrote:
| Fwiw fairly recently posted to HN was progress towards a
| more satisfying (read, likely mind-blowing) proof:
|
| https://blog.tanyakhovanova.com/2024/11/foams-made-out-
| of-fe...
|
              | This is also nice because only pre-1600 tech is involved.
| eru wrote:
| > The Four Colour Theorem is true because there exists a
| finite set of unavoidable yet reducible configurations.
| QED.
|
| You just summarised (nearly) everything a mathematician
| can get out of that computerised proof. That's
| unsatisfying. It doesn't give you any insight into any
| other areas of math, nor does it suggest interesting
| corollaries, nor does it tell you which pre-condition of
| the statement does what work.
|
| That's rather underwhelming. That's less than you can get
| out of the 100th proof of Pythagoras.
| troymc wrote:
| Another example akin to the proof of the 4-color map theorem
| was the proof of the Kepler conjecture [1], i.e. "Grocers
| stack their oranges in the densest-possible way."
|
| We "know" it's true, but only because a machine ground
| mechanically through lots of tedious cases. I'm sure most
| mathematicians would appreciate a simpler and more elegant
| proof.
|
| [1] https://en.wikipedia.org/wiki/Kepler_conjecture
| Henchman21 wrote:
| I've worked in tech my entire adult life and boy do I feel this
| deep in my soul. I have slowly withdrawn from the higher-level
| tech designs and decision making. I usually disagree with all
| of it. Useless pursuits made only for resume fodder. Tech
| decisions made based on the bonus the CTO gets from the vendors
| (Superbowl tickets anyone?) not based on the suitability of the
| tech.
|
| But absolutely worst of all is the arrogance. The hubris. The
| thinking that because some human somewhere has figured a thing
  | out, it's then just implicitly _known_ by these types. The
| casual disregard for their fellow humans. The lack of true care
| for anything and anyone they touch.
|
  | Move fast and break things!! _Even when it's the society you
  | live in_.
|
| That arrogance and/or hubris is just another type of stupidity.
| bluefirebrand wrote:
    | > Move fast and break things!! Even when it's the society you
    | live in.
|
| This is the part I don't get honestly
|
| Are people just very shortsighted and don't see how these
| changes are potentially going to cause upheaval?
|
| Do they think the upheaval is simply going to be worth it?
|
| Do they think they will simply be wealthy enough that it
| won't affect them much, they will be insulated from it?
|
| Do they just never think about consequences at all?
|
| I am trying not to be extremely negative about all of this,
    | but the speed at which things are moving makes me think we'll
| hit the cliff before even realizing it is in front of us
|
| That's the part I find unnerving
| feoren wrote:
| > Do they think they will simply be wealthy enough that it
| won't affect them much, they will be insulated from it?
|
| Yes, partly that. Mostly they only care about their rank.
| Many people would burn down the country if it meant they
| could be king of the ashes. Even purely self-interested
| people should welcome a better society for all, because a
| rising tide lifts all boats. But not only are they selfish,
| they're also very stupid, at least in this aspect. They
| can't see the world as anything but zero sum, and
| themselves as either winning or losing, so they must win at
| all costs. And those costs are huge.
| brobdingnagians wrote:
        | Reminds me of the Paradise Lost quote, "Better to reign
        | in Hell, than serve in Heaven" --- such an insightful book
        | from Milton for understanding a certain type of person.
        | Beautiful imagery throughout too, highly recommend.
| Henchman21 wrote:
| > Do they just never think about consequences at all?
|
| Yes, I think this is it. Frequently using social media and
| being "online" leads to less critical thought, less
| thinking overall, smaller window that you allow yourself to
| think in, thoughts that are merely sound bites not fully
      | fleshed out thoughts, and so on. One's thoughts can easily
      | become a milieu of memes and falsehoods. A person whose
      | mind is in that state will do whatever anyone suggests for
      | that next dopamine hit!
|
| I am guilty of it all myself which is how I can make this
| claim. I too fear for humanity's future.
| unsui wrote:
| I've called this out numerous times (and gotten downvoted
| regularly), with what I call the "Cult of Optimization"
|
| aka optimization-for-its-own-sake, aka pathological
| optimization.
|
| It's basically meatspace internalizing and adopting the
| paperclip problem as a "good thing" to pursue, screw
| externalities and consequences.
|
        | And, lo-and-behold, my read for why it gets downvoted here
        | is that a lot of folks on HN subscribe to this mentality,
        | as it is part of the HN ethos to optimize, often
        | pathologically.
| jmount wrote:
| Love your point. "Lack of alignment" affects more than
| just AIs.
| chasd00 wrote:
| Humans like to solve problems and be at the top of the
| heap. Such is life, survival of the fittest after all. AI
        | is a problem to solve; whoever gets to AGI first will be at
| the top of the heap. It's a hard drive to turn off.
| bluefirebrand wrote:
| In theory this is actually pretty easy to "turn off"
|
| You flatten the heap
|
| You decrease or eliminate the reward for being at the top
|
| You decrease or eliminate the penalty for being at the
| bottom
|
| The main problem is that we haven't figured out a good
| way to do this without creating a whole bunch of other
| problems
| Dracophoenix wrote:
| > Are people just very shortsighted and don't see how these
| changes are potentially going to cause upheaval?
|
| > Do they think the upheaval is simply going to be worth
| it?
|
| All technology causes upheaval. We've benefited from many
| of these upheavals to the point where it's impossible for
| most to imagine a world without the proliferation of
| movable type, the internal combustion engine, the computer,
| or the internet. All of your criticisms could have easily
| been made word for word by the Catholic Church during the
| medieval era. The "society" of today is no more of a sacred
      | cow than its antecedent incarnations were half a millennium
| ago. As history has shown, it must either adapt, disperse,
| or die.
| bluefirebrand wrote:
| > The "society" of today is no more of a sacred cow than
        | its antecedent incarnations were half a millennium ago. As
| history has shown, it must either adapt, disperse, or die
|
| I am not concerned about some kind of "sacred cow" that I
| want to preserve
|
| I am concerned about a future where those with power no
| longer need 90% of the population so they deploy
| autonomous weaponry that grinds most of the population
| into fertilizer
|
| And I'm concerned there are a bunch of short sighted
| idiots gleefully building autonomous weaponry for them,
| thinking they will either be spared from mulching, or be
| the mulchers
|
| Edit: The thing about appealing to history is that it
| also shows that when upper classes get too powerful they
| start to lose touch with everyone else, and this often
| leads to turmoil that affects the common folk most
|
| As one of the common folk, I'm pretty against that
| andrewl wrote:
| Exactly. It was described in Chesterton's Fence:
|
| There exists in such a case a certain institution or law;
| let us say, for the sake of simplicity, a fence or gate
| erected across a road. The more modern type of reformer
| goes gaily up to it and says, "I don't see the use of this;
| let us clear it away." To which the more intelligent type
| of reformer will do well to answer: "If you don't see the
| use of it, I certainly won't let you clear it away. Go away
| and think. Then, when you can come back and tell me that
| you do see the use of it, I may allow you to destroy it."
| dkarl wrote:
| > But absolutely worst of all is the arrogance. The hubris.
| The thinking that because some human somewhere has figured a
| thing out that its then just implicitly known by these types.
|
| I worked in an organization afflicted by this and still have
| friends there. In the case of that organization, it was
| caused by an exaggerated glorification of management over
| ICs. Managers truly did act according to the belief, and show
| every evidence of sincerely believing in it, that their
| understanding of every problem was superior to the sum of the
| knowledge and intelligence of every engineer under them in
| the org chart, not because they respected their engineers and
| worked to collect and understand information from them, but
| because managers are a higher form of humanity than ICs, and
| org chart hierarchy reflects natural superiority. Every
| conversation had to be couched in terms that didn't
| contradict those assumptions, so the culture had an extremely
| high tolerance for hand-waving and BS. Naturally this created
| cover for all kinds of selfish decisions based on politics,
| bonuses, and vendor perks. I'm very glad I got out of there.
|
| I wouldn't paint all of tech with the same brush, though.
| There are many companies that are better, much better. Not
| because they serve higher ideals, but because they can't
| afford to get so detached from reality, because they'd fail
| if they didn't respect technical considerations and respect
| their ICs.
| dang wrote:
| I'm sure that many of us sympathize, but can you please
| express your views without fulminating? It makes a big
| difference to discussion quality, which is why this is in the
| site guidelines:
| https://news.ycombinator.com/newsguidelines.html.
|
| It's not just that comments that vent denunciatory feelings
| are lower-quality themselves, though usually they are. It's
| that they exert a degrading influence on the rest of the
| thread, for a couple reasons: (1) people tend to respond in
| kind, and (2) these comments always veer towards the generic
| (e.g. "lack of true care for anything and anyone", "just
| another type of stupidity"), which is bad for curious
| conversation. Generic stuff is repetitive, and indignant-
| generic stuff doubly so.
|
| By the time we get further downthread, the original topic is
| completely gone and we're into "glorification of management
| over ICs" (https://news.ycombinator.com/item?id=43346257).
| Veering offtopic can be ok when the tangent is even more
| interesting (or whimsical) than the starting point, but most
| tangents aren't like that--mostly what they do is replace a
| more-interesting-and-in-the-key-of-curiosity thing with a
| more-repetitive-and-in-the-key-of-indignation thing, which is
| a losing trade for HN.
| lordleft wrote:
| I'm not a mathematician so please feel free to correct me...but
| wouldn't there still be an opportunity for humans to try to
| understand why a proof solved by a machine is true? Or are you
| afraid that the culture of mathematics will shift towards being
  | impatient about these sorts of questions?
| nicf wrote:
| Well, it depends on exactly what future you were imagining.
| In a world where the model just spits out a totally
| impenetrable but formally verifiable Lean proof, then yes,
| absolutely, there's a lot for human mathematicians to do. But
| I don't see any particular reason things would have to stop
| there: why couldn't some model also spit out nice, beautiful
| explanations of why the result is true? We're certainly not
| there yet, but if we do get there, human mathematicians might
| not really be producing much of anything. What reason would
| there be to keep employing them all?
|
| Like I said, I don't have any idea what's going to happen.
| The thing that makes me sad about these conversations is that
| the people I talk to sometimes don't seem to have any
| appreciation for the thing they say they want to dismantle.
| It might even be better for humanity on the whole to arrive
| in this future; I'm not arguing that one way or the other!
| Just that I think there's a chance it would involve losing
| something I really love, and that makes me sad.
| GPerson wrote:
| I don't think the advent of superintelligence will lead to
| increased leisure time and increased well-being / easier
| lives. However, if it did I wouldn't mind redundantly
| learning the mathematics with the help of the AI. It's
| intrinsically interesting and ultimately I don't care to
| impress anybody, except to the extent it's necessary to be
| employable.
| nicf wrote:
| I would love that too. In fact, I already spend a good
| amount of my free time redundantly learning the
| mathematics that was produced by humans, and I have fun
| doing it. The thing that makes me sad to imagine --- and
| again, this is not a prediction --- is the loss of the
| community of human mathematicians that we have right now.
| nonethewiser wrote:
| >But I don't see any particular reason things would have to
| stop there: why couldn't some model also spit out nice,
| beautiful explanations of why the result is true?
|
      | Oh... I didn't anticipate this would bother you. Would it be
      | fair to say that it's not that you like understanding why
      | it's true, because you have that here, but that you like the
      | process of discovering why?
      |
      | Perhaps that's what you meant originally. But my
| understanding was that you were primarily just concerned
| with understanding why, not being the one to discover why.
| nicf wrote:
| This is an interesting question! You're giving me a
| chance to reflect a little more than I did when I wrote
| that last comment.
|
| I can only speak for myself, but it's not that I care a
| lot about me personally being the first one to discover
| some new piece of mathematics. (If I did, I'd probably
| still be doing research, which I'm not.) There is
| something very satisfying about solving a problem for
| yourself rather than being handed the answer, though,
| even if it's not an original problem. It's the same
| reason some people like doing sudokus, and why those
| people wouldn't respond well to being told that they
| could save a lot of time if they just used a sudoku
| solver or looked up the answer in the back of the book.
|
| But that's not really what I'm getting at in the sentence
| you're quoting --- people are still free to solve sudokus
| even though sudoku solvers exist, and the same would
| presumably be true of proving theorems in the world we're
| considering. The thing I'd be most worried about is the
| destruction of the community of mathematicians. If math
| were just a fun but useless hobby, like, I don't know,
| whittling or something, I think there would be way fewer
| people doing it. And there would be even fewer people
| doing it as deeply and intensely as they are now when
| it's their full-time job. And as someone who likes math a
| lot, I don't love the idea of that happening.
| zmgsabst wrote:
| CNCs and other technology haven't destroyed woodworking.
          | There are whole communities on YouTube -- with a spectrum
| from casual to hobbyist to artisanal to industrial.
|
| Why would mathematics be different than woodworking?
|
| Do you believe there's a limited demand for mathematics?
| -- my experience is quite the opposite, that we're
| limited by the production capacity.
| dinkumthinkum wrote:
            | HN has this very unique and strange type of reasoning.
            | You're actually asking why mathematics would be any
            | different than woodworking because of CNC machines? It's
            | like any issue can be reduced to the most mundane
            | observations and simplifications because we have to
            | justify all technology. Professional mathematics requires
            | years of intense training, usually (i.e., almost always)
            | in graduate school, with the entire machinery of that.
            | You're comparing something many people do as a hobby to
            | the life's work of others. Of course you can wave all
            | this away with some argument, but I'm not sure this type
            | of reasoning is going to save the technocrats when the
            | majority of people realize what this all portends for
            | society.
| nicf wrote:
| This is actually a metaphor I've used myself. I do think
| the woodworking community is both smaller and less
| professionalized than it would be in a world where
| industrial furniture production didn't exist. (This is a
| bizarre counterfactual, because it's basically impossible
| for me to imagine a world where industrial furniture
| production doesn't exist but YouTube does, but like
| pretend with me here for a moment.) I don't know that
| this is necessarily a bad thing, but it's definitely
| different, and I can imagine that if I were a woodworker
| who lived through the transition from one world to the
| other I would find it pretty upsetting! As I said above,
| I'm not claiming it's not worth making the transition
| anyway, but it does come with a cost.
|
| One place I think the analogy breaks down, though, is
| that I think you're pretty severely underestimating the
| time and effort it takes to be productive at math
| research. I think my path is pretty typical, so I'll
| describe it. I went to college for four years and took
| math classes the whole time, after which I was nowhere
| near prepared to do independent research. Then I went to
| graduate school, where I received a small stipend to
| teach calculus to undergrads while I learned even more
| math, and at the end of four and a half years of that ---
| including lots of one-on-one mentorship from my advisor
            | --- I was just barely able to kinda sorta produce some
| publishable-but-not-earthshattering research. If I wanted
| to produce research I was actually proud of, it probably
| would have taken several more years of putting in reps on
| less impressive stuff, but I left the field before
| reaching that point.
|
| Imagine a world where any research I could have produced
| at the end of those eight and a half years would be
| inferior to something an LLM could spit out in an
| afternoon, and where a different LLM is a better calculus
| instructor than a 22-year-old nicf. (Not a high bar!) How
| many people are going to spend all those years learning
| all those skills? More importantly, why would they expect
| to be paid to do that while producing nothing the whole
| time?
| jebarker wrote:
| > The thing that makes me sad about these conversations is
| that the people I talk to sometimes don't seem to have any
| appreciation for the thing they say they want to dismantle
|
      | Yes! This is what frustrates me about the pursuit of AI for
| the arts too.
| zmgsabst wrote:
| This seems obviously untrue: why would they be
| replicating it if they didn't want it?
|
| I see both cases as people who aren't well served by the
| artisanal version attempting to acquire a better-than-
| commoditized version because they want _more_ of that
| thing to exist. We regularly have both things in
| furniture and don't have any great moral crisis that
| chairs are produced mechanistically by machines. To me,
| both things sound like "how dare you buy IKEA furniture
| -- you have no appreciation of woodwork!"
|
| Maybe artisanal math proofs are more beautiful or some
| other aesthetic concern -- but what I'd like is proofs
| that business models are stable and not full of holes
        | constructed each time a new ML pipeline deploys, which is
| the sort of boring, rote work that most mathematicians
| are "too good" to work on. But they're what's needed to
| prevent, eg, the Amazon 2018 hiring freeze.
|
| That's the need that, eg, automated theorem proving truly
        | solves -- and mathematicians are being ignored (much like
        | artists) by people they turn up their noses at.
| jebarker wrote:
| > why would they be replicating it if they didn't want
| it?
|
| Who is "they"?
|
| Most AI for math work is being done by AI researchers
| that are not themselves academic mathematicians
          | (obviously there are exceptions). Similarly, most AI for
| music and AI for visual art is being done by AI
| researchers that themselves are not professional
| musicians or artists (again, there are exceptions). This
| model can work fine if the AI researchers collaborate
          | with mathematicians or artists to understand whether the
          | use of AI is actually useful in the workflow of those
          | fields, this
| but often that doesn't happen and there is a savior-like
| arrogance where AI researchers think they'll just
| automate those fields. Same thing happens in AI for
| medicine. So the reason many of those AI researchers want
| to do this is for the usual incentives - money and
| publications.
|
| Clearly, there are commercial use cases for AI in all
| these fields and those may involve removing humans
| entirely. But in the case of art, and I (and Hardy) would
| argue academic math, there's a human aspect that can't be
| removed. Both of those approaches can exist in the world
| and have value but AI can't replace Van Gogh entirely.
| It'll automate the process of creating mass produced
| artwork or become a tool that human artists can use. Both
| of those require understanding the application domain
| intimately, so my point stands I think.
| mvieira38 wrote:
| That is kind of hard to do. Human reasoning and computer
    | reasoning are very different, enough so that we can't really
| grasp it. Take chess, for example. Humans tend to reason in
| terms of positions and tactics, but computers just brute
| force it (I'm ignoring stuff like Alpha Zero because
| computers were way better than us even before that). There
| isn't much to learn there, so GMs just memorize the computer
| moves for so and so situation and then go back to their past
| heuristics after n moves
| Someone wrote:
| > so GMs just memorize the computer moves for so and so
| situation and then go back to their past heuristics after n
| moves
|
| I think they also adjust their heuristics, based on looking
| at thousands of computer moves.
| gessha wrote:
| It's kind of like knowing the answer to the ultimate question
| of life and not knowing the question itself ;)
| weitendorf wrote:
| It would be like having the machine code to something amazing
| but lacking the ability to adequately explain it or modify it
| - the machine code is too big and complicated to follow, so
| unless you can express it or understand it in a better way,
| it can only be used exactly how it is already.
|
    | In mathematics it is just as (if not more so) important to be
| able to apply techniques used to solve novel proofs as it is
| to have the knowledge that the theorem itself is true. Not
| only might those techniques be used to solve similar problems
| that the theorem alone cannot, but it might even uncover
| wholly new mathematical concepts that lead you to mathematics
| that you previously could not even conceive of.
|
| Machine proofs in their current form are basically huge
| searches/brute forces from some initial statements to the
| theorem being proved, by way of logical inference.
| Mathematics is in some ways the opposite of this: it's about
| understanding why something is true, not solely whether it is
    | true. Machine proofs give you _a_ path from A to B but that
    | path could be generalizable-but-not-understandable (a brute
    | force), understandable-but-not-generalizable (finding some
    | simple application of existing theorems to get the result
    | that mathematicians simply missed), or neither
    | understandable-nor-generalizable (imagine gigabytes of pure
    | propositional logic on variables with names like n098fne09
    | and awbnkdujai).
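    |
    | As a concrete (toy) picture of that search, here is a minimal
    | forward-chaining sketch in Python. This is entirely an
    | illustration with made-up rule names, not how any real prover
    | is implemented:
    |
    |     from collections import deque
    |
    |     # Hypothetical inference rules: (premises, conclusion).
    |     RULES = [({"A"}, "B"), ({"A", "B"}, "C"), ({"C"}, "GOAL")]
    |
    |     def prove(axioms, goal):
    |         known = set(axioms)
    |         parent = {}  # conclusion -> premises that produced it
    |         queue = deque(known)
    |         while queue:
    |             queue.popleft()
    |             for premises, conclusion in RULES:
    |                 if premises <= known and conclusion not in known:
    |                     known.add(conclusion)
    |                     parent[conclusion] = premises
    |                     queue.append(conclusion)
    |                     if conclusion == goal:
    |                         return parent  # enough to replay the path
    |         return None
    |
    |     print(prove({"A"}, "GOAL"))
    |
    | The returned chain is a valid path from axioms to goal, yet
    | nothing about it explains _why_ the goal holds, which is the
    | point above.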
|
| Interestingly, some mathematicians like Terry Tao are
| starting to experiment with combining LLMs with automated
| theorem proving, because it might help in both guiding the
| theorem-prover and explaining its results. I find that
| philosophically fascinating because LLMs rely on some
| practices which are not fully understood, hence the article,
| and may validate combining formal logic with informal
| intuition as a way of understanding the world (both in
| mathematics, and generally the way our own minds combine
| logical reasoning with imprecise language and feelings).
| mcguire wrote:
| Many years ago I heard a mathematician speaking about some open
| problem and he said, "Sure, it's possible that there is a
| simple solution to the problem using basic techniques that
| everyone has just missed so far. And if you find that solution,
| mathematics will pat you on the head and tell you to run off
| and play.
|
| "Mathematics advances by solving problems using new techniques
| because those techniques open up new areas of mathematics."
| psunavy03 wrote:
| That seems like a justification that is right on the knife's
| edge of being a self-licking ice cream cone.
| senderista wrote:
| Really? I've always had the impression that "elementary"
| proofs of hard problems are highly valued.
| ants_everywhere wrote:
| A proof of a long-open conjecture that uses only elementary
| techniques is typically long and convoluted.
|
| Think of the problem as requiring spending a certain amount
| of complexity to solve. If you don't spend it on developing
| a new way of thinking then you spend it on long and tedious
| calculations that nobody can keep in working memory.
|
| It's similar to how you can write an AI model in Pytorch or
| you can write down the logic gates that execute on the GPU.
| The logic gate representation uses only elementary
| techniques. But nobody wants to read or check it by hand.
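      |
      | A toy version of that contrast, in plain Python instead of
      | Pytorch and gates (an illustration, not anything from a real
      | model): the two functions below compute the same thing, but
      | only one tells you what is being computed.
      |
      |     def xor_highlevel(a: bool, b: bool) -> bool:
      |         # The high-level description: one readable line.
      |         return a != b
      |
      |     def nand(a: bool, b: bool) -> bool:
      |         return not (a and b)
      |
      |     def xor_gates(a: bool, b: bool) -> bool:
      |         # The same function built only from NAND gates, the
      |         # "elementary" route: correct, but opaque at scale.
      |         t = nand(a, b)
      |         return nand(nand(a, t), nand(b, t))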
| lupire wrote:
| That's the attitude of poor mathematicians who are insecure
| about their own faults.
| mb7733 wrote:
| What the hell is that quote? No, a simple proof is the
| absolute mathematical ideal!
| jvans wrote:
  | In poker, AI solvers tell you what the optimal play is and it's
  | your job to reverse engineer the principles behind it. It cuts
  | a lot of the guesswork out but there's still plenty of hard
  | work left in understanding the why, and ultimately that's where
  | the skill comes in. I wonder if we'll see the same in math.
| optimalsolver wrote:
| If the shortest proof for some theorem is several thousand
| pages long and beyond the ability of any biological mind to
| comprehend, would mathematicians not care about it?
|
| Which is to say, if you only concern yourself with theorems
| which have short, understandable proofs, aren't you cutting
| yourself off from vast swathes of math space?
| nicf wrote:
| Hm, good question. It depends on what you mean. If you're
| asking about restricting which theorems we try to prove, then
| we definitely _are_ cutting ourselves off from vast swathes
    | of math space, and we're doing it on purpose! The article
| we're responding to talks about mathematicians developing
| "taste" and "intuition", and this is what I think the author
| meant --- different people have different tastes, of course,
| but most conceivable true mathematical statements are ones
| that everyone would agree are completely uninteresting;
| they're things like "if you construct these 55 totally
| unmotivated mathematical objects that no one has ever cared
| about according to these 18 random made-up rules, then none
| of the following 301 arrangements are possible."
|
| If you're talking about questions that are well-motivated but
| whose _answers_ are ugly and incomprehensible, then a milder
| version of this actually happens fairly often --- some major
| conjecture gets solved by a proof that everyone agrees is
    | right but which also doesn't shed much light on why the
| thing is true. In this situation, I think it's fair to
| describe the usual reaction as, like, I'm definitely happy to
| have the confirmation that the thing is true, but I would
| much rather have a nicer argument. Whoever proved the thing
| in the ugly way definitely earns themselves lots of math
| points, but if someone else comes along later and proves it
| in a clearer way then they've done something worth
| celebrating too.
|
| Does that answer your question?
| diamondage wrote:
      | So Gödel proved that there are true statements that are
      | unprovable. My hunch is that there is a fine-grained
      | version of this result <-- that there is some
      | distribution on the lengths of proofs for any given
      | conjecture. If true, that would mean we had better get
      | used to dealing with long nasty proofs because they are a
      | necessary part of mathematics... perhaps even, in some
      | kind of Kolmogorov-complexity-esque fashion, the almost-
      | always bulk of it.
| photonthug wrote:
| Agree that something like this does seem likely. And this
| line of thought also highlights the work of Chaitin, and
| the fact that the current discussion around AI is just
| the latest version of early-2000s quasi-empiricism[1]
| stuff that never really got resolved. Things like the
| 4-color theorem would seem to be just the puny top of
| really big iceberg, and it's probably not going away.
|
        | The new spin on these older unresolved issues IMHO is
| really the black-box aspect of our statistical
| approaches. Lots of mathematicians that are fine with
| proof systems like Lean and some million-step process
| that _can in principle be followed_ are also happy with
| more open-ended automated search and exploration of model
        | spaces, proof spaces, etc. But can they ever really be
| happy with a million gigabyte network of weighted nodes
| masquerading as some kind of "explanation" though? Not a
| mathematician but I sympathize. Given the difficulty of
| building/writing/running it, that looks more like a
| product than like "knowledge" to me (compare this to how
| Lean can prove Godel on your laptop).
|
| Maybe it's easier to swallow the bitter pill of _poor
| quality explanations_ though after the technology itself
| is a little easier to actually handle. People hate ugly
| things less if they are practical, and actually something
        | you can build pretty stuff on top of.
|
| https://en.wikipedia.org/wiki/Quasi-
| empiricism_in_mathematic...
| dullcrisp wrote:
| I'm not sure that's quite true. Say the proof of
| proposition P requires a minimum of N symbols. You could
| prove it in one paper that's N symbols long and nobody
| can read, or you can publish k readable papers, with an
| average length on the order of N/k symbols, and develop a
| theory that people can use.
|
| I think even if N is quite large, that just means it may
| take decades or millennia to publish and understand all k
| necessary papers, but maybe it's still worth the effort
| even if we can get the length-N paper right away. What
| are you going to do with a mathematical proof that no one
| can understand anyway?
| mb7733 wrote:
| > If the shortest proof for some theorem is several thousand
| pages long and beyond the ability of any biological mind to
| comprehend, would mathematicians not care about it?
|
| Care or not, what are they supposed to do with it?
|
| Sure, they can now assume the theorem to be true, but nothing
| stopped them from doing that before.
| godelski wrote:
| > the primary aim isn't really to find out whether a result is
| true but why it's true.
|
| I'm honestly surprised that there are mathematicians that think
| differently (my background[0]). There are so many famous
| mathematicians stating this through the years. Some more subtle
| like Poincare stating that math is not the study of numbers but
| the relationship between them, while others far more explicit.
| This sounds more like what I hear from the common public who
| think mathematics is discovered and not invented (how does
| anyone think anything different after taking Abstract
| Algebra?).
|
| But being over in the AI/ML world now, this is my NUMBER ONE
| gripe. Very few are trying to understand why things are
| working. I'd argue that the biggest reason machines are black
  | boxes is that no one is bothering to look inside them.
| You can't solve things like hallucinations and errors without
| understanding these machines (and there's a lot we already do
| understand). There's a strong pushback against mathematics and
| I really don't understand why. It has so many tools that can
| help us move forward, but yes, it takes a lot of work. It's bad
| enough I know people who have gotten PhDs from top CS schools
| (top 3!) and don't understand things like probability
| distributions.
|
| Unfortunately doing great things takes great work and great
| effort. I really do want to see the birth of AI, I wouldn't be
| doing this if I didn't, but I think it'd be naive to believe
| that this grand challenge can entirely be solved by one field
| and something so simple as throwing more compute (data,
| hardware, parameters, or however you want to reframe the Bitter
| Lesson this year).
|
| Maybe I'm biased because I come from physics where we only care
| about causal relationships. The "_why_" is the damn
| Chimichanga. And I should mention, we're very comfortable in
| physics working with non-deterministic systems and that doesn't
| mean you can't form causal relationships. That's what the last
| hundred and some odd years have been all about.[1]
|
| [0] Undergrad in physics, moved to work as engineer, then went
| to grad school to do CS because I was interested in AI and
| specifically in the mathematics of it. Boy did I become
  | disappointed years later...
|
| [1] I think there is a bias in CS. I notice there is a lot of
| test driven development, despite that being well known to be
| full of pitfalls. You unfortunately can't test your way into a
| proof. Any mathematician or physicist can tell you. Just
| because your thing does well on some tests doesn't mean there
| is proof of anything. Evidence, yes, but that's far from proof.
| Don't make the mistake Dyson did:
| https://www.youtube.com/watch?v=hV41QEKiMlM
| solveit wrote:
| > I'd argue that the biggest reason machines are black boxes
| are because no one is bothering to look inside of them.
|
| People do look, but it's extremely hard. Take a look at how
| hard the mechanistic interpretability people have to work for
| even small insights. Neel Nanda[1] has some very nice
| writeups if you haven't already seen them.
|
| [1]: https://www.neelnanda.io/mechanistic-interpretability
| jebarker wrote:
| The problem is that mechanistic interpretability is a lot
| like neuroscience or molecular biology, i.e. you're trying
| to understand huge complexity from relatively crude point
| measurements (no offense intended to neuroscientists and
| biologists). But AI wants publishable results yesterday. I
| often wonder whether the current AI systems will stay
| around long enough for anyone to remain interested in
| understanding why they ever worked.
| godelski wrote:
| People will always be interested in why things work. At
| least one will as long as I'm alive, but I really don't
| think I'm that special. Wondering why things are the way
| they are is really at the core of science. Sure, there
| are plenty of physicists, mathematicians,
| neuroscientists, biologists, and others who just want
| answers, but this is a very narrow part of science.
|
| I would really encourage others to read works that go
| through the history of the topic they are studying. If
| you're interested in quantum mechanics, the one I'd
| recommend is "The Quantum Physicists" by William
| Cropper[0]. It won't replace Griffiths[1] but it is a
| good addition.
|
| The reason that getting information like this is VERY
| helpful is that it teaches you how to solve problems and
| actually go into the unknown. It is easy to learn things
| from a book because someone is there telling you all the
        | answers, but texts like these instead put you in the
| shoes of the people in those times, and focus on seeing
| what and why certain questions are being asked. This is
| the hard thing when you're at the "end". When you can't
| just read new knowledge from a book, because there is no
| one that knows! Or the issue Thomas Wolf describes
| here[2] and why he struggled.
|
| [0] https://www.amazon.com/Quantum-Physicists-
| Introduction-Their...
|
| [1] https://www.amazon.com/Introduction-Quantum-
| Mechanics-David-...
|
| [2] https://thomwolf.io/blog/scientific-ai.html
| godelski wrote:
| > People do look
|
      | This was never in question.
      |
      | > Very few are trying to understand why things are working
|
| What is in question is why this is given so little
| attention. You can hear Neel talk about this himself. It is
| the reason he is trying to rally people and get more into
| Mech Interp. Which frankly, this side of ML is as old as ML
| itself.
|
      | Personally, I believe that if you aren't trying to interpret
      | results and ask the why then you're not actually doing
      | science. Which is fine. There's plenty of good things that
      | come from outside science. I just think it's weird to call
      | something science if you aren't going to do hypothesis
      | testing and finding out why things are the way they are.
| 7402 wrote:
| I understand the emotion and the sadness you mention from a
| different situation I experienced about a dozen years ago. At
| that time I was entering Kaggle machine learning competitions,
| and I did well enough to reach 59th on the global leaderboard.
| But the way I did it was by trying to understand the problem
| domain and make and test models based on that understanding.
|
| However by the end of that period, it seemed to transition to a
| situation where the most important skill in achieving a good
| score was manipulating statistical machine learning tools
| (Random Forests was a popular one, I recall), rather than
| gaining deep understanding of the physics or sociology of the
| problem, and I started doing worse and I lost interest in
| Kaggle.
|
| So be it. If you want to win, you use the best tools. But the
| part that brought joy to me was not fighting for the
| opportunity to win a few hundred bucks (which I never did), but
| for the intellectual pleasure and excitement of learning about
| an interesting problem in a new field that was amenable to
| mathematical analysis.
| kevindamm wrote:
| I witnessed the same bitter lesson on Kaggle, too. Late stage
| competitions were almost always won by a mixture of experts
| using the most recently successful models on that problem. Or
| a random forest of the same. It was a little
| frustrating/disappointing to the part of me that wanted to
| see insights, not just high scores.
| the_cat_kittles wrote:
| taking a helicopter to the top of a mountain is not the same
| thing as climbing it
| cladopa wrote:
    | True. Taking a helicopter is way more impressive. Everest
    | was climbed in 1953 and the first helicopter landing there
    | was in 2005. It is a way harder thing to do.
| jlev1 wrote:
| No, in your analogy _building_ a helicopter capable of
| going there is impressive. (Though I dispute the idea that
| it's more impressive simply because helicopters were
| invented more recently than mountain climbing.) In any
| case, riding in a helicopter remains passive and in no
| sense impressive.
| SwtCyber wrote:
| Understanding why something is true - that's the beauty of it
| piokoch wrote:
| "I can imagine a future where some future model is better at
| proving theorems than any human mathematician" Please do not
  | overestimate the power of an algorithm that is predicting the
  | next "token" (e.g. word) in a sequence of previously passed
  | words (tokens).
  |
  | This algorithm will happily predict whatever it was fed with:
  | just ask ChatGPT to write a review of a non-existent camera,
  | car, or washing machine, and you will receive a nicely written
  | list of advantages of that item, even though it does not exist.
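  |
  | (To make "predicting the next token" concrete, here is a toy
  | sketch. The scoring function is deterministic noise standing in
  | for a trained network, so this is only an illustration of the
  | loop, not of any real model:)
  |
  |     import random
  |
  |     VOCAB = ["this", "camera", "has", "great", "battery", "."]
  |
  |     def score_next(context):
  |         # Stand-in "model": scores every vocabulary item. A real
  |         # LLM computes these scores with a trained network.
  |         rng = random.Random(len(context))
  |         return [rng.random() for _ in VOCAB]
  |
  |     tokens = ["this", "camera"]
  |     for _ in range(5):
  |         # Greedily append the highest-scoring next token.
  |         scores = score_next(tokens)
  |         tokens.append(VOCAB[scores.index(max(scores))])
  |     print(" ".join(tokens))  # fluent-looking, grounded in nothing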
| whynotminot wrote:
| I can also write you a review of a non-existent camera or
| washing machine. Or anything else you want a fake review of!
| Does that mean I'm not capable of reasoning?
| alternatex wrote:
      | If you are not capable of distinguishing between truth and
      | lies, and not capable of reflection, which is the drive
      | behind learning from past mistakes, then yes.
| Philpax wrote:
| Luckily, we have ways of verifying mathematical results, and
| using that to improve our AI systems:
| https://deepmind.google/discover/blog/ai-solves-imo-
| problems...
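    |
    | (As a toy illustration of what "verifying" means here, not
    | taken from the linked work: Lean 4 accepts the proof term
    | below at compile time and would reject anything that does not
    | actually prove the stated theorem.)
    |
    |     theorem mul_comm' (a b : Nat) : a * b = b * a :=
    |       Nat.mul_comm a b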
| crazygringo wrote:
| > _what it is that mathematicians want from math: that the
  | primary aim isn't really to find out whether a result is true
| but why it's true._
|
| I really wish that had been my experience taking undergrad math
| courses.
|
| Instead, I remember linear algebra where the professor would
| prove a result by introducing an equation pulled out of thin
| air, plugging it in, showing that the result was true, and that
| was that. OK sure, the symbol manipulation proved it was true,
| but zero understanding of why. And when I'd ask professors
| about the why, I'd encounter outright hostility -- all that
| mattered was whether it was proven, and asking "why" was
| positively amateurish and unserious. It was irrelevant to the
| truth of a result. The same attitude prevailed when it got to
| quantum mechanics -- "shut up and calculate".
|
| I know there are mathematicians who care deeply about the why,
| and I have to assume it's what motivates many of them. But my
| actual experience studying math was the polar opposite. And so
| I find it very surprising to hear the idea of math being
| described as being more interested in _why_ than _what_. The
  | way I was taught didn't just not care about the why, but
| seemed actively contemptuous of it.
| Ar-Curunir wrote:
| Math-for-engineering and math-for-math courses are very
| different in emphasis and enthusiasm. Many engineering
| students tend to not care too much about proofs, so the "get
| it working" approach might be fine for them. Also, the math
| profs teaching the "math-for-engineering" courses tend to
| view them as a chore (which it kind of is; pure math doesn't
| get a lot of funding, so they have to pick up these
| engineering-oriented courses to grab a slice of that
| engineering money).
| losvedir wrote:
| I guess, what university and what level of math was that?
|
| I majored in math at MIT, and even at the undergraduate level
| it was more like what OP is describing and less like what
| you're saying. I actually took linear algebra twice since my
| first major was Economics before deciding to add on a math
| major, and the version of linear algebra for your average
| engineer or economist (i.e.: a bunch of plug and chug
| matrices-type stuff), which is what I assume you're referring
| to, was very different. Linear algebra for mathematicians was
| all about vector spaces and bases and such, and was very
| interesting and full of proofs. I don't think actually
| concretely multiplying matrices was even a topic!
|
| So I guess linear algebra is one of those topics where the
| math side is interesting and very much what all the
| mathematicians here are describing, but where it turned out
| to be _so_ useful for everything, that there 's a non-
| mathematician version of it which is more like what it sounds
| like you experienced.
| ssivark wrote:
| As Heidegger pointed out in _"The question concerning
| technology"_ the driving mindset behind industrial technology
| is to turn everything into a (fungible) standing resource --
| instrumentalizing it and robbing it of any intrinsic meaning.
|
| Maybe because CS is more engineering than science (at least as
| far as what drives the sociology), a lot of people approach AI
| from the same industrial perspective -- be it applications to
| math, science, art, coding, and whatever else. Ideas like _the
| bitter lesson_ only reinforce the zeitgeist.
| DeepSeaTortoise wrote:
| TBH, I think you're worrying about a future that is likely to
| become much more fun than boring.
|
| For actual research mathematics, there is no reason why an AI
  | (maybe not the current entirely statistical models) shouldn't be
| able to guide you through it exactly the way you prefer to.
| Then it's just a matter of becoming honest with your own
| desires.
|
| But it'll also vastly blow up the field of recreational
| mathematics. Have the AI toss a problem your way that you can
| solve in about a month. A problem involving some recent
| discoveries.
| A problem Franklin could have come up with. During a brothel
| visit. If he was on LSD.
| meroes wrote:
| My take is a bit different. I only have a math undergrad and only
| worked as an AI trainer so I'm quite "low" on the totem pole.
|
| I have listened to Colin McLarty talk about philosophy of math,
| and there _was_ a contingent of mathematicians who solely cared
| about solving problems via "algorithms". The period was the one
| just preceding modern math, roughly before the late 1800s, when
| the algorithmist, intuitionist, and logic-oriented
| mathematicians coalesced into a combination that values
| intuition, algorithms, and logic, leading to the modern way we
| do proofs and focus on proofs.
|
| These algorithmists didn't care about the so called "meaningless"
| operations that got an answer, they just cared they got useful
| results.
|
| I think the article downplays this side of math, which is the
| side AI will be best or most useful at. Having read AI proofs, I
| find them terrible. But if AI can prove something useful, even
| if the proof is grossly unappealing to the modern
| mathematician, there should be nothing to clamor about.
|
| This is the talk I have in mind
| https://m.youtube.com/watch?v=-r-qNE0L-yI&pp=ygUlQ29saW4gbWN...
| throw8404948k wrote:
| > This quest for deep understanding also explains a common
| experience for mathematics graduate students: asking an advisor a
| question, only to be told, "Read these books and come back in a
| few months."
|
| With an AI advisor I do not have this problem. It explains the
| parts I need, in a way I understand. If I study some complicated
| topic, AI shortens it from months to days.
|
| I was somewhat mathematically gifted when younger; sadly I often
| reinvented my own math, because I did not even know that part of
| math existed. Watching how DeepSeek thinks before answering is
| REALLY beneficial. It gives me many hints and references. Human
| teachers are like black boxes while teaching.
| sarchertech wrote:
| I think you're missing the point of what the advisor is saying.
| throw8404948k wrote:
| No, I get it.
|
| My point is that a human advisor does not have enough time to
| answer questions and properly explain the subject. I may get
| like 4 hours a week, if lucky. Books are just a cheap
| substitute for real dialog and reasoning with a teacher.
|
| Most ancient philosophy was written in the form of dialog. It
| is a much faster way to explain things.
|
| AI is a game changer. It shortens the feedback loop from a
| week to an hour! It makes mistakes (as humans do), but it is
| faster to find them. And finding them also develops cognitive
| skills.
|
| It is like programming in low-level C in Notepad 40 years
| ago, versus a high-level language with an IDE, VCS, unit
| tests...
|
| Or like farming resources in Rust. Boring repetitive
| grind...
| WhyOhWhyQ wrote:
| Books aren't just a lower quality version of dialog with a
| person though. They operate entirely differently. With very
| few people can you think quietly for 30 minutes straight
| without talking, but with a book you can put it down and
| come back to it at will.
|
| I don't think professional programmers were using notepad
| in 1985. Here's talk of IDEs from an article from 1985:
| https://dl.acm.org/doi/10.1145/800225.806843 It mentions
| Xerox Development Environment, from 1977
| https://en.wikipedia.org/wiki/Xerox_Development_Environment
|
| The feedback loop for programming / mathematics / other
| things I've studied was not a week in the year 2019. In
| that ancient time the feedback loop was maybe 10% slower
| than with any of these LLMs since you had to look at Google
| search.
| sarchertech wrote:
| The point is that time and struggle are required for
| understanding. The advisor isn't telling the student to go
| read these books because he doesn't have time to explain.
|
| He's saying go read these books, wrestle with the ideas, go
| to bed, dream about them, think about them in the shower.
| Repeat that until you understand enough to understand the
| answer.
|
| There's no shortcut here. If you had unlimited time with
| the advisor he couldn't just sit you down and make you
| understand in a few sessions.
| ohgr wrote:
| I suspect you probably don't understand it after that. You
| think you do.
|
| I thought I understood calculus until I realised I didn't. And
| that took a bit of a thwack in the face, really. I could use
| it but I didn't understand it.
| jgord wrote:
| Its not too late to hope for the current crop of LLMs to give
| rise to a benevolent, patient science based educator, like the
| "Young Ladies Illustrated Primer" of Neal Stephensons Diamond
| Age.
|
| We clearly will soon have the technology for that .. but it
| requires a rich opinionated benefactor, or inspired government
| agency to fund the development .. or perhaps it can be done as
| an Open model variant through crowdsourcing.
|
| An LLM personal assistant that detects my preferences and
| echoes my biases and massages my ego and avoids challenging me
| with facts and new ideas .. whose goal is to maximize
| screentime and credits for shareholder value .. seems to be
| where things are heading.
|
| I guess this is an argument for having open models.
| m0llusk wrote:
| > The last mathematicians considered to have a comprehensive view
| of the field were Hilbert and Poincare, over a century ago.
|
| Henri Cartan of the Bourbaki group had not only a more
| comprehensive view, but also a greater grasp of the potential
| of mathematical modeling and description.
| coffeeaddict1 wrote:
| I would also add Grothendieck to that list.
| woah wrote:
| > Perhaps most telling was the sadness expressed by several
| mathematicians regarding the increasing secrecy in AI research.
| Mathematics has long prided itself on openness and transparency,
| with results freely shared and discussed. The closing off of
| research at major AI labs--and the inability of collaborating
| mathematicians to discuss their work--represents a significant
| cultural clash with mathematical traditions. This tension recalls
| Michael Atiyah's warning against secrecy in research:
| "Mathematics thrives on openness; secrecy is anathema to its
| progress" (Atiyah, 1984).
|
| Engineering has always involved large amounts of both math and
| secrecy; what's different now?
| bo1024 wrote:
| AI is undergoing a transition from academic _research_ to
| industry _engineering_.
|
| (But the engineers want the benefits of academic research --
| going to conferences to give talks, credibility, intellectual
| prestige -- without paying the costs, e.g. actually sharing new
| knowledge and information.)
| analog31 wrote:
| It involves math at a research level, but from what I've
| observed, people in industry with engineering job titles make
| relatively little use of math. They will frequently tell you
| with that sheepish smile: "Oh, I'm not really a math person."
| Students are told with great confidence by older engineers that
| they'll never use their college math after they graduate.
|
| Not exactly AI by today's standards, but a lot of the math that
| they need has been rolled into their software tools. And Excel
| is quite powerful.
| xg15 wrote:
| > _One question generated particular concern: what would happen
| if an AI system produced a proof of a major conjecture like the
| Riemann Hypothesis, but the proof was too complex for humans to
| understand? Would such a result be satisfying? Would it advance
| mathematical understanding? The consensus seemed to be that while
| such a proof might technically resolve the conjecture, it would
| fail to deliver the deeper understanding that mathematicians
| truly seek._
|
| I think this is an interesting question. In a hypothetical SciFi
| world where we somehow provably know that AI is infallible and
| the results are always correct, you could imagine mathematicians
| grudgingly accepting some conjecture as "proven by AI" even
| without understanding the why.
|
| But for real-world AI, we know it can produce hallucinations and
| its reasoning chains can have massive logical errors. So if it
| came up with a proof that no one understands, how would we even
| be able to verify that the proof is indeed correct and not just
| gibberish?
|
| Or more generally, how do you verify a proof that you don't
| understand?
| tech_ken wrote:
| > Or more generally, how do you verify a proof that you don't
| understand?
|
| This is the big question! Computer-aided proof has been around
| forever. AI seems like just another tool from that box. Albeit
| one that has the potential to provide 'human-friendly' answers,
| rather than just a bunch of symbolic manipulation that must be
| interpreted.
| oersted wrote:
| Serious theorem-proving AIs always write the proof in a formal
| syntax where its correctness can be checked mechanically. The
| most popular such formal language is Lean, but there are many
| others. It's just like having a coding AI: it may write some
| function, and you check whether it compiles. If the AI writes a
| program/proof in Lean, it will only compile if the proof is
| correct. Checking the correctness of proofs is a much easier
| problem than coming up with the proof in the first place.
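|
| A minimal sketch of that compile-time guarantee, in plain Lean 4
| (no libraries assumed, and `two_plus_two` is a made-up name):
|
|       -- The kernel accepts this proof because `2 + 2` literally
|       -- computes to `4`:
|       theorem two_plus_two : 2 + 2 = 4 := rfl
|
|       -- Change the statement to `2 + 2 = 5` and the same `rfl`
|       -- proof is rejected: the file simply fails to compile.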
| nybsjytm wrote:
| > Checking the correctness of proofs is a much easier problem
| than coming up with the proof in the first place.
|
| Just so this isn't misunderstood: not much cutting-edge
| math is presently possible to code in Lean. The famous
| exceptions (such as the results by Clausen-Scholze and
| Gowers-Green-Manners-Tao) have special characteristics which
| make them much more ground-level and easier to code in Lean.
|
| What's true is that it's very easy to check whether a Lean-
| coded proof is correct. But it's hard and time-consuming to
| formulate most math as Lean code. It's something many AI
| research groups are working on.
| zozbot234 wrote:
| > The famous exceptions (such as the results by Clausen-
| Scholze and Gowers-Green-Manners-Tao) have special
| characteristics which make them much more ground-level and
| easier to code in Lean.
|
| "Special characteristics" is really overstating it. It's
| just a matter of getting all the prereqs formalized in Lean
| first. That's a bit of a grind to be sure, but the Mathlib
| effort for Lean has the bulk of the undergrad curriculum
| and some grad subjects formalized.
|
| I don't think AI will be all that helpful wrt. this kind of
| effort, but it might help in some limited ways.
| oersted wrote:
| Yes, I definitely concur; I have spent significant time with
| it.
|
| The main bottleneck is having the libraries that define the
| theorems and objects you need to operate at those levels.
| Everything is founded on axiomatic foundations and you need
| to build all of maths on top of that. Projects like mathlib
| are getting us there but it is a massive undertaking.
|
| It's not just that it is a lot of maths to go through, it's
| also that most maths has not really been proven to this
| degree of exactitude and there is much gap-filling to do
| when trying to translate existing proofs, or the reasoning
| style might be quite distant from how things are expressed in
| Lean. Some maths fields are also self-consistent islands
| that haven't yet been connected to the common axiomatic
| foundations, and linking them is a serious research
| endeavor.
|
| Lean does, though, allow you to declare theorems as axioms.
| It is not common practice, but you can skip high up the
| abstraction ladder and set up a foundation there if you
| are confident those theorems are correct. Still, defining
| those mathematical objects can be quite hard on its own,
| even if you skip the proving.
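|
| For illustration, a sketch of what that looks like (the
| statement here is a hand-written rendering of Fermat's Last
| Theorem, not taken from any library):
|
|       -- Declare a deep result as an axiom instead of proving it:
|       axiom flt : ∀ a b c n : Nat, 2 < n →
|         0 < a → 0 < b → 0 < c → a ^ n + b ^ n ≠ c ^ n
|
|       -- Downstream proofs may now use `flt` as if it were
|       -- proven; the kernel trusts it unconditionally.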
|
| Anyways, the complexity of the Lean language itself doesn't
| help either. The mode of thinking you need to have to
| operate it is much closer to programming than maths, and
| for those that think that the Rust borrow-checker is a
| pain, this is an order of magnitude more complex.
|
| Lean was a significant improvement in ergonomics compared
| to the previous generation (Coq, Isabelle, Agda...), but
| still I think there is a lot of work to be done to make it
| mathematician-friendly.
|
| Most reinforcement-learning AI for maths right now is
| focused on olympiad problems: hard, but quite low on the
| maths abstraction ladder. Often they don't even produce a
| proof; they just solve problems that end with an exact
| result, and you check only that. Perhaps the reasoning was
| incorrect, but if you do it for enough problems you can be
| confident that it is not just guessing.
|
| On the other side of the spectrum you have mathematicians
| like Tao just using ChatGPT for brainstorming. It might not
| be great at complex reasoning, but it has a much wider
| memory than you do and it can remind you of mathematical
| tools and techniques that could be useful.
| hewtronic wrote:
| > Anyways, the complexity of the Lean language itself
| doesn't help either. The mode of thinking you need to
| have to operate it is much closer to programming than
| maths, and for those that think that the Rust borrow-
| checker is a pain, this is an order of magnitude more
| complex.
|
| Could you elaborate on this? I'm interested to learn what
| the complexities are (beyond the mathematical concepts
| themselves).
| oersted wrote:
| Found something I wrote last year, see below, but off the
| top of my head:
|
| Something like 5 different DSLs in the same language,
| most of them in a purist functional style that is
| familiar neither to most mathematicians nor to most
| programmers, with type-checking an order of magnitude
| more strict and complex than in any programming language
| (that's the point of it), and rather obscure errors most
| of the time.
|
| It's really tedious to translate any non-trivial proof
| into this model, so usually you end up proving it again
| almost from scratch within Lean, and then it's really
| hard to understand as written. Much of the
| information to understand a proof is hidden away as
| runtime data that is usually displayed via a complex
| VSCode extension. It's quite difficult to understand from
| the proof code itself, and usually it doesn't look
| anything like a traditional mathematical proof (even if
| they kind of try by naming keywords with a similar
| terminology as in normal proofs and sprinkling some
| unicode symbols).
|
| I never-ever feel like I'm doing maths when I'm using
| Lean. I'm fighting with the syntax to figure out how to
| express mathematical concepts in the style that it likes,
| always having several different ways of achieving similar
| things (anti Zen of Python). And I'm fighting with the
| type-checker to transform this abstract expression into
| this other abstract expression (that's really what a
| proof is when it boils down to it), completely forgetting
| about the mathematical meaning, just moving puzzle pieces
| around with obscure tools.
|
| And even after all of this, it is so much more ergonomic
| than the previous generation of proof-assistants :)
|
| ---
|
| I think that the main reasons for Lean's complexity are:
|
| - That it has a very purist functional style and syntax,
| literally reflecting the Curry-Howard Correspondence
| (function = proof), rather than trying to bridge the gap
| to more familiar maths notation.
|
| - That it aims to be a proof assistant: it is
| fundamentally semi-automatic and interactive, and this
| makes it a hard design challenge.
|
| - A lot of the complexity is aimed at giving
| mathematicians the tools to express real research maths
| in it, on this it has been more successful than any
| alternative.
|
| - Because of this it has at least 5 different languages
| embedded in it: functional expressions of theorems,
| forward proofs with expression transformations, backward
| proofs with tactics, the tactic metaprogramming macro
| language, and the language to define data-types (and at
| least 4 kinds of data-types with different syntax). A
| tiny sketch contrasting two of these styles follows
| below.
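|
| Assuming nothing beyond core Lean 4, here is the same
| trivial statement proved once as a functional term and
| once as a backward tactic script:
|
|       -- Term style (forward, functional): the proof is a value.
|       example (p q : Prop) (hp : p) (hq : q) : p ∧ q := ⟨hp, hq⟩
|
|       -- Tactic style (backward, interactive): the proof is a
|       -- script that transforms the goal until it is closed.
|       example (p q : Prop) (hp : p) (hq : q) : p ∧ q := by
|         constructor
|         · exact hp
|         · exact hq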
| xg15 wrote:
| Ah, thanks for the clarification. Then the whole thing makes
| a lot more sense - though I'd say the outlook also becomes
| more optimistic.
|
| I thought the rhetoric sounded somewhat like the
| AGI/accelerationist folks who postulate some sort of eventual
| "godlike" AI whose thought processes are somehow
| fundamentally inaccessible to humans. So if you had a proof
| that was only understandable to this sort of AI, then
| mathematics as a discipline of understanding would be over
| for good.
|
| But this sounds like it would at least theoretically let you
| tackle the proof? Like, it's imaginable that some AI
| generates a proof that is several TB (or EB) in size but
| still validates - which would of course be impossible to
| understand for human readers in the way you can understand a
| paper. But then "understanding" that proof would probably
| become a field of research of its own, sort of like the
| "BERTology" papers that try to understand the semantics of
| specific hidden states in BERT (or similar approaches for
| GPTs).
|
| So I'd see an incomprehensible AI-generated proof not as the
| end of research on some conjecture, but more as a sort of
| guidance: unlike before, you now _know_ that the treasure
| chest exists and you even have its coordinates; you just
| don't have the route to that location. The task then becomes
| figuring out that route.
| nicf wrote:
| oersted's answer basically covers it, so I'm mostly just
| agreeing with them: the answer is that you use a computer. Not
| another AI model, but a piece of regular, old-fashioned
| software that has much more in common with a compiler than an
| LLM. It's really pretty closely analogous to the question "How
| do you verify that some code typechecks if you don't understand
| it?"
|
| In this hypothetical Riemann Hypothesis example, the only thing
| the human would have to check is that (a) the proof-
| verification software works correctly, and that (b) the
| statement of the Riemann Hypothesis at the very beginning is
| indeed a statement of the Riemann Hypothesis. This is orders of
| magnitude easier than proving the Riemann Hypothesis, or even
| than following someone else's proof!
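|
| To make (b) concrete: Mathlib (at the time of writing) contains
| a formal proposition named `RiemannHypothesis`, so the artifact
| a human referee would need to read is roughly one line, with
| `sorry` standing in for the hypothetical machine-generated
| proof term:
|
|       import Mathlib
|
|       -- The referee trusts the kernel and checks only that
|       -- `RiemannHypothesis` really states the conjecture.
|       theorem rh_resolved : RiemannHypothesis := sorry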
| kkylin wrote:
| As Feynman once said [0]: "Physics is like sex. Sure, it may give
| some practical results, but that's not why we do it." I don't
| think it's any different for mathematics, programming, a lot of
| engineering, etc.
|
| I can see that a day might come when we (research mathematicians,
| math professors, etc.) no longer exist as a profession, but
| there will continue to be mathematicians. What we'll do to make a
| living when that day comes, I have no idea. I suspect many others
| will also have to figure that out soon.
|
| [0] I've seen this attributed to _The Character of Physical Law_
| but haven't confirmed it.
| maroonblazer wrote:
| I'd not heard that Feynman quote before, so thanks for sharing;
| I love it.
|
| I'd include writing, art-, and music-making in that category.
| csomar wrote:
| Back to gambling? Mathematics is a relatively new career. My
| understanding is that these guys used to gamble on solving
| problems for a living.
| SwtCyber wrote:
| In a way, people don't do math just for its utility; they do it
| because it's beautiful, challenging, and deeply fulfilling.
| tech_ken wrote:
| Mathematics is, IMO, not the axioms, proofs, or theorems. It's
| the human process of organizing these things into conceptual
| taxonomies that appeal to what is ultimately an aesthetic
| sensibility (what "makes sense"), updating those taxonomies as
| human understanding and aesthetic preferences evolve, as well as
| practical considerations ('application'). Generating proofs of a
| statement is like a biologist identifying a new species, critical
| but also just the start of the work. It's the macropatterns
| connecting the organisms that lead to the really important
| science, not just the individual units of study alone.
|
| And it's not that AI can't contribute to this effort. I can
| certainly see how a chatbot research partner could be super
| valuable for lit review, brainstorming, and even 'talking things
| through' (much like mathematicians get value from talking aloud).
| This doesn't even touch on the ability to generate potentially
| valid proofs, which I do think has a lot of merit. But the idea
| that we could totally outsource the work to a generative model
| seems impossible by definition. The point of the labor is to
| develop _human_ understanding; removing the human from the loop
| changes
| the nature of the endeavor entirely (basically to algorithm
| design).
|
| Similar stuff holds for art (at a high level, and glossing over
| 'craft art'); IMO art is an expressive endeavor. One person
| communicating a hard-to-express feeling to an audience. GenAI can
| obviously create really cool pictures, and this can be grist for
| art, but without some kind of mind-to-mind connection and empathy
| the picture is ultimately just an artifact. The human context is
| what turns the artifact into art.
| EigenLord wrote:
| Is it really a cultural divide or is it an economic-incentives
| divide? Many AI researchers _are_ mathematicians. Any theoretical
| AI research paper will typically be filled with eye-wateringly
| dense math. AI dissolves into math the closer you inspect it.
| It's math all the way down. What differs are the incentives. Math
| rewards openness because there's no real concept of a
| "competitive edge", you're incentivized to freely publish and
| share your results as that is how you get recognition and
| hopefully a chance to climb the academic ladder. (Maybe there
| might be a competitive spirit between individual mathematicians
| working on the same problems, but this is different than systemic
| market competition.) AI is split between being a scientific and
| capitalist pursuit; sharing advances can mean the difference
| between making a fortune or being outmaneuvered by competitors.
| It contaminates the motives. This is where the AI researcher's
| typical desire for "novel results" comes from as well, they are
| inheriting the values of industry to produce economic
| innovations. It's a tidier explanation to tie the cultural
| differences to material motive.
| nybsjytm wrote:
| > Many AI researchers are mathematicians. Any theoretical AI
| research paper will typically be filled with eye-wateringly
| dense math. AI dissolves into math the closer you inspect it.
| It's math all the way down.
|
| There is a major caveat here. Most 'serious math' in AI papers
| is wrong and/or irrelevant!
|
| It's even the case for famous papers. Each lemma in Kingma and
| Ba's ADAM optimization paper is wrong, the geometry in McInnes
| and Healy's UMAP paper is mostly gibberish, etc...
|
| I think it's pretty clear that AI researchers (albeit surely
| with some exceptions) just don't know how to construct or
| evaluate a mathematical argument. Moreover the AI community (at
| large, again surely with individual exceptions) seems to just
| have pretty much no interest in promoting high intellectual
| standards.
| zipy124 wrote:
| I'd be interested to read about the gibberish in UMAP. I know
| the paper "An improvement of the convergence proof of the
| ADAM-Optimizer" for the lemma problem in the original ADAM,
| but hadn't heard of the second one. Do you have any further
| info on it?
| skinner_ wrote:
| Amazing! I looked into your ADAM claim, and it checks out.
| Thanks! Now I'm curious. If you have the time, could you
| please follow up with the 'etc...'?
| nybsjytm wrote:
| There's a related section about 'mathiness' in section 3.3
| of the article "Troubling Trends in Machine Learning
| Scholarship" https://arxiv.org/abs/1807.03341. I would say
| the situation has only gotten worse since that paper was
| written (2018).
|
| However the discussion there is more about math which is
| unnecessary to a paper, not so much about the problem of
| math which is unintelligible or, if intelligible, then
| incorrect. I don't have other papers off the top of my
| head, although by now it's my default expectation when I
| see a math-centric AI paper. If you have any such papers in
| mind, I could tell you my thoughts on it.
| Xcelerate wrote:
| > Each lemma in Kingma and Ba's ADAM optimization paper is
| wrong
|
| Wrong in the strict formal sense or do you mean even wrong in
| "spirit"?
|
| Physicists are well-known for using "physicist math" that
| isn't formally correct but can easily be made as such in a
| rigorous sense with the help of a mathematician. Are you
| saying the papers of the AI community aren't even correct "in
| spirit"?
| nybsjytm wrote:
| Much physicist math can't be made rigorous so easily! Which
| isn't to say that much of it doesn't still have great
| value.
|
| However the math in AI papers is indeed different. For
| example, Kingma and Ba's paper self-presents as having a
| theorem with a rigorous proof via a couple of lemmas proved
| by a chain of inequalities. The key thing is that the
| mathematical details are purportedly all present, but are
| just wrong.
|
| This isn't at all like what you see in physics papers,
| which might just openly lack detail, or might use
| mathematical objects whose existence or definition remain
| conjectural. There can be some legitimate problems with
| that, but at least in the best cases it can be very
| visionary. (Mirror symmetry is a standard example.) By
| contrast I'm not sure what 'spirit' is even possible in a
| detailed couple-page 'proof' that its authors probably
| don't even fully understand. In most cases, the 'theorem'
| isn't remotely interesting enough as pure mathematics and
| is also not of any serious relevance to the empirical
| problem at hand. It just adds an impressive-looking section
| to the paper.
|
| I do think it's possible that in the future there will be
| very interesting pure mathematics inspired by AI. But it
| hasn't been found yet, and I'm very certain it won't come
| from reconsidering these kinds of badly-written theorems
| and proofs.
| mcguire wrote:
| Fundamentally, mathematics is about understanding why something
| is true or false.
|
| Modern AI is about "well, it looks like it works, so we're
| golden".
| nothrowaways wrote:
| You can't fake influence
| Sniffnoy wrote:
| > As Gauss famously said, there is "no royal road" to
| mathematical mastery.
|
| This is not the point, but the saying "there is no royal road to
| geometry" is far older than Gauss! It goes back at least to
| Proclus, who attributes it to Euclid.
| troymc wrote:
| I never understood that quote until recently.
|
| The story goes that the (royal) pharaoh of Egypt wanted to
| learn geometry, but didn't want to have to read Euclid. He
| wanted a faster route. But, "there is no royal road to
| geometry."
| FilosofumRex wrote:
| The last Egyptian pharaoh was Nectanebo II, who ruled from
| 358 to approximately 340 BC. Alexander founded Alexandria in
| 331 BC as the crown jewel of his empire, where Euclid wrote
| his magnum opus, The Elements, around 300 BC!
|
| Unless "the royal pharaoh of Egypt" refers to Ptolemy I
| Soter, the Macedonian general who became the first ruler of
| the Ptolemaic Kingdom of Egypt after Alexander's death.
| troymc wrote:
| Yep, exactly. Here's a translation of Proclus:
|
| "He [Euclid] lived in the time of Ptolemy the First, for
| Archimedes, who lived after the time of the first Ptolemy,
| mentions Euclid. It is also reported that Ptolemy once
| asked Euclid if there was not a shorter road to geometry
| than through the _Elements_, and Euclid replied that there
| was no royal road to geometry."
|
| Source:
| http://aleph0.clarku.edu/~djoyce/java/elements/Euclid.html
| NooneAtAll3 wrote:
| I feel like this rumbling can be summarized as "Ai is
| engineering, not math" - and suddenly a lot of things make sense
|
| Why Ai field is so secretive? Because it's all trade secrets -
| and maybe soon to become patents. You don't give away precisely
| how semiconductor fabs work, only base research level of "this
| direction is promising"
|
| Why everyone is pushed to add Ai in? Because that's where the
| money is, that's where the product is.
|
| Why Ai needs results fast? Because it's production line, and you
| create and design stuff
|
| Even the core distinction mentioned - that Ai is about
| "speculation and possibility" - that's all about tool
| experimenting and prototyping. It's all about building and
| constructing. Aka Engineering/Technology letters of STEM
|
| I guess next step is to ask "what to do next?". IMO, math and Ai
| fields should realise the divide and slowly diverge, leaving each
| other alone on an arm's length. Just as engineers and programmers
| (not computer scientists) already do
| umutisik wrote:
| If AI can prove major theorems, it will likely be by employing
| heuristics similar to those the mathematical community employs
| when searching for proofs and understanding. Studying AI-
| generated proofs, with the help of AI to decipher their
| contents, will help humans build that 'understanding', if that
| is desired.
|
| An issue in these discussions is that mathematics is at once an
| art, a sport, and a science. And the development of AI that can
| build 'useful' libraries of proven theorems means different
| things for each. The sport of mathematics will be basically
| over. The art of mathematics will thrive as it becomes easier
| to explore the mathematical world. For the science of
| mathematics, it's hard to say; it's been kind of shaky for ~50
| years anyway, but it can only help.
| tylerneylon wrote:
| I agree with the overt message of the post -- AI-first folks tend
| to think about getting things working, whereas math-first people
| enjoy deeply understood theory. But I also think there's
| something missing.
|
| In math, there's an urban legend that the first Greek who proved
| sqrt(2) is irrational (sometimes credited to Hippasus of
| Metapontum) was thrown overboard to drown at sea for his
| discovery. This is almost certainly false, but it does capture
| the spirit of a mission in pure math. The unspoken dream is this:
|
| ~ "Every beautiful question will one day have a beautiful
| answer."
|
| At the same time, ever since the pure and abstract nature of
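| (Hippasus' discovery, incidentally, is one of those beautiful
| answers that has long since been machine-checked; assuming a
| Mathlib import, the formal statement and its proof fit on one
| line:)
|
|       import Mathlib
|
|       -- Mathlib's machine-checked version of Hippasus' result:
|       example : Irrational (Real.sqrt 2) := irrational_sqrt_two
|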
| Euclid's Elements, mathematics has gradually become a more
| diverse culture. We've accepted more and more kinds of "numbers:"
| negative, irrational, transcendental, complex, surreal,
| hyperreal, and beyond those into group theory and category
| theory. Math was once focused on measurement of shapes or
| distances, and went beyond that into things like graph theory and
| probabilities and algorithms.
|
| In each of these evolutions, people are implicitly asking the
| question:
|
| "What is math?"
|
| Imagine the work of introducing the sqrt() symbol into ancient
| mathematics. It's strange because you're defining a symbol as
| answering a previously hard question (what x has x^2=something?).
| The same might be said of integration as the opposite of a
| derivative, or of sine defined in terms of geometric questions.
| Over and over again, new methods become part of the canon by
| proving to be both useful and to have properties beyond their
| definition.
|
| AI may one day fall into this broader scope of math (or may
| already be there, depending on your view). If an LLM can give you
| a verified but unreadable proof of a conjecture, it's still true.
| If it can give you a crazy counterexample, it's still false. I'm
| not saying math should change, but that there's already a nature
| of change and diversity within what math is, and that AI seems
| likely to feel like a branch of this in the future; or a close
| cousin the way computer science already is.
| tylerneylon wrote:
| PS After I wrote my comment, I realized: of course, AI could
| one day get better at the things that make it not-perfect in
| pure math today:
|
| * AI could get better at thinking intuitively about math
| concepts.
|
| * AI could get better at looking for solutions people can
| understand.
|
| * AI could get better at teaching people about ideas that at
| first seem abstruse.
|
| * AI could get better at understanding its own thought, so that
| progress is not only a result, but also a method for future
| progress.
| lmpdev wrote:
| I did a fair bit of applied mathematics at uni
|
| What I think mathematicians should remind themselves is that a
| lot of prestigious mathematicians, the likes of Cantor or Erdos,
| often employed only a handful of "tricks"/heuristics for their
| proofs over their careers. They repeatedly and successfully
| applied these strategies to unsolved problems.
|
| I argue it would not take a tremendous jump in performance for
| an AI to begin its own journey, similar in kind to the greats'.
| The only thing standing in its way (as with all contemporary
| mathematicians) is the extreme specialisation required to reach
| the boundary of unsolved problems.
|
| AI need not be Euler to be an important tool and figure within
| mathematics
| joe_the_user wrote:
| _What I think mathematicians should remind themselves is that a
| lot of prestigious mathematicians, the likes of Cantor or Erdos,
| often employed only a handful of "tricks"/heuristics for their
| proofs over their careers._
|
| I know this claim is often made but it seems obvious that in
| this discussion, trick means something far wider and more
| subtle than any set computer program. In a lot of ways, "he
| just uses a few tricks" is akin to the way a mathematician will
| say "and the rest of the proof is elementary" (when it's still
| quite long and hard for anyone not versed in a given
| specialty). I mean, before category theory was formalized, the
| proofs that are now possible with it might have been classified
| as "all done with this trick", but grasping said trick was far
| from an elementary matter.
|
| _I argue it would not take a tremendous jump in performance for
| an AI to begin its own journey, similar in kind to the greats'.
| The only thing standing in its way (as with all contemporary
| mathematicians) is the extreme specialisation required to reach
| the boundary of unsolved problems._
|
| Not that LLMs can't do some impressive things but your
| narrative seems to anthropomorphize them in a less than useful
| way.
| lairv wrote:
| > A revealing anecdote shared at one panel highlighted the
| cultural divide: when AI systems reproduced known mathematical
| results, mathematicians were excited, while AI researchers were
| disappointed
|
| This seems very caricatural. One thing I've often heard in the AI
| community is that it'd be interesting to train models with an old
| data cutoff date (say 1900) and see whether the model is able to
| reinvent modern science.
| j2kun wrote:
| This is written in the first person, but there is no listed
| author and the website does not suggest an author...
| mkl wrote:
| It's in the usual location at the top of the page: "By Ralph
| Furman".
| wanderingmind wrote:
| Terence Tao recently gave a lecture on Machine Assisted Proofs
| that helped even common folk like me to understand the upcoming
| massive changes to math within the next decade. In particular,
| it's fascinating to see how AI, and especially Lean, might
| provide an avenue for large-scale collaboration in math
| research, bringing it on par with how research is done in other
| sciences.
|
| https://www.youtube.com/watch?v=5ZIIGLiQWNM
| FilosofumRex wrote:
| I find this cultural divide exists predominantly among
| mathematicians who consider existence proofs to be real
| mathematics.
|
| Mathematicians who practice constructive math and view existence
| proofs as mere intellectual artifacts tend to embrace AI,
| physics, engineering and even automated provers as worthy
| subjects.
| weitendorf wrote:
| If you look closely at the history of mathematics you can see
| that it worked similarly to current AI in many respects (not so
| much the secrecy) - people were oftentimes just concerned with
| whether something worked rather than why it worked (e.g. so that
| they could build a building or compute something), and the full
| theoretical understanding of something sometimes came
| significantly later than the knowledge of whether something was
| true or useful.
|
| In fact, the modern practice (the concept predates the practice
| of course, but was more of an opinion than a ritual) of
| mathematics as this ultimate understandable system of truth and
| elegance seemingly began in Ancient Greece with their practice of
| proofs and early development of mathematical "frameworks". It
| didn't reach its current level of rigor and sophistication until
| 100-150 years ago when Formalism became the dominant school of
| thought (https://en.wikipedia.org/wiki/Formalism_(philosophy_of_m
| athe...), spearheaded by a group of mathematicians who held even
| deeper beliefs that are often referred to as Mathematical
| Platonism (https://en.wikipedia.org/wiki/Mathematical_Platonism).
| (Note that these Wikipedia articles are not amazing explanations
| of the concepts, how they relate to realism, or how they
| developed historically, but they are adequate primers.)
|
| Of course, Godel proved that truths exist outside of these
| formal systems (only a couple of decades after mathematicians
| had started building a secret religion around worshipping Logos
| - these beliefs were pervasive; see e.g. Einstein's concept of
| God as a clockmaker or Erdos' references to "The Book"), which
| leaves us almost back where we started, where we might need to
| accept that there are some empirical results and patterns which
| "work" but we do not fully understand - and may never
| understand.
| Personally, I think this philosophically justifies not subjecting
| oneself to the burden of spending excess time understanding or
| proving things that have never been understood before - they may
| elude elegance (as with the 4-color proof) or even knowability.
|
| We can always look backwards and explain things later, and of
| course, it's a false dichotomy that some theorems or results must
| be fully understood and proven (or proven elegantly) before they
| can be considered true and used as a basis for further results.
| Perhaps it is unsatisfying to those who wish to truly understand
| the universe in terms of mathematical elegance, but that asshole
| used mathematical elegance to disprove mathematical elegance as a
| perfect tool for understanding the universe already, so take it
| up with him.
|
| Personally, as someone who at one time heavily considered
| pursuing a life in mathematics in part because of its ability to
| answer deep truths, I think Godel set us free: to understand or
| know things, we cannot rely solely on mathematics. Formal
| mathematics itself tells us that there are things we can only
| understand by discovering them, building them, or experimenting
| with them. There are truths that Cuda Cowboys can uncover that
| LaTex Liturgy cannot
| krnsll wrote:
| As a mathematician, I can't help but simmer each time I find the
| profession's insistence on grasping the how's and why's of
| matters to be dismissed as pedantry. Actionable results are
| important, but absent understanding we will never have any grasp
| on the downstream impact of such progress.
|
| I fear AI is just going to lower our general epistemic standards
| as a society, and we will forget essential truth-verifying
| techniques in the technical (and other) realms altogether.
| Needless to say, this has an impact on our society's ethical
| and, effectively, legal foundations, because ultimately, without
| clarity on the how's and why's, it will be near impossible to
| justly assign damages.
| FabHK wrote:
| > One striking feature of mathematical culture that came up was
| the norm of alphabetical authorship. [...] There are some
| exceptions, like Adleman insisting on being last in the RSA
| paper.
|
| lol, took me a second to get the plausible reason for that
| SwtCyber wrote:
| If AI-generated proofs become incomprehensible to humans, do they
| still count as _math_ in the traditional sense?
| sigmoid10 wrote:
| We already have proofs by exhaustion that could only ever be
| verified using computers. Some people would argue they are not
| "elegant" but I don't think anyone would argue they are not
| math.
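|
| A toy version of the idea, in Lean with a made-up example: the
| kernel checks every case mechanically, and the "proof" carries
| no insight at all:
|
|       -- Brute-force check of 100 cases; `decide` asks the
|       -- kernel to compute its way through all of them:
|       example : ∀ n : Fin 100, (n.val ^ 2 + n.val) % 2 = 0 := by
|         decide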
| randomNumber7 wrote:
| Imo mathematicians want to be seen as very smart, when a lot of
| AI is actually easy to understand with good abstract and logical
| thinking and linear algebra.
| bwfan123 wrote:
| whaa? have you read the article?
| recursive wrote:
| I've known a few mathematicians, and none of them cared about
| being perceived as smart. They wanted to figure stuff out,
| which, incidentally, tended to actually make them smart.
| trostaft wrote:
| > Unlike many scientific fields, mathematics has no concept of
| "first author" or "senior author"; contributors are simply listed
| alphabetically.
|
| I don't think this is (generally) true? Speaking as a math
| postdoc right now, at least in my field of computational
| mathematics there's definitely a notion of first author. Though,
| a note of individual contributions at the bottom of the paper is
| becoming more common.
| nicf wrote:
| I was an algebraic geometer when I was still doing research in
| the field, and it was definitely true in that corner of the
| world. Authors are alphabetical, and you usually cite the paper
| by listing them all, no "et al"'s. I think I didn't even know
| there was such a thing as "first author" until I worked in ML.
| bwfan123 wrote:
| I had an aha moment recently. An excited AI researcher claimed:
| wow, Claude could solve this IMO problem. Then a mathematician
| pointed out a flaw which the AI researcher had overlooked. The
| AI researcher then prompted the AI with the error, and the AI
| produced another proof he thought worked, but it was again
| flawed. The AI played on the researcher's naivete.
|
| Long story short, current AI is doing cargo-cult math - i.e.,
| going through the motions with mimicry. Experts can see through
| it, but excited AI hypesters are blind, and lap it up. Even
| AlphaGeometry (with a built-in theorem prover) is largely doing
| brute-force search of a limited axiomatized domain. This is not
| to say AI is not useful, just that the hype exceeds the actual.
| xanderlewis wrote:
| Everything you've said is obvious, and yet really needs saying
| and isn't said enough.
| calibas wrote:
| > While mathematicians traditionally pursue understanding for its
| own sake, industry researchers must ultimately deliver products,
| features, or capabilities that create value for their
| organizations.
|
| This really isn't about mathematics or AI, this is about the gap
| between academia and business. The academic wants to pursue
| knowledge for the sake of knowledge, while a business wants to
| make money.
|
| Compare to computer science or engineering, where business has
| near completely pervaded the fields. I've never heard anybody
| lamenting their inability to "pursue understanding for its own
| sake" and when someone does advance the theory, there's also a
| conversation about how to make it profitable. The academic aspect
| isn't gone, but it's found a way to coexist with the business
| aspect, for better or worse.
|
| Honestly it sounds like mathematicians have had things pretty
| good if this is one of their biggest complaints.
| BrenBarn wrote:
| I think a lot of this is not so much "math vs. AI" as "anyone who
| cares about anything other than making as much money as possible
| vs. anyone who only cares about making as much money as
| possible".
___________________________________________________________________
(page generated 2025-03-13 23:02 UTC)