[HN Gopher] The cultural divide between mathematics and AI
___________________________________________________________________
The cultural divide between mathematics and AI
Author : rfurmani
Score : 130 points
Date : 2025-03-12 16:07 UTC (6 hours ago)
(HTM) web link (sugaku.net)
(TXT) w3m dump (sugaku.net)
| mistrial9 wrote:
| > Throughout the conference, I noticed a subtle pressure on
| presenters to incorporate AI themes into their talks, regardless
| of relevance.
|
| This is well-studied and not unique to AI, the USA, the English
| language, or even Western traditions. Here is what I mean: a book called
| Diffusion of Innovations by Rogers explains a history of
| technology introduction.. if the results are tallied in
| population, money or other prosperity, the civilizations and
| their language groups that have systematic ways to explore and
| apply new technology are "winners" in the global context.
|
| AI is a powerful lever. The meta-conversation here might be
| around concepts of cancer, imbalance and chairs on the deck of
| the Titanic.. but this is getting off-topic for maths.
| golol wrote:
  | I think another way to think about this is that the subtle
  | pressure to consider AI in your AI-unrelated research is just
  | respecting the bitter lesson. You need to at least consider how
  | a data-driven approach might work for your problem. It could
  | totally wipe you out - make your approach pointless. That's the
  | bitter lesson.
| golol wrote:
| Nice article. I didn't read every section in detail but I think
| it makes a good point that AI researchers maybe focus too much on
| the thought of creating new mathematics, while being able to
| reproduce, index, or formalize existing mathematics is really
| the key goal imo. This will then also lead to new mathematics. I
| think the more you advance in mathematical maturity the bigger
| the "brush" becomes with which you make your strokes. As an
| undergrad a stroke can be a single argument in a proof, or a
| simple Lemma. As a professor it can be a good guess for a well-
| posedness strategy for a PDE. I think AI will help humans find
| new mathematics with much bigger brush strokes. If you need to
| generalize a specific inequality on the whole space to Lipschitz
| domains, perhaps AI will give you a dozen pages, perhaps even of
| formalized Lean, in a single stroke. If you are a scientist and
| consider an ODE model, perhaps AI can give you formally verified
| error and convergence bounds using your specific constants. You
| switch to a probabilistic setting? Do not worry. All of these are
| examples of not very deep but tedious and non-trivial
| mathematical busywork that can take days or weeks. The
| mathematical ability necessary to do this has in my opinion
| already been demonstrated by o3 in rare cases. It cannot piece
| things together yet, though. But GPT-4 could not piece together
| proofs to undergrad homework problems while o3 now can. So I
| believe improvement is quite possible.
| esafak wrote:
| AI is young, and at the center of the industry spotlight, so it
| attracts a lot of people who are not in it to understand
| anything. It's like when the whole world got on the Internet, and
| the culture suddenly shifted. It's a good thing; you just have to
| dress up your work in the right language, and you can get
| funding, like when Richard Bellman coined the term "dynamic
| programming" to make it palatable to the Secretary of Defense,
| Charles Wilson.
| deadbabe wrote:
| AI has been around since at least the 1970s.
| tromp wrote:
    | Or 1950 if you consider the Turing Test, or 1912 if you
| consider Torres Quevedo's machine El Ajedrecista that plays
| rook endings. The illusion of AI dates back to 1770's The
| Turk.
| abstractbill wrote:
| Yes, and all of these dates would be considered "young" by
| most mathematicians!
| bluefirebrand wrote:
| Not in any way that is relevant to the conversation about AI
| that has exploded this decade
| nicf wrote:
| I'm a former research mathematician who worked for a little while
| in AI research, and this article matched up very well with my own
| experience with this particular cultural divide. Since I've spent
| a lot more time in the math world than the AI world, it's very
| natural for me to see this divide from the mathematicians'
| perspective, and I definitely agree that a lot of the people I've
| talked to on the other side of this divide don't seem to quite
| get what it is that mathematicians want from math: that the
| primary aim isn't really to find out _whether_ a result is true
| but _why_ it's true.
|
| To be honest, it's hard for me not to get kind of emotional about
| this. Obviously I don't know what's going to happen, but I can
| imagine a future where some future model is better at proving
| theorems than any human mathematician, like the situation, say,
| chess has been in for some time now. In that future, I would
| still care a lot about learning why theorems are true --- the
| process of answering those questions is one of the things I find
| the most beautiful and fulfilling in the world --- and it makes
| me really sad to hear people talk about math being "solved", as
| though all we're doing is checking theorems off of a to-do list.
| I often find the conversation pretty demoralizing, especially
| because I think a lot of the people I have it with would probably
| really enjoy the thing mathematics actually is much more than the
| thing they seem to think it is.
| jasonhong wrote:
| Interestingly, the main article mentions Bill Thurston's paper
| "On Proof and Progress in Mathematics"
| (https://www.math.toronto.edu/mccann/199/thurston.pdf), but
| doesn't mention a quote from that paper that captures the
| essence of what you wrote:
|
| > "The rapid advance of computers has helped dramatize this
| point, because computers and people are very different. For
| instance, when Appel and Haken completed a proof of the 4-color
| map theorem using a massive automatic computation, it evoked
| much controversy. I interpret the controversy as having little
| to do with doubt people had as to the veracity of the theorem
| or the correctness of the proof. Rather, it reflected a
| continuing desire for human understanding of a proof, in
| addition to knowledge that the theorem is true."
|
  | Incidentally, I've also seen a similar problem when reviewing
  | HCI and computer systems papers. Ok sure, this deep learning neural
| net worked better, but what did we as a community actually
| learn that others can build on?
| nicf wrote:
| The Four Color Theorem is a great example! I think this story
    | is often misrepresented as one where mathematicians _didn't
| believe_ the computer-aided proof. Thurston gets the story
| right: I think basically everyone in the field took it as
| resolving the _truth_ of the Four Color Theorem --- although
    | I don't think this was really in serious doubt --- but in an
| incredibly unsatisfying way. They wanted to know what
| underlying pattern in planar graphs forces them all to be
| 4-colorable, and "well, we reduced the question to these tens
| of thousands of possible counterexamples and they all turned
| out to be 4-colorable" leaves a lot to be desired as an
| answer to that question. (This is especially true because the
    | _Five_ Color Theorem does have a very beautiful proof. I
    | teach at a math enrichment program for high schoolers on
| weekends, and the result was simple enough that we could get
| all the way through it in class.)
| troymc wrote:
| Another example akin to the proof of the 4-color map theorem
| was the proof of the Kepler conjecture [1], i.e. "Grocers
| stack their oranges in the densest-possible way."
|
| We "know" it's true, but only because a machine ground
| mechanically through lots of tedious cases. I'm sure most
| mathematicians would appreciate a simpler and more elegant
| proof.
|
| [1] https://en.wikipedia.org/wiki/Kepler_conjecture
| Henchman21 wrote:
| I've worked in tech my entire adult life and boy do I feel this
| deep in my soul. I have slowly withdrawn from the higher-level
| tech designs and decision making. I usually disagree with all
| of it. Useless pursuits made only for resume fodder. Tech
| decisions made based on the bonus the CTO gets from the vendors
| (Superbowl tickets anyone?) not based on the suitability of the
| tech.
|
| But absolutely worst of all is the arrogance. The hubris. The
  | thinking that because some human somewhere has figured a thing
  | out that it's then just implicitly _known_ by these types. The
| casual disregard for their fellow humans. The lack of true care
| for anything and anyone they touch.
|
  | Move fast and break things!! _Even when it's the society you
  | live in_.
|
| That arrogance and/or hubris is just another type of stupidity.
| bluefirebrand wrote:
    | > Move fast and break things!! Even when it's the society you
    | live in.
|
| This is the part I don't get honestly
|
| Are people just very shortsighted and don't see how these
| changes are potentially going to cause upheaval?
|
| Do they think the upheaval is simply going to be worth it?
|
| Do they think they will simply be wealthy enough that it
| won't affect them much, they will be insulated from it?
|
| Do they just never think about consequences at all?
|
| I am trying not to be extremely negative about all of this,
    | but the speed at which things are moving makes me think we'll
| hit the cliff before even realizing it is in front of us
|
| That's the part I find unnerving
| feoren wrote:
| > Do they think they will simply be wealthy enough that it
| won't affect them much, they will be insulated from it?
|
| Yes, partly that. Mostly they only care about their rank.
| Many people would burn down the country if it meant they
| could be king of the ashes. Even purely self-interested
| people should welcome a better society for all, because a
| rising tide lifts all boats. But not only are they selfish,
| they're also very stupid, at least in this aspect. They
| can't see the world as anything but zero sum, and
| themselves as either winning or losing, so they must win at
| all costs. And those costs are huge.
| brobdingnagians wrote:
| Reminds me of the Paradise Lost quote, "Better to rule in
| Hell, than serve in Heaven", such an insightful book for
| understanding a certain type of person from Milton.
| Beautiful imagery throughout too, highly recommend.
| Henchman21 wrote:
| > Do they just never think about consequences at all?
|
| Yes, I think this is it. Frequently using social media and
| being "online" leads to less critical thought, less
| thinking overall, smaller window that you allow yourself to
| think in, thoughts that are merely sound bites not fully
      | fleshed out thoughts, and so on. One's thoughts can easily
      | become a milieu of memes and falsehoods. A person whose
      | mind is in that state will do whatever anyone suggests for
| that next dopamine hit!
|
| I am guilty of it all myself which is how I can make this
| claim. I too fear for humanity's future.
| unsui wrote:
| I've called this out numerous times (and gotten downvoted
| regularly), with what I call the "Cult of Optimization"
|
| aka optimization-for-its-own-sake, aka pathological
| optimization.
|
| It's basically meatspace internalizing and adopting the
| paperclip problem as a "good thing" to pursue, screw
| externalities and consequences.
|
      | And, lo-and-behold, my read for why it gets downvoted here
      | is that a lot of folks on HN subscribe to this mentality,
      | as it is part of the HN ethos to optimize, often
      | pathologically.
| jmount wrote:
| Love your point. "Lack of alignment" affects more than
| just AIs.
| chasd00 wrote:
| Humans like to solve problems and be at the top of the
| heap. Such is life, survival of the fittest after all. AI
| is a problem to solve, whoever gets to AGI first will be at
| the top of the heap. It's a hard drive to turn off.
| bluefirebrand wrote:
| In theory this is actually pretty easy to "turn off"
|
| You flatten the heap
|
| You decrease or eliminate the reward for being at the top
|
| You decrease or eliminate the penalty for being at the
| bottom
|
| The main problem is that we haven't figured out a good
| way to do this without creating a whole bunch of other
| problems
| Dracophoenix wrote:
| > Are people just very shortsighted and don't see how these
| changes are potentially going to cause upheaval?
|
| > Do they think the upheaval is simply going to be worth
| it?
|
| All technology causes upheaval. We've benefited from many
| of these upheavals to the point where it's impossible for
| most to imagine a world without the proliferation of
| movable type, the internal combustion engine, the computer,
| or the internet. All of your criticisms could have easily
| been made word for word by the Catholic Church during the
| medieval era. The "society" of today is no more of a sacred
      | cow than its antecedent incarnations were half a millennium
| ago. As history has shown, it must either adapt, disperse,
| or die.
| bluefirebrand wrote:
| > The "society" of today is no more of a sacred cow than
        | its antecedent incarnations were half a millennium ago. As
| history has shown, it must either adapt, disperse, or die
|
| I am not concerned about some kind of "sacred cow" that I
| want to preserve
|
| I am concerned about a future where those with power no
| longer need 90% of the population so they deploy
| autonomous weaponry that grinds most of the population
| into fertilizer
|
| And I'm concerned there are a bunch of short sighted
| idiots gleefully building autonomous weaponry for them,
| thinking they will either be spared from mulching, or be
| the mulchers
|
| Edit: The thing about appealing to history is that it
| also shows that when upper classes get too powerful they
| start to lose touch with everyone else, and this often
| leads to turmoil that affects the common folk most
|
| As one of the common folk, I'm pretty against that
| andrewl wrote:
| Exactly. It was described in Chesterton's Fence:
|
| There exists in such a case a certain institution or law;
| let us say, for the sake of simplicity, a fence or gate
| erected across a road. The more modern type of reformer
| goes gaily up to it and says, "I don't see the use of this;
| let us clear it away." To which the more intelligent type
| of reformer will do well to answer: "If you don't see the
| use of it, I certainly won't let you clear it away. Go away
| and think. Then, when you can come back and tell me that
| you do see the use of it, I may allow you to destroy it."
| dkarl wrote:
| > But absolutely worst of all is the arrogance. The hubris.
| The thinking that because some human somewhere has figured a
| thing out that its then just implicitly known by these types.
|
| I worked in an organization afflicted by this and still have
| friends there. In the case of that organization, it was
| caused by an exaggerated glorification of management over
| ICs. Managers truly did act according to the belief, and show
| every evidence of sincerely believing in it, that their
| understanding of every problem was superior to the sum of the
| knowledge and intelligence of every engineer under them in
| the org chart, not because they respected their engineers and
| worked to collect and understand information from them, but
| because managers are a higher form of humanity than ICs, and
| org chart hierarchy reflects natural superiority. Every
| conversation had to be couched in terms that didn't
| contradict those assumptions, so the culture had an extremely
| high tolerance for hand-waving and BS. Naturally this created
| cover for all kinds of selfish decisions based on politics,
| bonuses, and vendor perks. I'm very glad I got out of there.
|
| I wouldn't paint all of tech with the same brush, though.
| There are many companies that are better, much better. Not
| because they serve higher ideals, but because they can't
| afford to get so detached from reality, because they'd fail
| if they didn't respect technical considerations and respect
| their ICs.
| dang wrote:
| I'm sure that many of us sympathize, but can you please
| express your views without fulminating? It makes a big
| difference to discussion quality, which is why this is in the
| site guidelines:
| https://news.ycombinator.com/newsguidelines.html.
|
| It's not just that comments that vent denunciatory feelings
| are lower-quality themselves, though usually they are. It's
| that they exert a degrading influence on the rest of the
| thread, for a couple reasons: (1) people tend to respond in
| kind, and (2) these comments always veer towards the generic
| (e.g. "lack of true care for anything and anyone", "just
| another type of stupidity"), which is bad for curious
| conversation. Generic stuff is repetitive, and indignant-
| generic stuff doubly so.
|
| By the time we get further downthread, the original topic is
| completely gone and we're into "glorification of management
| over ICs" (https://news.ycombinator.com/item?id=43346257).
| Veering offtopic can be ok when the tangent is even more
| interesting (or whimsical) than the starting point, but most
| tangents aren't like that--mostly what they do is replace a
| more-interesting-and-in-the-key-of-curiosity thing with a
| more-repetitive-and-in-the-key-of-indignation thing, which is
| a losing trade for HN.
| lordleft wrote:
| I'm not a mathematician so please feel free to correct me...but
| wouldn't there still be an opportunity for humans to try to
| understand why a proof solved by a machine is true? Or are you
  | afraid that the culture of mathematics will shift towards being
  | impatient about these sorts of questions?
| nicf wrote:
| Well, it depends on exactly what future you were imagining.
| In a world where the model just spits out a totally
| impenetrable but formally verifiable Lean proof, then yes,
| absolutely, there's a lot for human mathematicians to do. But
| I don't see any particular reason things would have to stop
| there: why couldn't some model also spit out nice, beautiful
| explanations of why the result is true? We're certainly not
| there yet, but if we do get there, human mathematicians might
| not really be producing much of anything. What reason would
| there be to keep employing them all?
|
| Like I said, I don't have any idea what's going to happen.
| The thing that makes me sad about these conversations is that
| the people I talk to sometimes don't seem to have any
| appreciation for the thing they say they want to dismantle.
| It might even be better for humanity on the whole to arrive
| in this future; I'm not arguing that one way or the other!
| Just that I think there's a chance it would involve losing
| something I really love, and that makes me sad.
| GPerson wrote:
| I don't think the advent of superintelligence will lead to
| increased leisure time and increased well-being / easier
| lives. However, if it did I wouldn't mind redundantly
| learning the mathematics with the help of the AI. It's
| intrinsically interesting and ultimately I don't care to
| impress anybody, except to the extent it's necessary to be
| employable.
| nicf wrote:
| I would love that too. In fact, I already spend a good
| amount of my free time redundantly learning the
| mathematics that was produced by humans, and I have fun
| doing it. The thing that makes me sad to imagine --- and
| again, this is not a prediction --- is the loss of the
| community of human mathematicians that we have right now.
| nonethewiser wrote:
| >But I don't see any particular reason things would have to
| stop there: why couldn't some model also spit out nice,
| beautiful explanations of why the result is true?
|
      | Oh... I didn't anticipate this would bother you. Would it be
      | fair to say that it's not that you like understanding why
      | it's true, because you have that here, but that you like the
      | process of discovering why?
      |
      | Perhaps that's what you meant originally. But my
      | understanding was that you were primarily just concerned
      | with understanding why, not being the one to discover why.
| nicf wrote:
| This is an interesting question! You're giving me a
| chance to reflect a little more than I did when I wrote
| that last comment.
|
| I can only speak for myself, but it's not that I care a
| lot about me personally being the first one to discover
| some new piece of mathematics. (If I did, I'd probably
| still be doing research, which I'm not.) There is
| something very satisfying about solving a problem for
| yourself rather than being handed the answer, though,
| even if it's not an original problem. It's the same
| reason some people like doing sudokus, and why those
| people wouldn't respond well to being told that they
| could save a lot of time if they just used a sudoku
| solver or looked up the answer in the back of the book.
|
| But that's not really what I'm getting at in the sentence
| you're quoting --- people are still free to solve sudokus
| even though sudoku solvers exist, and the same would
| presumably be true of proving theorems in the world we're
| considering. The thing I'd be most worried about is the
| destruction of the community of mathematicians. If math
| were just a fun but useless hobby, like, I don't know,
| whittling or something, I think there would be way fewer
| people doing it. And there would be even fewer people
| doing it as deeply and intensely as they are now when
| it's their full-time job. And as someone who likes math a
| lot, I don't love the idea of that happening.
| mvieira38 wrote:
    | That is kind of hard to do. Human reasoning and computer
    | reasoning are very different, enough so that we can't really
    | grasp it. Take chess, for example. Humans tend to reason in
| terms of positions and tactics, but computers just brute
| force it (I'm ignoring stuff like Alpha Zero because
| computers were way better than us even before that). There
| isn't much to learn there, so GMs just memorize the computer
| moves for so and so situation and then go back to their past
| heuristics after n moves
| Someone wrote:
| > so GMs just memorize the computer moves for so and so
| situation and then go back to their past heuristics after n
| moves
|
| I think they also adjust their heuristics, based on looking
| at thousands of computer moves.
| mcguire wrote:
| Many years ago I heard a mathematician speaking about some open
| problem and he said, "Sure, it's possible that there is a
| simple solution to the problem using basic techniques that
| everyone has just missed so far. And if you find that solution,
| mathematics will pat you on the head and tell you to run off
| and play.
|
| "Mathematics advances by solving problems using new techniques
| because those techniques open up new areas of mathematics."
| psunavy03 wrote:
| That seems like a justification that is right on the knife's
| edge of being a self-licking ice cream cone.
| jvans wrote:
| in poker AI solvers tell you what the optimal play is and it's
| your job to reverse engineer the principles behind it. It cuts
| a lot of the guess work out but there's still plenty of hard
| work left in understanding the why and ultimately that's where
| the skill comes in. I wonder if we'll see the same in math
| optimalsolver wrote:
| If the shortest proof for some theorem is several thousand
| pages long and beyond the ability of any biological mind to
| comprehend, would mathematicians not care about it?
|
| Which is to say, if you only concern yourself with theorems
| which have short, understandable proofs, aren't you cutting
| yourself off from vast swathes of math space?
| nicf wrote:
| Hm, good question. It depends on what you mean. If you're
| asking about restricting which theorems we try to prove, then
| we definitely _are_ cutting ourselves off from vast swathes
    | of math space, and we're doing it on purpose! The article
| we're responding to talks about mathematicians developing
| "taste" and "intuition", and this is what I think the author
| meant --- different people have different tastes, of course,
| but most conceivable true mathematical statements are ones
| that everyone would agree are completely uninteresting;
| they're things like "if you construct these 55 totally
| unmotivated mathematical objects that no one has ever cared
| about according to these 18 random made-up rules, then none
| of the following 301 arrangements are possible."
|
| If you're talking about questions that are well-motivated but
| whose _answers_ are ugly and incomprehensible, then a milder
| version of this actually happens fairly often --- some major
| conjecture gets solved by a proof that everyone agrees is
    | right but which also doesn't shed much light on why the
| thing is true. In this situation, I think it's fair to
| describe the usual reaction as, like, I'm definitely happy to
| have the confirmation that the thing is true, but I would
| much rather have a nicer argument. Whoever proved the thing
| in the ugly way definitely earns themselves lots of math
| points, but if someone else comes along later and proves it
| in a clearer way then they've done something worth
| celebrating too.
|
| Does that answer your question?
| meroes wrote:
| My take is a bit different. I only have a math undergrad and only
| worked as an AI trainer so I'm quite "low" on the totem pole.
|
| I have listened to Colin McLarty talk about philosophy of math,
| and there _was_ a contingent of mathematicians who solely cared
| about solving problems via "algorithms". The time period was just
| before the modern era of math that began in the late 1800s,
| roughly, when the algorithmists, intuitionists, and logically
| oriented mathematicians coalesced into a combination of the
| intuitive, the algorithmic, and the logical, leading to the
| modern way we do proofs and focus on proofs.
|
| These algorithmists didn't care about the so-called "meaningless"
| operations that got an answer; they just cared that they got
| useful results.
|
| I think the article downplays this side of math, which is the
| side AI will be best or most useful at. Having read AI proofs,
| they are terrible in my opinion. But if AI can prove something
| useful, even if the proof is grossly unappealing to the modern
| mathematician, there should be nothing to clamor about.
|
| This is the talk I have in mind
| https://m.youtube.com/watch?v=-r-qNE0L-yI&pp=ygUlQ29saW4gbWN...
| throw8404948k wrote:
| > This quest for deep understanding also explains a common
| experience for mathematics graduate students: asking an advisor a
| question, only to be told, "Read these books and come back in a
| few months."
|
| With an AI advisor I do not have this problem. It explains the
| parts I need, in a way I understand. If I study some complicated
| topic, AI shortens it from months to days.
|
| I was somewhat mathematically gifted when younger; sadly, I often
| reinvented my own math because I did not even know that part of
| math existed. Watching how DeepSeek thinks before answering is
| REALLY beneficial. It gives me many hints and references. Human
| teachers are like black boxes while teaching.
| sarchertech wrote:
| I think you're missing the point of what the advisor is saying.
| throw8404948k wrote:
| No, I get it.
|
    | My point is that a human advisor does not have enough time to
    | answer questions and correctly explain the subject. I may get
    | like 4 hours a week, if lucky. Books are just a cheap
    | substitute for real dialog and reasoning with a teacher.
|
    | Most ancient philosophy texts were in the form of dialog. It
    | is much faster to explain things that way.
|
    | AI is a game changer. It shortens the feedback loop from a
    | week to an hour! It makes mistakes (as humans do), but it is
    | faster to find them. And it also develops cognitive skills
    | while finding them.
|
    | It is like programming in low-level C in Notepad 40 years
    | ago, versus a high-level language with an IDE, VCS, unit
    | tests...
|
    | Or like farming resources in Rust. Boring repetitive
    | grind...
| WhyOhWhyQ wrote:
| Books aren't just a lower quality version of dialog with a
| person though. They operate entirely differently. With very
| few people can you think quietly for 30 minutes straight
| without talking, but with a book you can put it down and
| come back to it at will.
|
| I don't think professional programmers were using notepad
| in 1985. Here's talk of IDEs from an article from 1985:
| https://dl.acm.org/doi/10.1145/800225.806843 It mentions
| Xerox Development Environment, from 1977
| https://en.wikipedia.org/wiki/Xerox_Development_Environment
|
      | The feedback loop for programming / mathematics / other
      | things I've studied was not a week in the year 2019. In
      | that ancient time the feedback loop was maybe 10% slower
| than with any of these LLMs since you had to look at Google
| search.
| ohgr wrote:
| I suspect you probably don't understand it after that. You
| think you do.
|
  | I thought I understood calculus until I realised I didn't. And
  | that took a bit of a thwack in the face, really. I could use it
  | but I didn't understand it.
| m0llusk wrote:
| > The last mathematicians considered to have a comprehensive view
| of the field were Hilbert and Poincare, over a century ago.
|
| Henri Cartan of the Bourbaki had not only a more comprehensive
| view, but a greater scope of the potential of mathematical
| modeling and description.
| coffeeaddict1 wrote:
| I would also add Grothendieck to that list.
| woah wrote:
| > Perhaps most telling was the sadness expressed by several
| mathematicians regarding the increasing secrecy in AI research.
| Mathematics has long prided itself on openness and transparency,
| with results freely shared and discussed. The closing off of
| research at major AI labs--and the inability of collaborating
| mathematicians to discuss their work--represents a significant
| cultural clash with mathematical traditions. This tension recalls
| Michael Atiyah's warning against secrecy in research:
| "Mathematics thrives on openness; secrecy is anathema to its
| progress" (Atiyah, 1984).
|
| Engineering has always involved large amounts of both math and
| secrecy, what's different now?
| bo1024 wrote:
| AI is undergoing a transition from academic _research_ to
| industry _engineering_.
|
| (But the engineers want the benefits of academic research --
| going to conferences to give talks, credibility, intellectual
| prestige -- without paying the costs, e.g. actually sharing new
| knowledge and information.)
| analog31 wrote:
| It involves math at a research level, but from what I've
| observed, people in industry with engineering job titles make
| relatively little use of math. They will frequently tell you
| with that sheepish smile: "Oh, I'm not really a math person."
| Students are told with great confidence by older engineers that
| they'll never use their college math after they graduate.
|
| Not exactly AI by today's standards, but a lot of the math that
| they need has been rolled into their software tools. And Excel
| is quite powerful.
| xg15 wrote:
| > _One question generated particular concern: what would happen
| if an AI system produced a proof of a major conjecture like the
| Riemann Hypothesis, but the proof was too complex for humans to
| understand? Would such a result be satisfying? Would it advance
| mathematical understanding? The consensus seemed to be that while
| such a proof might technically resolve the conjecture, it would
| fail to deliver the deeper understanding that mathematicians
| truly seek._
|
| I think this is an interesting question. In a hypothetical SciFi
| world where we somehow provably know that AI is infallible and
| the results are always correct, you could imagine mathematicians
| grudgingly accepting some conjecture as "proven by AI" even
| without understanding the why.
|
| But for real-world AI, we know it can produce hallucinations and
| its reasoning chains can have massive logical errors. So if it
| came up with a proof that no one understands, how would we even
| be able to verify that the proof is indeed correct and not just
| gibberish?
|
| Or more generally, how do you verify a proof that you don't
| understand?
| tech_ken wrote:
| > Or more generally, how do you verify a proof that you don't
| understand?
|
| This is the big question! Computer-aided proof has been around
| forever. AI seems like just another tool from that box. Albeit
| one that has the potential to provide 'human-friendly' answers,
| rather than just a bunch of symbolic manipulation that must be
| interpreted.
| oersted wrote:
| Serious theorem-proving AIs always write the proof in a formal
| syntax where it is possible to check that the proof is correct
| without issue. The most popular such formal language is Lean,
| but there are many others. It's just like having a coding AI,
| it may write some function and you check if it compiles. If the
| AI writes a program/proof in Lean, it will only compile if the
| proof is correct. Checking the correctness of proofs is a much
| easier problem than coming up with the proof in the first
| place.
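|
| To make the "it compiles" analogy concrete, here's a toy
| sketch in Lean 4 (the theorem name is made up; only the core
| library is assumed):
|
|     -- The statement comes first; Lean's kernel accepts the
|     -- file only if the term really proves that statement.
|     theorem add_comm_example (a b : Nat) : a + b = b + a :=
|       Nat.add_comm a b
|
| If the term didn't prove the stated proposition, Lean would
| reject the file, just as a compiler rejects ill-typed code,
| so no human judgment is needed for the check itself.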
| nybsjytm wrote:
| > Checking the correctness of proofs is a much easier problem
| than coming up with the proof in the first place.
|
| Just so this isn't misunderstood: not much cutting-edge math
| can presently be coded in Lean. The famous exceptions (such as
| the results by Clausen-Scholze and Gowers-Green-Manners-Tao)
| have special characteristics which make them much more
| ground-level and easier to code in Lean.
|
| What's true is that it's very easy to check whether a
| Lean-coded proof is correct. But it's hard and time-consuming
| to formulate most math as Lean code. It's something many AI
| research groups are working on.
| zozbot234 wrote:
| > The famous exceptions (such as the results by
| Clausen-Scholze and Gowers-Green-Manners-Tao) have special
| characteristics which make them much more ground-level and
| easier to code in Lean.
|
| "Special characteristics" is really overstating it. It's
| just a matter of getting all the prereqs formalized in Lean
| first. That's a bit of a grind to be sure, but the Mathlib
| effort for Lean has the bulk of the undergrad curriculum
| and some grad subjects formalized.
|
| I don't think AI will be all that helpful wrt. this kind of
| effort, but it might help in some limited ways.
| nicf wrote:
| oersted's answer basically covers it, so I'm mostly just
| agreeing with them: the answer is that you use a computer. Not
| another AI model, but a piece of regular, old-fashioned
| software that has much more in common with a compiler than an
| LLM. It's really pretty closely analogous to the question "How
| do you verify that some code typechecks if you don't understand
| it?"
|
| In this hypothetical Riemann Hypothesis example, the only thing
| the human would have to check is that (a) the proof-
| verification software works correctly, and that (b) the
| statement of the Riemann Hypothesis at the very beginning is
| indeed a statement of the Riemann Hypothesis. This is orders of
| magnitude easier than proving the Riemann Hypothesis, or even
| than following someone else's proof!
| kkylin wrote:
| As Feynman once said [0]: "Physics is like sex. Sure, it may give
| some practical results, but that's not why we do it." I don't
| think it's any different for mathematics, programming, a lot of
| engineering, etc.
|
| I can see a day might come when we (research mathematicians, math
| professors, etc) might not exist as a profession anymore, but
| there will continue to be mathematicians. What we'll do to make a
| living when that day comes, I have no idea. I suspect many others
| will also have to figure that out soon.
|
| [0] I've seen this attributed to the Character of Physical Law
| but haven't confirmed it
| tech_ken wrote:
| Mathematics is, IMO, not the axioms, proofs, or theorems. It's
| the human process of organizing these things into conceptual
| taxonomies that appeal to what is ultimately an aesthetic
| sensibility (what "makes sense"), updating those taxonomies as
| human understanding and aesthetic preferences evolve, as well as
| practical considerations ('application'). Generating proofs of
| a statement is like a biologist identifying a new species:
| critical, but also just the start of the work. It's the
| macropatterns connecting the organisms that lead to the really
| important science, not just the individual units of study
| alone.
|
| And it's not that AI can't contribute to this effort. I can
| certainly see how a chatbot research partner could be super
| valuable for lit review, brainstorming, and even 'talking things
| through' (much like mathematicians get value from talking aloud).
| This doesn't even touch on the ability to generate potentially
| valid proofs, which I do think has a lot of merit. But the idea
| that we could totally outsource the work to a generative model
| seems impossible by definition. The point of the labor is to
| develop _human_ understanding; removing the human from the
| loop changes the nature of the endeavor entirely (basically
| into algorithm design).
|
| Similar stuff holds about art (at a high level, and glossing over
| 'craft art'); IMO art is an expressive endeavor. One person
| communicating a hard-to-express feeling to an audience. GenAI can
| obviously create really cool pictures, and this can be grist for
| art, but without some kind of mind-to-mind connection and empathy
| the picture is ultimately just an artifact. The human context is
| what turns the artifact into art.
| EigenLord wrote:
| Is it really a culture divide or is it an economic incentives
| divide? Many AI researchers _are_ mathematicians. Any theoretical
| AI research paper will typically be filled with eye-wateringly
| dense math. AI dissolves into math the closer you inspect it.
| It's math all the way down. What differs are the incentives.
| Math
| rewards openness because there's no real concept of a
| "competitive edge", you're incentivized to freely publish and
| share your results as that is how you get recognition and
| hopefully a chance to climb the academic ladder. (Maybe there
| might be a competitive spirit between individual mathematicians
| working on the same problems, but this is different than systemic
| market competition.) AI is split between being a scientific and
| capitalist pursuit; sharing advances can mean the difference
| between making a fortune or being outmaneuvered by competitors.
| It contaminates the motives. This is where the AI researcher's
| typical desire for "novel results" comes from as well, they are
| inheriting the values of industry to produce economic
| innovations. It's a tidier explanation to tie the culture
| differences to material motive.
| nybsjytm wrote:
| > Many AI researchers are mathematicians. Any theoretical AI
| research paper will typically be filled with eye-wateringly
| dense math. AI dissolves into math the closer you inspect it.
| It's math all the way down.
|
| There is a major caveat here. Most 'serious math' in AI papers
| is wrong and/or irrelevant!
|
| It's even the case for famous papers. Each lemma in Kingma and
| Ba's ADAM optimization paper is wrong, the geometry in McInnes
| and Healy's UMAP paper is mostly gibberish, etc...
|
| I think it's pretty clear that AI researchers (albeit surely
| with some exceptions) just don't know how to construct or
| evaluate a mathematical argument. Moreover the AI community (at
| large, again surely with individual exceptions) seems to just
| have pretty much no interest in promoting high intellectual
| standards.
| zipy124 wrote:
| I'd be interested to read about the gibberish in UMAP, I know
| the paper "An improvement of the convergence proof of the
| ADAM-Optimizer" for the lemma problem in the original ADAM
| but hadn't heard of the second one. Do you have any further
| info on it?
| mcguire wrote:
| Fundamentally, mathematics is about understanding why something
| is true or false.
|
| Modern AI is about "well, it looks like it works, so we're
| golden".
| nothrowaways wrote:
| You can't fake influence
| Sniffnoy wrote:
| > As Gauss famously said, there is "no royal road" to
| mathematical mastery.
|
| This is not the point, but the saying "there is no royal road to
| geometry" is far older than Gauss! It goes back at least to
| Proclus, who attributes it to Euclid.
| troymc wrote:
| I never understood that quote until recently.
|
| The story goes that King Ptolemy of Egypt wanted to learn
| geometry, but didn't want to work through Euclid's Elements.
| He wanted a faster route. But, "there is no royal road to
| geometry."
| NooneAtAll3 wrote:
| I feel like this grumbling can be summarized as "AI is
| engineering, not math" - and suddenly a lot of things make
| sense
|
| Why is the AI field so secretive? Because it's all trade
| secrets - and maybe soon to become patents. You don't give
| away precisely how semiconductor fabs work, only base-level
| research of "this direction is promising"
|
| Why is everyone pushed to add AI in? Because that's where the
| money is, that's where the product is.
|
| Why does AI need results fast? Because it's a production line,
| and you create and design stuff
|
| Even the core distinction mentioned - that AI is about
| "speculation and possibility" - that's all about tool
| experimentation and prototyping. It's all about building and
| constructing. Aka the Engineering/Technology letters of STEM
|
| I guess the next step is to ask "what to do next?". IMO, the
| math and AI fields should realise the divide and slowly
| diverge, keeping each other at arm's length. Just as engineers
| and programmers (not computer scientists) already do
| umutisik wrote:
| If AI can prove major theorems, it will likely do so by
| employing heuristics similar to those the mathematical
| community employs when searching for proofs and understanding.
| Studying AI-generated proofs, with the help of AI to decipher
| their contents, will help humans build that 'understanding' if
| that is desired.
|
| An issue in these discussions is that mathematics is at once
| an art, a sport, and a science. And the development of AI that
| can build
| 'useful' libraries of proven theorems means different things for
| each. The sport of mathematics will be basically over. The art of
| mathematics will thrive as it becomes easier to explore the
| mathematical world. For the science of mathematics, it's hard to
| say, it's been kind of shaky for ~50 years anyway, but it can
| only help.
| tylerneylon wrote:
| I agree with the overt message of the post -- AI-first folks tend
| to think about getting things working, whereas math-first people
| enjoy deeply understood theory. But I also think there's
| something missing.
|
| In math, there's an urban legend that the first Greek who proved
| sqrt(2) is irrational (sometimes credited to Hippasus of
| Metapontum) was thrown overboard to drown at sea for his
| discovery. This is almost certainly false, but it does capture
| the spirit of a mission in pure math. The unspoken dream is this:
|
| ~ "Every beautiful question will one day have a beautiful
| answer."
|
| At the same time, ever since the pure and abstract nature of
| Euclid's Elements, mathematics has gradually become a more
| diverse culture. We've accepted more and more kinds of
| "numbers":
| negative, irrational, transcendental, complex, surreal,
| hyperreal, and beyond those into group theory and category
| theory. Math was once focused on measurement of shapes or
| distances, and went beyond that into things like graph theory and
| probabilities and algorithms.
|
| In each of these evolutions, people are implicitly asking the
| question:
|
| "What is math?"
|
| Imagine the work of introducing the sqrt() symbol into ancient
| mathematics. It's strange because you're defining a symbol as
| answering a previously hard question (what x has x^2=something?).
| The same might be said of integration as the opposite of a
| derivative, or of sine defined in terms of geometric questions.
| Over and over again, new methods become part of the canon by
| proving to be useful and by having properties beyond their
| definition.
|
| AI may one day fall into this broader scope of math (or may
| already be there, depending on your view). If an LLM can give you
| a verified but unreadable proof of a conjecture, it's still true.
| If it can give you a crazy counterexample, it's still false. I'm
| not saying math should change, but that there's already a nature
| of change and diversity within what math is, and that AI seems
| likely to feel like a branch of this in the future; or a close
| cousin the way computer science already is.
| tylerneylon wrote:
| PS After I wrote my comment, I realized: of course, AI could
| one day get better at the things that make it not-perfect in
| pure math today:
|
| * AI could get better at thinking intuitively about math
| concepts.
|
| * AI could get better at looking for solutions people can
| understand.
|
| * AI could get better at teaching people about ideas that at
| first seem abstruse.
|
| * AI could get better at understanding its own thought, so
| that progress is not only a result, but also a method for
| future progress.
___________________________________________________________________
(page generated 2025-03-12 23:00 UTC)