[HN Gopher] Can a single AI model advance any field of science?
___________________________________________________________________
Can a single AI model advance any field of science?
Author : LAsteNERD
Score : 33 points
Date : 2025-04-22 19:02 UTC (3 hours ago)
(HTM) web link (www.lanl.gov)
(TXT) w3m dump (www.lanl.gov)
| Tycho wrote:
| Should be possible to backtest by training LLMs on historical
| datasets and then probing them to see if they can re-discover
| things that were discovered after their training data cut-off.
| What sort of prompts would push them to make a breakthrough?
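A minimal sketch of what such a backtest harness could look like.
Everything here is illustrative: the dataset format, the scoring,
and the model callable (a wrapper around whatever cutoff-limited
LLM is being probed) are assumptions, not an existing benchmark.

    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class HeldOutDiscovery:
        pre_cutoff_context: str  # background known before the cutoff
        question: str            # neutral framing of the open problem
        known_answer: str        # the post-cutoff discovery

    def backtest(model: Callable[[str], str],
                 cases: list[HeldOutDiscovery]) -> float:
        """Fraction of held-out discoveries the model reproduces."""
        hits = 0
        for case in cases:
            prompt = f"{case.pre_cutoff_context}\n\n{case.question}"
            answer = model(prompt)
            # Naive string match; a real study would need expert or
            # automated verification of equivalence.
            if case.known_answer.lower() in answer.lower():
                hits += 1
        return hits / len(cases) if cases else 0.0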
| Q6T46nT668w6i3m wrote:
| It'd be tricky to avoid inadvertently leaking in the prompt
| since many discoveries seem obvious in retrospect.
| monoid73 wrote:
| exactly. hindsight bias makes it really hard to separate
| genuine inference from subtle prompt leakage. even framing
| the question can accidentally steer it toward the right
| answer. would be interesting to try with completely synthetic
| problems first just to test the method.
| parpfish wrote:
| Maybe you could do it with math research?
|
| First, give it the abstract of a fresh paper that it couldn't
| have been trained on, then see if it can come up with the same
| proofs, i.e. whether it can replicate the logic knowing the
| conclusion.
|
| Second, you could give it all the papers cited in the intro
| and ask a series of leading questions like "based on this
| work, what new results can you derive?"
| thorum wrote:
| I think that's an opportunity, not a problem. If prompt +
| hint generates a verifiable solution then you can build
| systems that propose hints, either randomly or by exploring a
| search space, and keep trying combinations until you hit on
| something that works.
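A sketch of that propose-and-verify loop. The helper names are
hypothetical; the key assumption is that an external verify step
exists (a proof checker, a simulator, a downstream experiment)
that can score candidate outputs.

    import random
    from typing import Callable, Iterable, Optional

    def hint_search(base_prompt: str,
                    hint_pool: Iterable[str],
                    model: Callable[[str], str],
                    verify: Callable[[str], bool],
                    max_tries: int = 1000) -> Optional[str]:
        """Keep pairing the prompt with candidate hints until the
        model's output passes the external verifier."""
        hints = list(hint_pool)
        for _ in range(max_tries):
            # Randomly sample a small combination of hints; a smarter
            # strategy could explore the hint space systematically.
            chosen = random.sample(hints, k=min(2, len(hints)))
            prompt = base_prompt + "\n\nHints:\n" + "\n".join(chosen)
            candidate = model(prompt)
            if verify(candidate):
                return candidate
        return None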
| badgersnake wrote:
| AlphaFold already did. Or do we only count AI if it's an LLM now?
| analog31 wrote:
| And going even further, curve fitting.
| grunder_advice wrote:
| Is this a recruiting attempt by Los Alamos? AI/ML for science, as
| this broad field used to be known, is interesting. Some five years
| ago there was a real craze where every STEM lab at my university
| was doing some form of ML project. I think by now people have
| learned what works and what doesn't. Climate models for example
| have been quite successful. Possibly the reason is that they
| learn directly from collected data, rather than trying to emulate
| the output of simulations. Attempts to build similar models for
| fluid dynamics have been rather dismal. In general, big models
| and big data result in useful models, even if only because these
| models seem to be somehow interpolating based on similar
| training data points. Trying to replace classical physics based
| models with ML models trained on simulation data does not seem to
| work. The model is only ever capable of emulating a physically
| plausible output when the input is close enough to the training
| data, and even then only when the system isn't chaotic. For
| applications where you are generating a sample to be used in a
| downstream task, ML models trained on lots of data can be very
| useful. You only need a few lucky guesses, that you can verify
| downstream, to end up with some useful result. In short, there is
| no magic to it. It's a useful tool that can be regarded as both a
| search algorithm and an optimization algorithm.
| season2episode3 wrote:
| Check out Fourier Neural Operators; they claim to have a pretty
| solid solver for fluid flow equations (Navier-Stokes, etc.).
| grunder_advice wrote:
| I am already acquainted with them but to be honest, I am no
| longer in the field, so I am not able to comment on the latest
| developments. However, as of two years ago, the consistent
| result was that you could get models that reproduce really
| good physics for problems in the same physical regimes as the
| training data, but such models had poor generalizability, so
| depending on the use case, they weren't of much use. The only
| exception I know of is FourCastNet, which is an FNO-based
| weather model from NVIDIA.
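For context on the Fourier Neural Operators mentioned above: the
core building block is a spectral convolution, which transforms
the input to Fourier space, applies learned weights to a truncated
set of modes, and transforms back. A minimal 1D sketch in PyTorch
(illustrative only; real FNOs add pointwise convolutions, several
stacked layers, and 2D/3D variants):

    import torch
    import torch.nn as nn

    class SpectralConv1d(nn.Module):
        """Single spectral layer: FFT -> weight the lowest
        `modes` frequencies -> inverse FFT."""
        def __init__(self, in_ch: int, out_ch: int, modes: int):
            super().__init__()
            self.modes = modes
            scale = 1.0 / (in_ch * out_ch)
            self.weight = nn.Parameter(
                scale * torch.randn(in_ch, out_ch, modes,
                                    dtype=torch.cfloat))

        def forward(self, x):  # x: (batch, in_ch, n_points)
            x_ft = torch.fft.rfft(x)  # (batch, in_ch, n//2 + 1)
            out_ft = torch.zeros(x.size(0), self.weight.size(1),
                                 x_ft.size(-1), dtype=torch.cfloat,
                                 device=x.device)
            # Mix channels for the retained low-frequency modes only.
            out_ft[:, :, :self.modes] = torch.einsum(
                "bim,iom->bom", x_ft[:, :, :self.modes], self.weight)
            return torch.fft.irfft(out_ft, n=x.size(-1))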
| raddan wrote:
| I think an important question to ask is whether your scientific
| task is primarily one of interpolation, or one of
| extrapolation. LLMs appear to be excellent interpolators. They
| are bad at extrapolation.
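A toy illustration of that interpolation/extrapolation split,
using plain curve fitting rather than an LLM (numpy; it only shows
the general failure mode of fitted models outside their training
range):

    import numpy as np

    # Fit a degree-9 polynomial to sin(x) on the "training range".
    x_train = np.linspace(0, 2 * np.pi, 200)
    coeffs = np.polyfit(x_train, np.sin(x_train), deg=9)

    def model(x):
        return np.polyval(coeffs, x)

    # Interpolation: inside the training range the fit is tight.
    x_in = 1.7
    print(abs(model(x_in) - np.sin(x_in)))    # tiny error

    # Extrapolation: outside the range the same model falls apart.
    x_out = 4 * np.pi
    print(abs(model(x_out) - np.sin(x_out)))  # error blows up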
| immibis wrote:
| Climate models aren't LLMs.
| da_chicken wrote:
| They're also not AI.
|
| It remains to be seen exactly how much a climate model can
| be improved by AI. They're already based on woefully sparse
| data points.
| LinuxAmbulance wrote:
| Bit short on details other than "Let's see what LLMs can predict
| when we train them on various scientific data sets."
|
| Certainly a good thing to try, but the article feels like a PR
| piece more than anything else, as it's not answering anything,
| just giving a short overview of a few things they're trying with
| no data on those things whatsoever.
|
| It does fit in with the "Throw LLM spaghetti at a wall and see
| what sticks" trend these days though.
| bzmrgonz wrote:
| I think our creativity has not yet been duplicated in AI, so for
| maximum results we need to pair AI with a human expert or a
| panel of human experts and innovate by committee. AI brings to
| the table vast memory, instant recall and, most importantly,
| tireless pursuit, while the human element can provide creative
| guidance and prompting. The trick is in curating the BOK (body
| of knowledge) used to train GENERATIVE AI. I wonder what a
| curriculum designed specifically for AI would look like?
| caseyy wrote:
| Yes. ML has advanced many fields related to modelling:
| meteorology, climate, molecular modelling. Classification models
| have done much for genomics, particle physics, and other fields
| where experiments produce inhuman amounts of data.
|
| DeepVariant, Enformer, ParticleNet, DeepTau, etc. are some well-
| known individual models that have advanced branches of science.
| And there are the very famous ones, like AlphaFold (Nobel in
| Chemistry 2024).
|
| We need to think of AI not as a product (chats, agents, etc.),
| but as neural nets (AlexNet). Unfortunately, large companies are
| "chat-washing" these tremendously useful technologies.
| janalsncm wrote:
| ML was used to sharpen the recent image of a black hole:
| https://physics.aps.org/articles/v16/63
|
| ML is more of a bag of techniques that can be applied to many
| things than a pure domain. Of course you can study the
| properties of neural networks for their own sake but it's more
| common as a means to an end.
| klysm wrote:
| Surely they mean LLMs
| janalsncm wrote:
| I would be interested in machine learning for scientific
| research. Something more "physical" than optimizing software.
|
| I checked some of the nuclear fusion startups and didn't see
| anything.
___________________________________________________________________
(page generated 2025-04-22 23:00 UTC)