Post APhCUvBT4G2PDAETRo by jaztrophysicist@astrodon.social
 (DIR) Post #APhCUqddy4bZ76NPAe by jaztrophysicist@astrodon.social
       2022-11-17T10:11:25Z
       
       0 likes, 0 repeats
       
      Curious about what the community reaction is going to be when some groups start putting forward big discovery claims based on deep-learning analyses. How much trust are we supposed to put in such results? https://arxiv.org/abs/2211.09112
       
 (DIR) Post #APhCUr9Y3S7qi3Ists by j_bertolotti@mathstodon.xyz
       2022-11-17T10:22:47Z
       
       0 likes, 0 repeats
       
       @jaztrophysicist I am from a completely different community, but IMHO the big decider is how easy it is to independently verify the claim. If you discover something via ML, and this something can be checked to be true, then it is no different than stumbling on the right answer by chance (which is 100% ok). If your claim is impossible to verify, then you don't have a claim, you just have a wild guess.
       
 (DIR) Post #APhCUrYMZByVx0uhZw by jaztrophysicist@astrodon.social
       2022-11-17T10:31:11Z
       
       0 likes, 0 repeats
       
      @j_bertolotti The subject I linked to is precisely the kind of major question that has been hard to study by several independent means, and whose detection has been plagued with all kinds of issues and controversies. Data extrapolation in this context looks like a particularly risky business... There will be several detection efforts in the next decade, but I fail to see how DL fundamentally improves anything if it needs to be backed up by a classic independent statistical analysis anyway.
       
 (DIR) Post #APhCUs2qjqMTTZB368 by j_bertolotti@mathstodon.xyz
       2022-11-17T10:46:52Z
       
       0 likes, 0 repeats
       
       @jaztrophysicist ML might provide a good initial guess on where to start looking. But I am highly sceptical of any ML-based result that can't be easily verified.
       
 (DIR) Post #APhCUsWcx8BGxv6pVo by pkoppenburg@sciencemastodon.com
       2022-11-17T11:51:14Z
       
       0 likes, 0 repeats
       
       @j_bertolotti @jaztrophysicist There have been multiple attempts at throwing LHC data at ML trained on SM simulation. I would not scream new physics if an anomaly was found but would investigate the anomalous signature further.
       
 (DIR) Post #APhCUsxDMHRqINY3xA by jaztrophysicist@astrodon.social
       2022-11-17T11:58:27Z
       
       0 likes, 0 repeats
       
      @pkoppenburg @j_bertolotti The point is: if the DL is introduced precisely to fill a hole, i.e. to circumvent a fundamental limitation in the data that normally prevents you from drawing conclusions with standard methods, then by construction there is no alternative way to investigate, since the DL was brought in exactly because a proper analysis was impossible.
       
 (DIR) Post #APhCUtY59CwG8inVQ0 by wesselvalk@mastodon.social
       2022-11-17T12:07:55Z
       
       0 likes, 0 repeats
       
      @jaztrophysicist @pkoppenburg @j_bertolotti In that case, if there is no alternative, DL is still just interpolation/extrapolation.
       
 (DIR) Post #APhCUtyJZfvFS54SJ6 by wesselvalk@mastodon.social
       2022-11-17T12:08:48Z
       
       0 likes, 0 repeats
       
       @jaztrophysicist @pkoppenburg @j_bertolotti Deep learning == fitting some clever function. Nothing more.
       
 (DIR) Post #APhCUuOtypBomXVgkS by jaztrophysicist@astrodon.social
       2022-11-17T12:12:30Z
       
       0 likes, 0 repeats
       
      @wesselvalk @pkoppenburg @j_bertolotti Of course I personally agree with that. My question was not entirely ingenuous, and has more to do with how a community as a whole will react in this case. Are institutions going to pretend everything is fine and communicate a major discovery in that case?
       
 (DIR) Post #APhCUumeYWBjyCcelk by wesselvalk@mastodon.social
       2022-11-17T12:19:20Z
       
       0 likes, 0 repeats
       
       @jaztrophysicist @pkoppenburg @j_bertolotti I put my money on the community reacting to this just as to any reconstruction method.
       
 (DIR) Post #APhCUvBT4G2PDAETRo by jaztrophysicist@astrodon.social
       2022-11-17T12:22:40Z
       
       0 likes, 0 repeats
       
      @wesselvalk @pkoppenburg @j_bertolotti So you think this will be normalised (and therefore accepted in the mainstream as a proper discovery)?
       
 (DIR) Post #APhCUvkuwSOUz6omhc by franco_vazza@mastodon.social
       2022-11-17T12:58:14Z
       
       0 likes, 0 repeats
       
      @jaztrophysicist @wesselvalk @pkoppenburg @j_bertolotti My cheap take is that I would "believe" discoveries made through ML only when they can be validated with a more traditional approach. Which probably makes me extremely out of fashion, despite my teenage spirit.
       
 (DIR) Post #APhCUw8JXT6q9flTAe by jaztrophysicist@astrodon.social
       2022-11-17T13:04:41Z
       
       0 likes, 0 repeats
       
      @franco_vazza @wesselvalk @pkoppenburg @j_bertolotti So, in another reply, my colleague @FredericPaletou posted a video of a seminar by Alain Connes, and during the Q&A he is asked about ML. He takes the perfect example of proving the Riemann hypothesis. How do you react if ML tells you it's true while mathematicians still feel hopeless about proving it with their traditional methods?
       
 (DIR) Post #APhCUwVi8TpBKEi9dg by j_bertolotti@mathstodon.xyz
       2022-11-17T13:16:29Z
       
       0 likes, 0 repeats
       
      @jaztrophysicist @franco_vazza @wesselvalk @pkoppenburg @FredericPaletou If ML comes up with a counterexample for the Riemann hypothesis, we can check that it is indeed a counterexample, and thus that the hypothesis as stated is false. If ML just says "it is true", that is just another bit of entropy added to the universe.
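      [Editor's note: a minimal sketch of the verification asymmetry described above; the function name and the factoring example are illustrative choices, not from the thread. Verifying a claimed witness can be one cheap operation even when finding it is hard, and the check is indifferent to how the candidate was produced (ML, luck, or brute force).]

      ```python
      def verify_factorisation(n, factors):
          """Accept a claimed factorisation only if it multiplies back to n.

          The verifier is independent of whatever produced the candidate:
          a single multiplication settles the claim either way.
          """
          product = 1
          for f in factors:
              if f < 2:  # reject trivial "factors" such as 1 or negatives
                  return False
              product *= f
          return product == n

      # A black box (stand-in for an ML model) proposes candidates;
      # we trust nothing until it passes the independent check.
      print(verify_factorisation(8051, [83, 97]))  # True: 83 * 97 == 8051
      print(verify_factorisation(8051, [89, 91]))  # False: 89 * 91 == 8099
      ```

      A bare "it is true" from the black box, by contrast, comes with no such witness, and so leaves nothing to check.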
       
 (DIR) Post #APhCUwvEbaF0bOeXQG by tiago@social.skewed.de
       2022-11-17T13:17:47Z
       
       0 likes, 0 repeats
       
       @j_bertolotti @jaztrophysicist @franco_vazza @wesselvalk @pkoppenburg @FredericPaletou Oh, it would be more than one bit of entropy added...