Post AWiyimmz3AzLUGWOJ6 by aja@mathstodon.xyz
 (DIR) Post #AWixzFY9mmkdMmsBlY by niconiconi@mk.absturztau.be
       2023-06-15T18:49:26.104Z
       
       0 likes, 0 repeats
       
       Months ago I read a monograph on "measurement uncertainty" by a domain expert because I wanted to know what it really means when a voltmeter is said to be accurate within "5%". After going through the book page by page and closing it, I'm still uncertain. All I learned is that the experts are still arguing with each other about the theoretical foundations of measurement. #electronics
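       One common reading (though, per the book, apparently not a settled one) is to treat the "5%" spec as a GUM Type B evaluation: assume the instrument's error lies somewhere within ±5% of the reading, with a rectangular (uniform) distribution over that range, so the standard uncertainty is the half-width divided by √3. A minimal Python sketch of that assumption (the 10 V reading and the k = 2 coverage factor are illustrative, not from the book):

       import math

       reading = 10.0               # measured value in volts (illustrative)
       half_width = 0.05 * reading  # ±5% of the reading, treated as hard limits

       # GUM Type B, rectangular-distribution assumption:
       # standard uncertainty u = a / sqrt(3), where a is the half-width
       u = half_width / math.sqrt(3)

       # Expanded uncertainty with coverage factor k = 2 (roughly 95% coverage
       # if the resulting distribution is approximately normal)
       k = 2
       U = k * u

       print(f"standard uncertainty u = {u:.4f} V")
       print(f"expanded uncertainty U = {U:.4f} V (k = {k})")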
       
 (DIR) Post #AWiyYTD4zRaNrroC80 by niconiconi@mk.absturztau.be
       2023-06-15T18:55:48.084Z
       
       0 likes, 0 repeats
       
       According to this author, the current theoretical foundation of measurement has two problems. The first is that traditional statistics focuses on learning about a population by sampling from it, while measurement focuses on learning about exactly the same thing by sampling it many times, so the math and concepts from a statistics textbook have to be used in an adapted form. The other problem is that GUM, the current internationally agreed framework for measurement uncertainty from the BIPM, has some foundational issues because it mixes incompatible frequentist and Bayesian ideas in the same framework. In GUM, the traditional concept of "error" is deprecated in favor of "uncertainty", because "error" is defined in terms of a "true value", which is often unknowable (another reason is that "error" is associated with old-fashioned methods of error analysis). The only thing you can know is the degree of uncertainty in a measurement result, not its "error". So this is kind of like the idea of subjective probability. Yet, the author argued, GUM's mathematical treatment is mostly a tried-and-tested frequentist system, so in a sense it's self-contradictory. #electronics
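       For context, the largely frequentist machinery being referred to is GUM's law of propagation of uncertainty (the standard formula for uncorrelated inputs, not something specific to this book): the standard uncertainties of the input quantities are combined into a combined standard uncertainty and then scaled by a coverage factor k,

       u_c^2(y) = \sum_{i=1}^{N} \left( \frac{\partial f}{\partial x_i} \right)^2 u^2(x_i), \qquad U = k \, u_c(y), \qquad y = f(x_1, \dots, x_N)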
       
 (DIR) Post #AWiyimmz3AzLUGWOJ6 by aja@mathstodon.xyz
       2023-06-15T18:56:46Z
       
       0 likes, 0 repeats
       
       @niconiconi My students sometimes struggle with precision vs. accuracy.
       
 (DIR) Post #AWiyinSSYyAJYtvVxI by niconiconi@mk.absturztau.be
       2023-06-15T18:57:37.736Z
       
       0 likes, 0 repeats
       
       @aja@mathstodon.xyz That's the easy stuff. This book discusses theoretical issues over the very concept of "accuracy" itself.
       
 (DIR) Post #AWizdpeR56m4s0j0Xg by niconiconi@mk.absturztau.be
       2023-06-15T19:07:58.510Z
       
       0 likes, 0 repeats
       
       So statisticians talk about "confidence intervals" and experimenters talk about "uncertainty intervals", but these two concepts are not really equivalent: the interpretations of the two kinds of error bars are slightly different. In measurement, the "repeat" in "repeating it infinitely many times" needs a more relaxed interpretation than the strict frequentist one. But I still cannot understand the arcane details so far... #electronics
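       A concrete way to see the overlap: for repeated readings of the same quantity, the GUM Type A standard uncertainty and the classical confidence-interval half-width come from the same arithmetic; what differs is the interpretation. A minimal Python sketch (the readings are made-up numbers, and the 95% t factor is the usual textbook choice, not something from the book):

       import statistics

       # Made-up repeated readings of the same voltage (illustrative data)
       readings = [10.03, 9.98, 10.01, 10.05, 9.97, 10.02, 10.00, 10.04]
       n = len(readings)

       mean = statistics.mean(readings)
       s = statistics.stdev(readings)   # sample standard deviation
       u = s / n ** 0.5                 # GUM Type A standard uncertainty of the mean

       t95 = 2.365                      # two-sided 95% t factor for n - 1 = 7 dof
       half_width = t95 * u

       print(f"mean = {mean:.4f} V, standard uncertainty u = {u:.4f} V")
       print(f"95% interval: ±{half_width:.4f} V")
       # Same numbers, two readings: a frequentist statement about the long-run
       # coverage of the procedure, or an uncertainty statement about this result.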
       
 (DIR) Post #AWj04TTDepOLtcBNPk by astrid@fedi.astrid.tech
       2023-06-15T19:09:58.113804Z
       
       0 likes, 0 repeats
       
       @niconiconi I mean isn't that just what the metric system people do? like they go to insane lengths to be like "hm the meter is this many cesium vibes, the kg is 1mol of whatever"
       
 (DIR) Post #AWj04UQm5P1wsK2wF6 by niconiconi@mk.absturztau.be
       2023-06-15T19:12:46.351Z
       
       0 likes, 0 repeats
       
       @astrid@fedi.astrid.tech Not that kind of physical foundation; that would be an easy issue for the scientists and engineers. The problem here is the statistical-philosophical kind of foundation (think "frequentist vs. Bayesian", which is only one of the many problems discussed in the book).
       
 (DIR) Post #AWj0ixwllWQY07JPwu by astrid@fedi.astrid.tech
       2023-06-15T19:14:06.646099Z
       
       0 likes, 0 repeats
       
       @niconiconi ah, right, like they go into the whole epistemology problem
       
 (DIR) Post #AWj0iyYLVoU7setQWG by niconiconi@mk.absturztau.be
       2023-06-15T19:20:05.762Z
       
       0 likes, 0 repeats
       
       @astrid@fedi.astrid.tech Exactly. One problem is that traditional statistics deals with learning about a whole population by sampling from it, but measurement deals with learning something by sampling exactly the same thing, possibly with the same instrument in the same lab. So you can't just copy things from a statistics textbook without some modifications, and the very interpretation of the probability distribution becomes problematic. The author then launches into a lengthy treatment at both the epistemological and the mathematical level. So I still don't know the answer to "what does it really mean when a voltmeter is 5% accurate" after closing the book.
       
 (DIR) Post #AWj1fWkaWb9yjoXSZU by wakame@tech.lgbt
       2023-06-15T19:20:30Z
       
       0 likes, 0 repeats
       
       @astrid @niconiconi
       1. For dumb people like me, a simple explanation of "bayesian vs. frequentist" (sorry for medium): https://towardsdatascience.com/statistics-are-you-bayesian-or-frequentist-4943f953f21b
       2. Frequentists are weird.
       
 (DIR) Post #AWj1fXZdSnzdI8QE8e by niconiconi@mk.absturztau.be
       2023-06-15T19:30:41.717Z
       
       0 likes, 0 repeats
       
       @wakame@tech.lgbt @astrid@fedi.astrid.tech The problem here is that frequentist approaches work nicely within the pre-existing mathematical machinery that forms the bulk of classical statistics, with a proven history of problem-solving, so they're too useful to ignore. Another problem is that Bayesian approaches have their own quirks that can be problematic in practical applications. The principle of maximum entropy? Statements dreamed up by the utterly deranged.
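       For anyone wondering what the maximum-entropy jab refers to: in the Bayesian-flavoured treatments (e.g. GUM Supplement 1, as far as I understand it), incomplete knowledge is turned into a probability distribution by picking the distribution with the largest entropy that is still consistent with what you know. Two standard results of that rule:

       - if all you know is that the value lies in [a, b], you get the rectangular (uniform) distribution, p(x) = 1 / (b - a) on [a, b];
       - if all you know is the mean μ and variance σ² (with support on all of ℝ), you get the normal distribution, p(x) = (1 / (σ√(2π))) exp(−(x − μ)² / (2σ²)).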