[HN Gopher] Feynman vs. Computer
___________________________________________________________________
Feynman vs. Computer
Author : cgdl
Score : 51 points
Date : 2025-12-04 16:03 UTC (6 hours ago)
(HTM) web link (entropicthoughts.com)
(TXT) w3m dump (entropicthoughts.com)
| eig wrote:
| What is the advantage of this Monte Carlo approach over a typical
| numerical integration method (like Runge-Kutta)?
| MengerSponge wrote:
| Typical numerical methods are faster and way cheaper for the
| same level of accuracy in 1D, but it's trivial to integrate
| over a surface, volume, hypervolume, etc. with Monte Carlo
| methods.
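The trade-off above can be sketched in a few lines of Python. This is a toy comparison, assuming the integrand f(x) = x^2 on [0, 1] (exact value 1/3); the function names are illustrative, not from the article:

```python
import random

def f(x):
    # toy integrand with a known answer: the integral of x^2 on [0, 1] is 1/3
    return x * x

def trapezoid(f, a, b, n):
    # composite trapezoid rule with n panels
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        total += f(a + i * h)
    return total * h

def monte_carlo(f, a, b, n, seed=0):
    # plain Monte Carlo: average f at uniform random points, scale by (b - a)
    rng = random.Random(seed)
    return (b - a) * sum(f(rng.uniform(a, b)) for _ in range(n)) / n

exact = 1 / 3
print(abs(trapezoid(f, 0, 1, 1000) - exact))    # tiny, ~1e-7
print(abs(monte_carlo(f, 0, 1, 1000) - exact))  # noisier at the same n
```

At the same n, the deterministic rule is far more accurate in 1D; the Monte Carlo estimate only tightens like 1/sqrt(n), but it generalizes to any dimension unchanged.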
| jgalt212 wrote:
  | The writer would have been well served to discuss why he
  | chose Monte Carlo over simply summing up all the small
  | trapezoids.
| adrianN wrote:
| At least if you can sample the relevant space reasonably
| accurately, otherwise it becomes really slow.
| kens wrote:
| I was wondering the same thing, but near the end, the article
| discusses using statistical techniques to determine the
| standard error. In other words, you can easily get an idea of
| the accuracy of the result, which is harder with typical
| numerical integration techniques.
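A hedged sketch of that error estimate: the standard error falls straight out of the sample standard deviation of the draws. Toy integrand sin(x) on [0, 1], exact value 1 - cos(1); names are illustrative:

```python
import math
import random

def mc_with_error(f, a, b, n, seed=0):
    # Monte Carlo estimate of the integral plus its standard error,
    # computed from the sample standard deviation of the function values
    rng = random.Random(seed)
    samples = [f(rng.uniform(a, b)) for _ in range(n)]
    mean = sum(samples) / n
    var = sum((s - mean) ** 2 for s in samples) / (n - 1)
    estimate = (b - a) * mean
    stderr = (b - a) * math.sqrt(var / n)
    return estimate, stderr

est, err = mc_with_error(math.sin, 0.0, 1.0, 10_000)
exact = 1.0 - math.cos(1.0)  # about 0.4597
print(est, "+/-", err)       # the true value should land within a few stderr
```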
| ogogmad wrote:
| Numerical integration using interval arithmetic gets you the
| same thing but in a completely rigorous way.
| edschofield wrote:
| Numerical integration methods suffer from the "curse of
| dimensionality": they require exponentially more points in
| higher dimensions. Monte Carlo integration methods have an
| error that is independent of dimension, so they scale much
| better.
|
| See, for example,
| https://ww3.math.ucla.edu/camreport/cam98-19.pdf
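A rough illustration of the scaling argument, using a toy integrand x1^2 + ... + xd^2 on the unit cube (exact mean d/3). The grid count is the point here, not the particular integrand:

```python
import random

def grid_points(k, d):
    # a tensor-product grid with k points per axis needs k**d evaluations
    return k ** d

def mc_mean(d, n, seed=0):
    # Monte Carlo estimate of E[x1^2 + ... + xd^2] over the unit cube;
    # the exact value is d/3, and the error shrinks like 1/sqrt(n)
    # regardless of d
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        total += sum(rng.random() ** 2 for _ in range(d))
    return total / n

print(grid_points(10, 10))   # 10**10 points just for 10 per axis in 10-D
print(mc_mean(10, 20_000))   # close to 10/3 with a fixed 2e4-sample budget
```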
| a-dub wrote:
  | as i understand it: numerical methods are analytically inspired
  | and computationally efficient, and smooth out noise from
  | sampling, floating point error, etc., whereas monte carlo is
  | computationally expensive brute-force random sampling, where
  | you can improve accuracy by throwing more compute at the
  | problem.
| JKCalhoun wrote:
| As a hobbyist, I'm playing with analog computer circuits right
| now. If you can match your curve with a similar voltage profile,
| a simple analog integrator (an op-amp with a capacitor connected
| in feedback) will also give you the area under the curve (also as
| a voltage of course).
|
  | Analog circuits (and op-amps generally) are surprisingly cool.
  | I know, kind of off on a tangent here, but I have _integration_
  | on the brain lately. You say "4 lines of Python", and I say "1
  | op-amp".
| dreamcompiler wrote:
| Yep. This is also how you solve differential equations with
| analog computers. (You need to recast them as integral
| equations because real-world differentiators are not well-
| behaved, but it still works.)
|
| https://i4cy.com/analog_computing/
| ogogmad wrote:
| How does this compare to the Picard-Lindelof theorem and the
| technique of Picard iteration?
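For a concrete sense of the connection: Picard iteration turns y' = y, y(0) = 1 into the integral equation y(x) = 1 + integral from 0 to x of y(t) dt and iterates it, which is the same recast-as-integral idea. A minimal numerical sketch on a grid, with the trapezoid rule standing in for the analog integrator (all names and parameters here are illustrative):

```python
import math

# grid on [0, 1]
N = 1000
h = 1.0 / N
y = [1.0] * (N + 1)  # initial guess y_0(x) = 1

for _ in range(30):  # Picard iterations: y_{k+1}(x) = 1 + integral_0^x y_k
    integral = 0.0
    new_y = [1.0]
    for i in range(1, N + 1):
        integral += 0.5 * (y[i - 1] + y[i]) * h  # trapezoid step
        new_y.append(1.0 + integral)
    y = new_y

print(abs(y[-1] - math.e))  # y(1) converges to e
```

Each sweep adds roughly one more term of the Taylor series of e^x, which is the Picard-Lindelof convergence argument playing out numerically.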
| addaon wrote:
| One of my favorite circuits from Korn & Korn [0] is an
| implementation of an arbitrary function of a single variable.
| Take an oscilloscope-style display tube. Put your input on the
| X axis as a deflection voltage. Close a feedback loop on the Y
| axis with a photodiode, and use the Y axis deflection voltage
| as your output. Cut your function of one variable out of
| cardboard and tape to the front of the tube.
|
| [0] https://www.amazon.com/Electronic-Analog-Computers-
| D-c/dp/B0...
| bananaflag wrote:
| > I hear that in electronics and quantum dynamics, there are
| sometimes integrals whose value is not a number, but a function,
| and knowing that function is important in order to know how the
| thing it's modeling behaves in interactions with other things.
|
| I'd be interested in this. So finding classical closed form
| solutions is the actual thing desired there?
| morcus wrote:
| I think what the author was alluding to was the path integral
| formulation [of quantum mechanics] which was advanced in large
| part by Feynman.
|
| It's not that finding closed form solutions is what matters (I
| don't think most path integrals would have closed form
| solutions), but that the integration is done over the space of
  | functions, not over Euclidean space (or a manifold in Euclidean
| space, etc...)
| messe wrote:
| An integral trick I picked up from a lecturer at university: if
| you know the result has to be of the form ax^n for some a that's
| probably rational and some integer n but you're feeling really
| lazy and/or it's annoying to simplify (even for mathematica),
| just plug in a transcendental value for x like Zeta[3].
|
  | Then just divide by powers of that transcendental number until
  | you have something that looks rational. That'll give you a and
  | n. It's more or less numerical dimensional analysis.
  |
  | It's not that useful for complicated integrals, but when you're
  | feeling lazy it's a fucking godsend to know what the answer
  | should be before you've proven it.
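A toy sketch of the trick, using pi as the stand-in transcendental rather than Zeta[3], and a hypothetical black box returning 3*x**4/7 (both choices are illustrative, not from the comment):

```python
import math
from fractions import Fraction

def recover_a_n(f, x, max_n=10, tol=1e-12):
    # treat f as a black box known to be a * x**n: divide f(x) by powers
    # of the transcendental x until the quotient snaps to a small rational
    value = f(x)
    for n in range(max_n + 1):
        candidate = value / x ** n
        frac = Fraction(candidate).limit_denominator(1000)
        if abs(candidate - float(frac)) < tol:
            return frac, n
    return None

# hypothetical black box standing in for a lazy symbolic computation
a, n = recover_a_n(lambda x: 3 * x ** 4 / 7, math.pi)
print(a, n)  # 3/7 4
```

The transcendence matters: a rational sample point could collide with a small fraction at the wrong power and give a false match.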
| Animats wrote:
| Good numerical integration is easy, because summing smooths out
| noise. Good numerical differentiation is hard, because noise is
| amplified.
|
| Conversely, good symbolic integration is hard, because you can
| get stuck and have to try another route through a combinatoric
| maze. Good symbolic differentiation is easy, because just
| applying the next obvious operation usually converges.
|
| Huh.
|
| Mandatory XKCD: [1]
|
| [1] https://xkcd.com/2117/
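The asymmetry shows up directly if you add a little noise to samples of sin(x): a trapezoid sum averages the noise away, while a finite difference divides it by h. A hedged sketch with noise of standard deviation 1e-4 and step h = 1e-3 (parameters chosen for illustration):

```python
import math
import random

rng = random.Random(0)
h = 1e-3
xs = [i * h for i in range(1001)]  # grid on [0, 1]
noisy = [math.sin(x) + rng.gauss(0.0, 1e-4) for x in xs]

# integration averages the noise away: the error stays near the noise level
integral = sum(0.5 * (noisy[i] + noisy[i + 1]) * h for i in range(1000))
print(abs(integral - (1.0 - math.cos(1.0))))  # tiny

# differentiation divides the noise by h: error amplified by roughly 1/h
deriv = (noisy[501] - noisy[500]) / h
print(abs(deriv - math.cos(xs[500])))  # can be off by ~0.1
```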
| kkylin wrote:
| That's exactly right. A couple more things:
|
  | - Differentiating a function composed of simpler pieces always
| "converges" (the process terminates). One just applies the
| chain rule. Among other things, this is why automatic
| differentiation is a thing.
|
| - If you have an analytic function (a function expressible
| locally as a power series), a surprisingly useful trick is to
| turn differentiation into integration via the Cauchy integral
| formula. Provided a good contour can be found, this gives a
| nice way to evaluate derivatives numerically.
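A minimal sketch of that trick: on a circle of radius r around 0, the Cauchy integral formula f^(k)(0) = k!/(2*pi*i) * contour integral of f(z)/z^(k+1) dz reduces to an average of f(z)/z^k over equally spaced angles, and the periodic trapezoid rule converges extremely fast. Function name and defaults are illustrative:

```python
import cmath
import math

def cauchy_derivative(f, k, r=1.0, n=64):
    # with z = r*exp(i*theta), dz = i*z*dtheta, so the contour integral
    # becomes k! times the average of f(z)/z**k over n equispaced angles
    total = 0.0 + 0.0j
    for m in range(n):
        z = r * cmath.exp(2j * math.pi * m / n)
        total += f(z) / z ** k
    return math.factorial(k) * (total / n).real

print(cauchy_derivative(cmath.exp, 3))  # third derivative of exp at 0 is 1
```

No cancellation-prone small step sizes appear anywhere, which is why this is so much better behaved than finite differences for analytic functions.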
| ogogmad wrote:
| The usage of confidence intervals here reminds me of the clearest
| way to see that integration is a computable operator, to the same
| degree that a function like sin() or sqrt() is computable. It's
| true thanks to a natural combination of (i) interval arithmetic
| and (ii) the "Darboux integral" approach to defining integration.
| So, intervals can do magic.
| 8bitsrule wrote:
  | Cool how the computer versions seem to work well _as long as
  | renormalization isn't involved_.
| ForOldHack wrote:
  | I would bet on Feynman any day of the week. Numerical methods
  | came up in 'Hidden Figures', and her solution was to use Euler's
  | method to move from an elliptical orbit to a parabolic descent.
___________________________________________________________________
(page generated 2025-12-04 23:00 UTC)