[HN Gopher] Polynomial Interpolation
___________________________________________________________________
Polynomial Interpolation
Author : atan2
Score : 42 points
Date : 2023-02-08 11:05 UTC (11 hours ago)
(HTM) web link (cohost.org)
(TXT) w3m dump (cohost.org)
| sritchie wrote:
| Here's some Clojure code I wrote for the Emmy computer algebra
| system that implements polynomial interpolation with a few
| different algorithms described in Numerical Recipes:
|
| https://github.com/mentat-collective/emmy/blob/main/src/emmy...
|
| I discovered while writing this that I could express each of
| these algorithms as folds that consumed a stream of points,
| accumulating a progressively higher order polynomial.
|
| Here's the same sort of thing but for rational function
| interpolation:
| https://github.com/mentat-collective/emmy/blob/main/src/emmy...
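That fold structure can be sketched in plain Python (a hedged sketch of the idea, not the Emmy code; `add_point` and `newton_eval` are illustrative names): each fold step consumes one point, evaluates the current Newton-form polynomial there, and appends one new coefficient, raising the degree of the interpolant by one.

```python
from functools import reduce

def add_point(state, point):
    """One fold step: extend a Newton-form interpolant by one point.
    state = (xs, coeffs). The new coefficient is the residual at the
    new x divided by the nodal product prod_{j}(x - xs[j])."""
    xs, coeffs = state
    x, y = point
    p, prod = 0.0, 1.0
    for xi, ci in zip(xs, coeffs):   # evaluate current polynomial at x
        p += ci * prod
        prod *= x - xi               # prod ends as prod_j (x - xs[j])
    return xs + [x], coeffs + [(y - p) / prod]

def newton_eval(xs, coeffs, x):
    """Evaluate the Newton-form polynomial at x."""
    p, prod = 0.0, 1.0
    for xi, ci in zip(xs, coeffs):
        p += ci * prod
        prod *= x - xi
    return p

points = [(0.0, 1.0), (1.0, 2.0), (2.0, 5.0), (3.0, 10.0)]  # y = x^2 + 1
xs, coeffs = reduce(add_point, points, ([], []))
print(newton_eval(xs, coeffs, 1.5))  # 3.25
```

Each step costs O(k) for the k points consumed so far, O(n^2) in total, which matches Newton's divided-difference scheme.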
| foooobaba wrote:
| Related: the Fast Fourier Transform (FFT) can be used for
| polynomial interpolation in O(n log n) time (if you get to choose
| the points, and use "complex roots of unity"), where n is the
| degree of the polynomial, and it is used as a step in O(n log n)
| polynomial multiplication as well.
|
| This video covers it well: https://youtu.be/h7apO7q16V0
|
| Also, chapter 30 of CLRS:
| https://elec3004.uqcloud.net/ebooks/CLRS%20(2nd%20Ed)%20-%20...
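The multiplication trick can be sketched in a few lines of NumPy (illustrative sketch, not CLRS code): evaluate both polynomials at the roots of unity with the FFT, multiply pointwise, and interpolate back with the inverse FFT.

```python
import numpy as np

def poly_mul_fft(a, b):
    """Multiply two polynomials (coefficient lists, lowest degree first)
    in O(n log n): FFT evaluates each at the n-th complex roots of unity,
    pointwise multiplication combines them, and the inverse FFT
    interpolates the product's coefficients back."""
    n = len(a) + len(b) - 1          # coefficient count of the product
    fa = np.fft.fft(a, n)            # zero-pads to length n automatically
    fb = np.fft.fft(b, n)
    return np.fft.ifft(fa * fb).real

# (1 + x) * (1 + 2x + x^2) = 1 + 3x + 3x^2 + x^3
result = poly_mul_fft([1.0, 1.0], [1.0, 2.0, 1.0])
print(np.round(result))  # [1. 3. 3. 1.]
```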
| selimthegrim wrote:
| Just as a side note, this is chapter 32 in CLR for those of you
| not with the times.
| yesenadam wrote:
| "CLRS" apparently refers to
|
| https://en.wikipedia.org/wiki/Introduction_to_Algorithms
|
| And CLR (in a sibling comment) refers to the 1990 first
| edition.
| rerdavies wrote:
| The most efficient implementation of Lagrange interpolators is
| O(N).
|
| Calculate the left product, Product(0..i-1), of each term in
| left-to-right order, and the right product, Product(i+1..N-1),
| of each term in right-to-left order. Your mileage may vary on a
| GPU.
| steppi wrote:
| I think you're talking about evaluating the interpolating
| polynomial at a single value, but others are talking about
| getting all of the coefficients of the interpolating
| polynomial.
| pkhuong wrote:
| GP is referring to
| https://en.wikipedia.org/wiki/Lagrange_polynomial
|
| Build a table left[j] of product-of-ratios for i < j, and
| another right[k] for i > k, each in linear-time, and
| combine the two to find the product-of-ratio for i != m in
| linear time as well.
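A minimal Python sketch of the two-table scheme just described (illustrative names; it assumes the per-node weights w_i = 1 / prod_{j != i} (x_i - x_j) are precomputed once, after which each evaluation is O(N)):

```python
import numpy as np

def make_weights(xs):
    # One-time O(N^2) setup: w_i = 1 / prod_{j != i} (x_i - x_j)
    n = len(xs)
    return [1.0 / np.prod([xs[i] - xs[j] for j in range(n) if j != i])
            for i in range(n)]

def lagrange_eval(xs, ys, ws, x):
    """Evaluate the Lagrange interpolant at x in O(N) using prefix
    and suffix products of (x - x_j)."""
    n = len(xs)
    left = [1.0] * n      # left[i]  = prod_{j < i} (x - x_j)
    right = [1.0] * n     # right[i] = prod_{j > i} (x - x_j)
    for i in range(1, n):
        left[i] = left[i - 1] * (x - xs[i - 1])
    for i in range(n - 2, -1, -1):
        right[i] = right[i + 1] * (x - xs[i + 1])
    return sum(y * w * l * r for y, w, l, r in zip(ys, ws, left, right))

xs, ys = [0.0, 1.0, 2.0], [1.0, 2.0, 5.0]   # samples of y = x^2 + 1
ws = make_weights(xs)
print(lagrange_eval(xs, ys, ws, 1.5))  # 3.25
```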
| steppi wrote:
| Right, and if we are trying to get all coefficients and
| are thus performing the polynomial multiplications
| symbolically, then performing the multiplications naively
| will take O(n^2) time but using FFT allows it to be done
| in O(n log n) time.
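For the symbolic version, here's a naive O(n^2)-per-term sketch (illustrative function name) that builds all coefficients with `np.convolve`; swapping those convolutions for FFT multiplications, organized in a divide-and-conquer product tree, is what the O(n log n) claim refers to.

```python
import numpy as np

def lagrange_coeffs(xs, ys):
    """All coefficients (highest degree first) of the interpolating
    polynomial. Each np.convolve multiplies in one (x - x_j) factor
    naively; an FFT-based multiply would speed up the large products."""
    n = len(xs)
    total = np.zeros(n)
    for i in range(n):
        term, denom = np.array([1.0]), 1.0
        for j in range(n):
            if j != i:
                term = np.convolve(term, [1.0, -xs[j]])  # times (x - x_j)
                denom *= xs[i] - xs[j]
        total += ys[i] * term / denom
    return total

coeffs = lagrange_coeffs([0.0, 1.0, 2.0], [1.0, 2.0, 5.0])
print(coeffs)  # [1. 0. 1.], i.e. x^2 + 1
```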
| duped wrote:
| What applications require polynomial interpolation with such
| high order that an FFT is worth it? Why not use sinc
| interpolation if you're computing an FFT anyway?
|
| sidenote: polynomial multiplication != interpolation
| photon12 wrote:
| Zero-knowledge proof generation, for one.
|
| Edit, see for example:
| https://www.zprize.io/prizes/accelerating-ntt-operations-on-...
| steppi wrote:
| That's pretty neat. I didn't even think of finite fields
| when I made my other comment. Another example where
| mathematical curiosities prove useful in cryptography.
| ninepoints wrote:
| FFT is almost never _not_ worth it. Even at small scales, you
| should be using the FFT (and you are a daily user of it in
| audio/video/image contexts).
| duped wrote:
| We have very different experiences then. "Small" scale to
| me is n < 64, which is right on the threshold of whether
| it's faster to run an n^2 algorithm that could instead be
| n log n with an FFT (for 1-dimensional vectors), depending
| on the algorithm, implementation, and hardware of course.
|
| I've benchmarked FFT and non-FFT based approaches for almost
| everything at that scale over the last ten years and gotten
| quite different results.
| enriquto wrote:
| An interesting case is two-dimensional interpolation, for
| example precise image resampling. In that case, thanks to
| separability, the FFT is still O(n log(n)), thus just as
| fast and much more precise than space-domain computations
| with 2D splines. Try rotating an image by 10 degrees 36
| times in a row with splines and with FFT-based methods.
| Even 7-tap splines (which is a _huge_ order) will utterly
| destroy the image contents, but it is just peanuts to the
| FFT.
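The 1-D analogue of that precision claim is easy to check: FFT zero-padding is exact (sinc) interpolation for a band-limited signal, which no fixed-order spline matches. A sketch (2-D rotation needs extra machinery, e.g. shear decompositions, which is not shown here):

```python
import numpy as np

# FFT-based resampling of a band-limited 1-D signal: zero-pad the spectrum
# and inverse-transform. n is odd to sidestep the Nyquist-bin ambiguity
# that even transform lengths introduce.
n, m = 25, 100
t = np.arange(n) / n
x = np.sin(2 * np.pi * 3 * t) + 0.5 * np.cos(2 * np.pi * 5 * t)

X = np.fft.rfft(x)
X_pad = np.zeros(m // 2 + 1, dtype=complex)
X_pad[: len(X)] = X                       # zero-pad the spectrum
y = np.fft.irfft(X_pad, n=m) * (m / n)    # rescale for the longer transform

t_fine = np.arange(m) / m
exact = np.sin(2 * np.pi * 3 * t_fine) + 0.5 * np.cos(2 * np.pi * 5 * t_fine)
print(np.max(np.abs(y - exact)))  # close to machine precision
```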
| steppi wrote:
| There's an aside in the linked section of CLRS mentioning
| that FFT-based polynomial multiplication can be used for fast
| evaluation of Lagrange interpolation. I'm not aware of
| applications, though; Lagrange interpolation isn't numerically
| stable, and there are better interpolation methods for large
| datasets. It's more of a mathematical curiosity.
| rerdavies wrote:
| Not sure why you would use FFTs to evaluate Lagrange
| interpolators, when Lagrange interpolators have a very
| pretty O(N) implementation.
| steppi wrote:
| Example please?
| rerdavies wrote:
| See Lagrange interpolators for the general form of the
| interpolator used in this article.
| https://en.wikipedia.org/wiki/Lagrange_polynomial
|
| There's a neater general formula for generating Lagrange
| interpolator polynomials of any order that avoids having to
| perform matrix inversion.
|
| Interpolated values can be calculated in O(N) time, so Lagrange
| interpolation is always faster than FFT interpolation. Calculate
| the left Product(0..i-1) of each term in left-to-right order; if
| you carry the value from term to term, you only need one
| multiplication per term. Then calculate the Product(i+1..N-1) of
| each term in right-to-left order.
|
| Lagrange interpolators with degrees that are close to Pi_N tend
| to be smoother, with less aliasing, when processing audio
| signals (N=3, 6, 9, 13, &c). Truncating the Lagrange polynomial
| is akin to windowing in FFT. Having an order that's roughly Pi_N
| ensures that the analogous window of the Lagrange polynomial has
| smaller tail values.
| alleycat5000 wrote:
| Check out https://www.chebfun.org/ if you want to see some crazy
| stuff you can do with polynomials!
|
| Also the book Approximation Theory and Approximation Practice
|
| https://epubs.siam.org/doi/book/10.1137/1.9781611975949
|
| is really great, by the author of Chebfun.
| IIAOPSW wrote:
| Approximate approximation theory practice.
| the__alchemist wrote:
| Great article, and I've been applying this recently in a few
| unrelated projects. Of note, researching this on the internet
| turns up some surprising red herrings that make it seem more
| complicated than it is: splines? Lagrange polynomials? Newton
| polynomials? etc. This gets especially messy when looking for
| interpolation in 2D and 3D. This article's approach, by
| contrast, is nice and apt.
|
| I came to the same conclusion as the article: the answer is to
| solve a system of linear equations with 4 unknowns.
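That "system with 4 unknowns" is a 4x4 Vandermonde solve. A minimal sketch, assuming the cubic-through-four-points setup (the sample points here are made up for illustration):

```python
import numpy as np

# Cubic through four points: solve the 4x4 Vandermonde system
#   a*x^3 + b*x^2 + c*x + d = y   for each sample (x, y).
xs = np.array([0.0, 1.0, 2.0, 3.0])
ys = np.array([1.0, 3.0, 2.0, 4.0])

A = np.vander(xs, 4)              # rows are [x^3, x^2, x, 1]
coeffs = np.linalg.solve(A, ys)   # [a, b, c, d]
print(np.allclose(np.polyval(coeffs, xs), ys))  # True: exact at the nodes
```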
| amelius wrote:
| How do you prevent overfitting?
| sritchie wrote:
| You don't! These techniques are for getting a perfect fit.
| unlikelymordant wrote:
| The polynomial will go through your samples exactly, but you
| might have arbitrarily high error between the samples
| (Runge's phenomenon). If your data is noisy at all, it is much
| better to fit a low-order polynomial to your data that
| minimises error. If your curve has no noise and you are free
| to choose the sample locations, minimax is the best way to
| guarantee minimum error between samples.
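Both points can be checked numerically (illustrative script, using `np.polyfit` at full degree as the exact interpolant): equispaced nodes blow up between samples on Runge's classic example, while Chebyshev nodes, which are close to the minimax choice, stay well behaved.

```python
import numpy as np

# Degree-14 interpolation of Runge's function f(x) = 1/(1 + 25 x^2)
# on [-1, 1], comparing node placements.
f = lambda x: 1.0 / (1.0 + 25.0 * x**2)
n = 15
x_fine = np.linspace(-1.0, 1.0, 1001)

errs = {}
for name, nodes in [
    ("equispaced", np.linspace(-1.0, 1.0, n)),
    ("chebyshev", np.cos((2 * np.arange(n) + 1) * np.pi / (2 * n))),
]:
    coeffs = np.polyfit(nodes, f(nodes), n - 1)   # exact fit at the n nodes
    errs[name] = np.max(np.abs(np.polyval(coeffs, x_fine) - f(x_fine)))

print(errs)  # equispaced error is orders of magnitude larger
```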
___________________________________________________________________
(page generated 2023-02-08 23:00 UTC)