[HN Gopher] Show HN: Chebyshev approximation calculator
___________________________________________________________________
Show HN: Chebyshev approximation calculator
Hi everyone, here's a web app I made that generates code for
efficiently approximating mathematical functions. This is useful
when performance matters more than perfect accuracy, for example in
embedded systems. The app uses Chebyshev expansions, which despite
their theoretical depth result in surprisingly compact and readable
code in practice. This code is generated for you and using it does
not require any knowledge of the underlying theory. Source code
and more info: https://github.com/stuffmatic/chebyshev-calculator
Author : stuffmatic
Score : 211 points
Date : 2024-10-04 12:09 UTC (10 hours ago)
(HTM) web link (stuffmatic.com)
(TXT) w3m dump (stuffmatic.com)
| jaymzcampbell wrote:
| This is really nicely done! Great work. I fell in love with just
| how efficient these can be and it explained a lot about why many
| of the trig and other mathematical functions implemented in 8-bit
| computers are the way they are.
|
| Here's a rather wonderful original document from the BBC Research
| Department (I had no idea that was a thing) back in 1969 going
| over just what makes them so great
| (https://downloads.bbc.co.uk/rd/pubs/reports/1969-10.pdf).
|
| If all you've come across are Taylor approximations, these things
| can seem a little like magic at first.
| stuffmatic wrote:
| Thank you! Yes it does feel a bit magical, both the
| mathematical aspects and the fact that it all boils down to a
| few lines of code in practice.
| hggigg wrote:
| Rather nice that. I like it.
|
| Doesn't handle division by zero very well though, e.g. f(x) = 1/x.
| That should probably be treated as undefined rather than as a bad
| expression.
| tgv wrote:
| You could also set the x_min to 0.001 or so.
| stuffmatic wrote:
| Or, since the function expression is just JavaScript,
| singularities can also be avoided like this:
|
|     x == 0 ? 1 : Math.sin(x) / x
| xioxox wrote:
| Excellent work. I wanted to do this recently, but it was
| surprisingly hard to find code to calculate an approximation.
| I've bookmarked it for when I next need a quick approximation for
| a function.
| stuffmatic wrote:
| Thanks for the kind words. I also found it surprisingly hard to
| find working Chebyshev approximation code. Hopefully this
| project will change that :)
| ArmedSandwich wrote:
| This is really nice. Wish I'd had it back in university; it would
| have made learning about Chebyshev expansions a lot more
| interesting.
| herodotus wrote:
| Lovely. In about 1974 I was paid to write a function, in IBM 360
| assembly language, to compute square roots. I was asked to make
| it as efficient as possible. I was in my last year as an
| undergraduate student. I used a Chebyshev approximation for the
| initial guess (after scaling the input to lie between 0 and 1),
| and then used two (or was it three) unrolled iterations of
| Newton's method to get the solution. First money I ever received
| for writing code!
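|
| Here's a minimal C sketch of that scheme (not the original 360
| assembly; the linear initial guess is just a placeholder where
| the Chebyshev fit would go):
|
|     #include <assert.h>
|     #include <stdio.h>
|
|     /* Scale the input into [0.25, 1), take a cheap polynomial
|        initial guess, then refine with two unrolled Newton steps
|        x <- (x + m/x) / 2. */
|     static double approx_sqrt(double a) {
|         assert(a > 0.0);
|         /* a = m * 4^e with m in [0.25, 1), so
|            sqrt(a) = sqrt(m) * 2^e */
|         double m = a, scale = 1.0;
|         while (m >= 1.0)  { m *= 0.25; scale *= 2.0; }
|         while (m < 0.25)  { m *= 4.0;  scale *= 0.5; }
|         /* rough linear fit to sqrt on [0.25, 1); a Chebyshev
|            approximation here would be more accurate */
|         double x = 0.6667 * m + 0.3542;
|         /* each Newton step roughly doubles the correct digits */
|         x = 0.5 * (x + m / x);
|         x = 0.5 * (x + m / x);
|         return x * scale;
|     }
|
|     int main(void) {
|         printf("%.9f (exact 1.414213562)\n", approx_sqrt(2.0));
|         return 0;
|     }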
| aghilmort wrote:
| love this -- still recall how eye-opening my first course on
| numerical solving was, appreciating yet again the sheer
| potential of compute
| ratedgene wrote:
| Any good books on this?
| aghilmort wrote:
| going back twenty years, "numerical analysis", "numerical
| methods", or sometimes "numerical optimization" are good
| keywords for finding currently trending texts on Amazon etc [1]
|
| alternatively, a subset of the numerical analysis Wikipedia
| page, or the algorithms covered in a given book, is a good seed
| filter [2]
|
| ideally a text covers some linear algebra (Gaussian elimination,
| the power method, Newton's root finding), plus issues like
| approximation by discretization, recursion to reduce solve
| iterations, and convergence problems caused by numerical
| instability from IEEE 754 floating-point limits etc
|
| 1. https://en.wikipedia.org/wiki/Numerical_analysis
|
| 2. https://chatgpt.com/share/67002e8c-c5c4-8008-83f9-a6ebc6
| 03ad...
| pclmulqdq wrote:
| I am working on one with some more modern suggestions on
| polynomial approximation, but I think it's still at least
| 3-6 months away.
| amadio wrote:
| Robin Green has some excellent material on the subject,
| which I am linking below.
|
| Faster Math Functions:
|
| https://basesandframes.files.wordpress.com/2016/05/fast-
| math...
|
| https://basesandframes.files.wordpress.com/2016/05/fast-
| math...
|
| Even faster math functions GDC 2020:
|
| https://gdcvault.com/play/1026734/Math-in-Game-
| Development-S...
| Fredkin wrote:
| Math.sin(x)/x (the sinc function) for 7 terms over [-3,3] gives
| coefficients c0...c6 that are all NaNs. Is this a bug?
|
| To work around it, I handled the x near zero case by just forcing
| the value to 1.0:
|
|     Math.abs(x) > 1e-8 ? Math.sin(x) / x : 1.0
| Majromax wrote:
| > Math.sin(x)/x (the sinc function) for 7 terms over [-3,3]
| gives coefficients c0...c6 that are all NaNs. Is this a bug?
|
| That wouldn't exactly be a bug. The code is undoubtedly
| calculating the Chebyshev coefficients by evaluating the
| function on a grid like
|
|     x_j = xmin + (xmax - xmin)/2 * (1 + cos(pi * j/(n - 1))),
|     j = 0 .. n-1
|
| If one of those grid points happens to be exactly 0, it will try
| to evaluate Math.sin(0)/0, giving the NaN.
|
| Another workaround is to have a slightly asymmetric range, such
| as [-3,+3.0000001]
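|
| A quick C sketch of such a grid (Chebyshev-Lobatto points here;
| the app's exact grid may differ) shows the middle node landing
| at, or within a rounding error of, x = 0 for a symmetric range
| and an odd point count:
|
|     #include <math.h>
|     #include <stdio.h>
|
|     int main(void) {
|         const double xmin = -3.0, xmax = 3.0;
|         const int n = 7;
|         for (int j = 0; j < n; j++) {
|             double x = xmin + 0.5 * (xmax - xmin)
|                             * (1.0 + cos(M_PI * j / (n - 1)));
|             /* if the middle node lands exactly on 0, as it
|                evidently does in the app, sin(x)/x is 0/0 = NaN */
|             printf("x_%d = % .17g  sin(x)/x = %g\n",
|                    j, x, sin(x) / x);
|         }
|         return 0;
|     }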
| stuffmatic wrote:
| Yes, that is a bug, good catch. The app should show an error if
| the function is not defined at all of the Chebyshev nodes. As
| you've already discovered, it's easy to work around the issue
| for now.
| Zeetah wrote:
| Nice!
|
| I'd like to generate a Chebyshev approximation for a set of X, Y
| sensor values. Any hints on how to modify your code to do that?
| stuffmatic wrote:
| You could look into using the ChebyshevExpansion class
| directly. It takes as one of its arguments a callback that
| returns f(x) for a given x. In your case, f(x) would be your
| sensor values with some suitable interpolation. A more
| ambitious route is to add support for somehow importing tabular
| data into the app.
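|
| As a minimal sketch of the first route, f(x) could be a linear
| interpolation of the table (the values and helper below are made
| up for illustration, not the app's actual API):
|
|     #include <stddef.h>
|     #include <stdio.h>
|
|     static const double xs[] = {0.0, 1.0, 2.0, 3.0};
|     static const double ys[] = {0.10, 0.85, 1.70, 1.20};
|     #define N (sizeof xs / sizeof xs[0])
|
|     /* piecewise-linear interpolation, clamped at the ends */
|     static double sensor_f(double x) {
|         if (x <= xs[0]) return ys[0];
|         if (x >= xs[N - 1]) return ys[N - 1];
|         size_t i = 1;
|         while (xs[i] < x) i++;
|         double t = (x - xs[i - 1]) / (xs[i] - xs[i - 1]);
|         return ys[i - 1] + t * (ys[i] - ys[i - 1]);
|     }
|
|     int main(void) {
|         for (double x = 0.0; x <= 3.0; x += 0.5)
|             printf("f(%.1f) = %.3f\n", x, sensor_f(x));
|         return 0;
|     }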
| atum47 wrote:
| I've been wondering about something and I don't know if this is
| the place to ask it, but here it goes. I saw a video the other
| day about how the Nintendo 64 did not have the ability to
| calculate sine, so they used a lookup table from 0 to 2PI (with
| some clever trick to reduce the size of the table). Would it have
| been possible to train a NN and store the weights, or even fit a
| function and store the coefficients, to calculate sine and
| cosine?
| mandibles wrote:
| Neural networks often have trigonometric functions internally,
| so it would be massively more computation than necessary.
|
| If you have a few spare CPU cycles, a hybrid approximation
| could start with a sparse lookup table of values as the initial
| guess for a few rounds of a numerical approximation technique.
| Or you just store the first few coefficients of a polynomial
| approximation (as in the OP's work).
| EdgeExplorer wrote:
| Obviously you could train some kind of neural net to calculate
| any function, but this would never make sense for a well-known
| function like sine. Neural nets are a great solution when you
| need to evaluate something that isn't easy to analyze
| mathematically, but there are already many known techniques for
| calculating and approximating trigonometric functions.
|
| Training a neural net to calculate sines is like the math
| equivalent of using an LLM to reverse a string. Sure, you
| *can*, but the idea only makes sense if you don't understand
| how fundamentally solvable the problem is with a more direct
| approach.
|
| It's always worth checking whether mathematicians already have a
| solution to a problem before reaching for AI/ML techniques.
| Unfortunately, a lot of effort is probably being spent these
| days on AI/ML approaches to problems that have a known,
| efficient, maybe even provably optimal solution that developers
| just don't know about.
| kevin_thibedeau wrote:
| The usual space-saving trick is to have a table from 0 to PI/2
| and use two additional index bits to generate the other three
| quadrants.
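|
| A small C sketch of that trick (table size and Q15 scaling are
| illustrative choices):
|
|     #include <math.h>
|     #include <stdint.h>
|     #include <stdio.h>
|
|     #define QBITS 8
|     #define QSIZE (1 << QBITS)
|
|     /* sin on [0, pi/2] in Q15; QSIZE + 1 entries so the peak
|        at pi/2 is addressable */
|     static int16_t quarter[QSIZE + 1];
|
|     static void init_table(void) {
|         for (int i = 0; i <= QSIZE; i++)
|             quarter[i] = (int16_t)lround(
|                 32767.0 * sin(M_PI * 0.5 * i / QSIZE));
|     }
|
|     /* phase: full circle in 10 bits
|        (2 quadrant bits + QBITS index bits) */
|     static int16_t sine_lookup(uint16_t phase) {
|         uint16_t quad = (phase >> QBITS) & 3;
|         uint16_t idx = phase & (QSIZE - 1);
|         if (quad & 1) idx = QSIZE - idx; /* mirror quadrants 1, 3 */
|         int16_t v = quarter[idx];
|         return (quad & 2) ? (int16_t)-v : v; /* negate lower half */
|     }
|
|     int main(void) {
|         init_table();
|         for (int p = 0; p < 1024; p += 128)
|             printf("phase %4d -> %6d\n", p, sine_lookup(p));
|         return 0;
|     }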
| CamperBob2 wrote:
| Take a look at CORDIC if you aren't familiar with it; that was
| a common trig hack back in the day, and still sees some use in
| the embedded space.
|
| Neural nets can be useful when you have samples of a function
| but no idea how to approximate it, but that's not the case
| here.
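|
| For reference, here's a floating-point sketch of CORDIC's
| rotation mode (real embedded versions use fixed point and a
| precomputed arctan table; this just shows the idea):
|
|     #include <math.h>
|     #include <stdio.h>
|
|     /* converges for angles in roughly [-pi/2, pi/2] */
|     static void cordic_sincos(double theta, int iters,
|                               double *s, double *c) {
|         double x = 1.0, y = 0.0, z = theta;
|         double pow2 = 1.0; /* 2^-i */
|         double gain = 1.0; /* ~1.6468 in the limit */
|         for (int i = 0; i < iters; i++) {
|             double d = (z >= 0.0) ? 1.0 : -1.0; /* drive z to 0 */
|             double xn = x - d * y * pow2;
|             double yn = y + d * x * pow2;
|             z -= d * atan(pow2); /* table lookup in fixed point */
|             x = xn;
|             y = yn;
|             gain *= sqrt(1.0 + pow2 * pow2);
|             pow2 *= 0.5;
|         }
|         *c = x / gain; /* undo the accumulated gain */
|         *s = y / gain;
|     }
|
|     int main(void) {
|         double s, c;
|         cordic_sincos(0.7, 24, &s, &c);
|         printf("cordic: %f %f   libm: %f %f\n",
|                s, c, sin(0.7), cos(0.7));
|         return 0;
|     }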
| magicalhippo wrote:
| A neural network is essentially just a curve fitter, so yeah.
| You might find this[1] video illuminating.
|
| The main strength of a neural network comes into play when
| there's a lot of different inputs, not just a handful. For the
| simpler cases like sin(x) we have other tools like the one
| posted here.
|
| [1]: https://www.youtube.com/watch?v=FBpPjjhJGhk _But what is a
| neural network REALLY?_
| orlp wrote:
| I've had good results in the past with sollya:
| https://www.sollya.org/.
|
| Note: results. The software itself is a bit of a pain to use.
| richrichie wrote:
| Chebyshev polynomials are so powerful and versatile (in
| approximation) that people assume they're a too-good-to-be-true
| scam and don't use them.
|
| One's first go-to method should be Chebyshev, with neural nets
| used as a last resort.
| anonzzzies wrote:
| Good memories of the 80s/90s, doing this by hand for demos and
| games; the extra constraint was 8/16-bit hardware.
| Rayhem wrote:
| Obligatory callout to Nick Trefethen (et al.)'s Chebfun[1] which
| extends this stuff in just about every direction you can think
| of. 'Chebfuns' are to functions as floats are to actual
| mathematical numbers. It's some seriously impressive software.
|
| [1]: https://www.chebfun.org
| roger_ wrote:
| Great work, looks useful!
|
| Any chance you can add a rational function version?
| sfpotter wrote:
| Also pretty easy:
|
|     import numpy as np
|
|     p = np.polynomial.Chebyshev.interpolate(
|         f, degree, domain=(xmin, xmax))
|     # insert your code to print out some C code
|
| Also strongly recommend some basic familiarity with the theory.
| Approximating `Math.abs(x)` to even a few digits of uniform
| accuracy on any interval containing 0 requires tens if not
| hundreds of thousands of coefficients.
| lainga wrote:
| what does the
|
|     # insert your code to print out some C code
|
| mean?
| sfpotter wrote:
| For instance, maybe I want to define an array of these
| coefficients in C.
|
| I could do:
|
|     print('double coef[] = {')
|     for c in p.coef:
|         print(f' {c:0.16g},')
|     print('};')
|
| and copy-paste it wherever I need.
| mgaunard wrote:
| Why not evaluate polynomials using the Horner or Estrin methods?
| gjm11 wrote:
| With either of those, you're still representing your polynomial
| as a combination of powers: 1, x, x^2, x^3, x^4, etc.
|
| For many purposes it's _much_ better to represent a polynomial
| as a combination of Chebyshev polynomials: 1, x, 2x^2-1,
| 4x^3-3x, etc.
|
| (Supposing you are primarily interested in values of x between
| -1 and +1. For other finite intervals, use Chebyshev
| polynomials but rescale x. If x can get unboundedly large,
| consider whether polynomials are really the best representation
| for the functions you're approximating.)
|
| Handwavy account of why: Those powers of x are uncomfortably
| _similar_ to one another; if you look at, say, x^4 and x^6,
| they are both rather close to 0 for smallish x and shoot up
| towards 1 once x gets close to +-1. So if you have a function
| whose behaviour is substantially unlike these and represent it
| as a polynomial, you're going to be relying on having those
| powers largely "cancel one another out", which means e.g. that
| when you evaluate your function you'll often be representing a
| smallish number as a combination of much larger numbers, which
| means you lose a lot of precision.
|
| For instance, the function cos(10x) has 7 extrema between x=-1
| and x=+1, so you should expect it to be reasonably well
| approximated by a polynomial of degree not too much bigger than
| 8. In fact you get a kinda-tolerable approximation with degree
| 12, and the coefficients of the best-fitting polynomial _when
| represented as a combination of Chebyshev polynomials_ are all
| between -1 and +1. So far, so good.
|
| If we represent the same function as a combination of powers,
| the odd-numbered coefficients are zero (as are those when we
| use the Chebyshev basis; in both cases this is because our
| function is an even function -- i.e., f(-x) = f(x)), but the
| even-numbered ones are now approximately 0.975, -4.733,
| 370.605, -1085.399, 1494.822, -994.178, 259.653. So we're
| representing this function that takes values between -1 and +1
| as a sum of terms that take values in the thousands!
|
| (Note: this isn't actually exactly the best-fitting function; I
| took a cheaty shortcut to produce something similar to, but not
| quite equal to, the minimax fit. Also, I make a lot of mistakes
| and maybe there are some above. But the overall shape of the
| thing is definitely as I have described.)
|
| Since our coefficients will be stored only to some finite
| precision, this means that when we compute the result we will
| be losing several digits of accuracy.
|
| (In this particular case that's fairly meaningless, because
| when I said "kinda-tolerable" I meant it; the worst-case errors
| are on the order of 0.03, so losing a few places of accuracy in
| the _calculation_ won't make much difference. But if we use
| higher-degree polynomials for better accuracy and work in
| single-precision floating point -- as e.g. we might do if we
| were doing our calculations on a GPU for speed -- then the
| difference may really bite us.)
|
| It also means that if we want a lower-degree approximation
| we'll have to compute it from scratch, whereas if we take a
| high-degree Chebyshev-polynomial approximation and just
| truncate it by throwing out the highest-order terms it usually
| produces a result very similar to doing the lower-degree
| calculation from scratch.
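|
| For completeness, here's a minimal C sketch of evaluating a
| polynomial stored in the Chebyshev basis via the Clenshaw
| recurrence, including the rescaling from [xmin, xmax] to
| [-1, 1] mentioned above (the coefficients are placeholders):
|
|     #include <stdio.h>
|
|     static double cheb_eval(const double *c, int n, double x,
|                             double xmin, double xmax) {
|         /* map x in [xmin, xmax] to t in [-1, 1] */
|         double t = (2.0 * x - xmin - xmax) / (xmax - xmin);
|         double b1 = 0.0, b2 = 0.0;
|         for (int k = n - 1; k >= 1; k--) {
|             double bk = 2.0 * t * b1 - b2 + c[k];
|             b2 = b1;
|             b1 = bk;
|         }
|         return t * b1 - b2 + c[0]; /* sum of c[k] * T_k(t) */
|     }
|
|     int main(void) {
|         /* T_2(t) = 2t^2 - 1, so these coefficients encode t^2 */
|         const double c[] = {0.5, 0.0, 0.5};
|         printf("%f\n", cheb_eval(c, 3, 0.5, -1.0, 1.0));
|         /* prints 0.250000 */
|         return 0;
|     }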
| hwc wrote:
| Chebyshev is black magic, and I say that even having seen the
| derivation in a graduate-level course.
___________________________________________________________________
(page generated 2024-10-04 23:00 UTC)