[HN Gopher] Boycott Numerical Recipes (2007)
___________________________________________________________________
Boycott Numerical Recipes (2007)
Author : arthurmorgan
Score : 145 points
Date : 2022-04-03 08:06 UTC (14 hours ago)
(HTM) web link (mingus.as.arizona.edu)
(TXT) w3m dump (mingus.as.arizona.edu)
| spekcular wrote:
| Issues with the license aside, there are substantial problems
| with the core algorithmic material of the book. The provided
| codes and approaches are often just plain incorrect.
|
| Quoting from a review: "The authors of Numerical Recipes were not
| specialists in numerical analysis or mathematical software prior
| to publication of this book and its software, and this deficiency
| shows WHENEVER WE TAKE A CLOSE LOOK AT A TOPIC in the book
| [editor's emphasis]. The authors have attempted to cover a very
| extensive range of topics. They have basically found `some' way
| to approach each topic rather than finding one of the best
| contemporary ways. In some cases they were apparently not aware
| of standard theory and algorithms, and consequently devised
| approaches of their own. The MEDFIT code of section 14.6 is a
| particularly unfortunate example of this latter situation.
|
| "One should independently check the validity of any information
| or codes obtained from `Numerical Recipes'...."
|
| Source: https://www.uwyo.edu/buerkle/misc/wnotnr.html
| bitwize wrote:
| > These conditions are so restrictive that of course everybody
| breaks them, often without knowing it. Thus, the NR license makes
| itself a farce.
|
| It doesn't necessarily make the license a farce. An Oracle-like
| business model can be established by auditing the users of the NR
| library and assessing fees for license violations, including the
| inadvertent ones.
| enriquto wrote:
| The GSL library is indeed great! It provides a free
| implementation of all methods described in NR, and _many_ more.
| The source code of GSL is somewhat readable, but I find NR still
| very helpful for the textual descriptions of the methods.
|
| Also, the venerable ccmath library by the late Daniel Atkinson is
| an extraordinary jewel of readability, simplicity, portability
| and performance among numerical codes.
| [deleted]
| NelsonMinar wrote:
| It's worth putting this book in historical context. It was first
| published in 1986. Numerical programming like this was very much
| a black art and the book was a well-researched, documented,
| usable set of algorithms with (mostly) working code. Open source
| code was a very new idea (FSF was only established a year before,
| and while there are earlier precedents like BSD source it was all
| very new.) Source code distributions were awkward; absolutely
| nothing like github of course, just a few FTP servers which maybe
| you could get code from if you knew they existed and had Internet
| access. Typing in code from a book seemed perfectly reasonable.
| B1FF_PSUVM wrote:
| > Source code distributions were awkward;
|
| Tapes by mail was a thing ...
| gnufx wrote:
| To support the sibling: If we're talking history, the
| assumption in the early 80s was that software was shared, at
| least in the technical computing circles I lived in and to the
| extent it was portable enough. We were suspicious of people who
| kept code under wraps -- with good reason in some cases, where
| data was dubiously reproduced post hoc; that was an early lesson.
| Free Software per se was a reaction to that culture being
| eroded around rms earlier than I saw it, and it's basically
| what I've done since I started.
|
| I'm not convinced about "black art" given, for instance, the
| Harwell library and later NAG, although they weren't free
| software. (I knew about Ian Grant's work as an undergraduate,
| though he didn't teach me.) Later I knew how to get code from
| Netlib by email before we had general trans-Atlantic network
| access.
|
| I found Nick Higham has a soft spot for the undergraduate
| textbook we had, from rather earlier, Acton's Numerical Methods
| that Work.
| kragen wrote:
| Open source code was a very new idea, perhaps going back no
| further than the first SHARE meeting 31 years earlier in 01955,
| but imposing legal restrictions on source code distribution was
| an even newer idea.
|
| (But what about open-source numerical methods? Well, EISPACK
| had been published in 01974, 12 years before NR came out.)
|
| Typing in code from a book _was_ perfectly reasonable.
|
| The issue today, though, is not what restrictions the book's
| copyright holders imposed, or purported to impose, in 01986.
| The issue is the restrictions they impose _today_.
| Aardwolf wrote:
| > 01974
|
| People from the year 100000 might be offended by this
| notation. But in seriousness, what is the extra 0 for? Even
| without it, 1974 and 10000 are perfectly distinguishable, and
| good methods for storing long dates on computers use binary
| formats, so the number of ASCII decimal digits doesn't matter
| there.
| sitkack wrote:
| It is for Y10k compliance:
| https://longnow.org/ideas/02013/12/31/long-now-years-five-
| di...
| [deleted]
| throwaway81523 wrote:
| There is no date on this but it is pretty old, since I remember
| the kerfuffle from probably 10+ years ago. I don't know if the NR
| license was ever adjusted but the software was nothing special
| anyway. I did learn a lot from the books, from a semi-neophyte
| perspective. I think serious numerics experts pooh-pooh them
| though.
| pletnes wrote:
| I always had the impression that NR was about getting a broad
| overview of the field. I imagine each chapter covers at least a
| whole field of numerical research. If you're working on
| applications of numerics, it is indeed important to know what's
| available out there. The NR code itself I've never used,
| really.
| throwaway81523 wrote:
| It's more of an intro level book, aimed at scientists rather
| than numerics geeks or people into theoretical math. So it
| does a good job of presenting things simply and in ways that
| don't require navigating too many fine points. That approach
| mostly works, though it leaves openings for things to go
| wrong. The topics (root finding, integration, etc.) are
| similar to what you'd see in an undergrad numerics course. It
| specifically doesn't have anything about PDE's, since it
| defines that as the boundary between introductory and
| advanced topics.
| nereye wrote:
| FWIW, there is a whole chapter (17 in the first edition, 20
| in the third/current edition, don't have a second edition
| in front of me but would be surprised if it didn't have
| one) on Partial Differential Equations. It runs 50+ pages in
| the first edition and 70+ in the third.
|
| But I take your point, the chapter does admit to being
| intended as "the briefest possible useful introduction" and
| notes that the topic would require another volume entirely
| dedicated to it.
| jwilk wrote:
| HTTP response headers say:
|
|     Last-Modified: Thu, 10 Aug 2006 17:08:26 GMT
| crispyambulance wrote:
| I don't get what the big deal is. I remember this book. It was
| useful back in the day, but has anyone ever actually been
| pinched as an NR license violator? I expect some orgs paid for
| licenses; it's not an exorbitant amount, and it probably
| amounted to little more than pocket change for the
| professor/author over the years. Is the guy lawsuit-crazy? I
| didn't see evidence of that.
|
| If some grad student, postdoc or independent uses it and
| doesn't quite follow the license terms, I expect nothing will
| happen. Though they would probably do well to update their code
| base, eventually, for technical as well as license reasons if
| they "scale up" beyond using the code for a paper or two. But
| the tone of the OP's call to "Boycott" is a bit over the top,
| RMS-style, and not in a good way.
| sam36 wrote:
| Heh, I haven't seen or read NR but added it to my wishlist after
| watching Terry Davis mention it [at 8:15]
| https://www.youtube.com/watch?v=25hLohVZdME Seemed to work well
| enough for him.
| moonbug wrote:
| I really can't imagine anyone finding NR now, except as a dusty
| relic on a lab office shelf.
| greenyoda wrote:
| The current license can be found here:
| http://numerical.recipes/aboutNR3license.html
| forgotpwd16 wrote:
| Seems nr.com is no more. Wonder why they decided (or should I
| say how much they were offered) to switch away from such a
| premium domain.
| nxpnsv wrote:
| I used NR for a Gaussian elimination in FORTRAN back in the 90s.
| Next time I needed NR, I just used GSL. Nowadays I mostly stick
| to numpy...
| magicalhippo wrote:
| What are some good, current sources for learning about and
| implementing numerical algorithms like those in Numerical
| Recipes?
|
| I get that for most just using Numpy, GSL or similar library
| would be sufficient, but what if you really want to implement
| them yourself?
| owlbite wrote:
| The main problem is that each subsection of a chapter in NR is
| really a subfield in and of itself, and adequately describing a
| decent modern implementation takes at least one book, if not
| multiple.
|
| First make sure you're familiar with the background issues in
| numerical analysis/computation: IEEE floating point and its
| foibles, numerical stability, basic error analysis, etc.
|
| Figure out which algorithm you're interested in, find a couple of
| open source implementations (academics generally generate a
| vast array of these, though many may become abandonware), look
| at any papers they are associated with/mention and read those.
| Now you're ready to start writing.
|
| Once you have a basic implementation, most fields have a set of
| standard problems they benchmark on. Find those and see how you
| compare on numerical accuracy and performance against the
| libraries you looked at before.
| lambda wrote:
| > First make sure you're familiar with the background issues
| in numerical analysis/computation: IEEE floating point and its
| foibles, numerical stability, basic error analysis, etc.
|
| So, what's the book for learning that?
| [deleted]
| Fronzie wrote:
| The Eigen library (https://eigen.tuxfamily.org) has great
| attention to numerical robustness and accuracy in its
| implementation. The documentation is good, but for the
| numerical discussions, you'd have to read the source code.
| buescher wrote:
| There was a numerical analyst at JPL back in the day who
| absolutely savaged NR. Remember, the kind of engineering they do
| at JPL has to be correct, not just plausible. Also, noncommercial
| alternatives to things like Matlab (octave, the python interfaces
| to Netlib libraries, etc) weren't as mainstream then, and science
| and engineering research had a bigger "get a book and roll your
| own" ethic.
| https://web.archive.org/web/19990202153232/http://math.jpl.n...
|
| I've seen it breed horrors - "senior" people instructing interns
| to implement its matrix inversion routine line-by-line in Matlab.
| Today it would be Python.
|
| Oh, and here's a list of alternatives, from the heyday of NR:
| https://web.archive.org/web/19990202141044/http://math.jpl.n...
| B1FF_PSUVM wrote:
| I just like this phrase from the Usenet comments in the first
| link:
|
| > Their code has the implicit assumption that all elliptic
| problems have homogeneous Dirichlet boundary conditions!
| willis936 wrote:
| IANA numerical expert, but I have written a lot of MATLAB in the
| past decade.
|
| There are NR algorithms not present in MATLAB that are very
| useful. When trying to find the index nearest to a value in a
| very large monotonically increasing vector, you could use
| interp1, but a binary search is much faster. There is no binary
| search function in MATLAB, and the NR-C binary search algorithm
| is lightning fast when reimplemented in MATLAB. If you want to
| find many nearest indexes at once, interp1 is likely faster.
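| To make that concrete, here is a minimal C++ sketch in the
| spirit of NR's bisection "locate" routine (names and edge
| handling are illustrative, not NR's exact code):
|
|     #include <cmath>
|     #include <cstddef>
|
|     // Given x[0..n-1] monotonically increasing, return the index
|     // of the element nearest to t: bracket t with a bisection
|     // search, then pick the closer endpoint.
|     std::size_t nearest_index(const double* x, std::size_t n, double t) {
|         std::size_t lo = 0, hi = n - 1;
|         while (hi - lo > 1) {                  // shrink bracket [lo, hi]
|             std::size_t mid = (lo + hi) / 2;
|             if (x[mid] <= t) lo = mid;
|             else             hi = mid;
|         }
|         return (std::fabs(t - x[lo]) <= std::fabs(x[hi] - t)) ? lo : hi;
|     }
|
| Each lookup touches O(log n) elements, which is why it beats a
| linear scan (or interp1's machinery) for a single query into a
| very large vector.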
| topaz0 wrote:
| Note that the article is not about denigrating the algorithms,
| and is instead entirely about the license restrictions.
| willis936 wrote:
| I know, but a lot of the HN comments are about the algorithm quality.
| It seems like a moot point because even if the algos were
| obsolete, the license issues remain.
|
| I don't think the algos are obsolete though.
| chaboud wrote:
| I've used Numerical Recipes in C heavily since 1996, but I've
| never used the code (which isn't great). Instead, I've used it as
| a reference to understand approaches and algorithms. Any code
| I've made using that book has been my own.
|
| If you know how to code and want a leg up on understanding an
| algorithm, NR is a very convenient structural cheat sheet.
| owlbite wrote:
| But its algorithms are often incorrect, inefficient and
| numerically unstable. Just use a library written by an expert
| in whatever your area of interest is.
| toast0 wrote:
| Isn't this book essentially a library written by experts[1]
| in numerical processing?
|
| You can't _just use_ a library by experts. You have to know
| or learn how to evaluate said library for correctness,
| efficiency, and stability (numerical and operational). If you
| know how to do that, you may be able to use a flawed library
| as a tool to get the shape of your processing quickly and
| fill in the rest of the details yourself.
|
| [1] They've published a major book through three editions and
| the book cover calls them leading scientists.
| lupire wrote:
| Don't tell me what to boycott unless it's the worst. Tell me
| what's good.
| jlebar wrote:
| I remember being disappointed by this book.
|
| I wanted to read about algorithms for choosing a random float in
| [0,1], and IIRC its only suggestion was randint()/INT_MAX. This
| is wrong in part because the smallest nonzero value you'll ever
| choose is 1/INT_MAX, which is much larger than the smallest
| nonzero float.
|
| Is there a better book out there? I was hoping for something
| fabulous like Hacker's Delight but for floating point.
| joppy wrote:
| Because of how floating-point numbers are distributed, you
| should expect a random number drawn uniformly from [0,1] to
| basically never hit the smallest floating-point number: hitting
| something in the range [0, 2^-n] should happen with probability
| 2^-n.
|
| I agree there is much more one could want out of a uniform
| random float generator than just k/N where k is an integer drawn
| uniformly from {0, 1, ..., N}, but hitting arbitrarily small
| floating point numbers is not on this list.
|
| If you are intending to sample numbers in the range [0,1] such
| that each float occurs with equal probability then that's fine,
| but it's certainly not uniformly random on [0,1].
| hansvm wrote:
| Steelmanning their argument, an ideal output would consist of
| every possible float in [0,1] occurring with an appropriate
| probability -- some of those vanishingly unlikely. Kind of
| like how randint() should be capable of returning every
| possible integer rather than, say, every multiple of 64.
|
| It's fine to counter that most of the time the difference
| wouldn't matter or that it might be problematic to compute
| for some reason or another, but I don't think it's reasonable
| to critique the idea on the grounds of the result not being
| uniform when they didn't actually ask for each float to have
| equal probability.
| torotonnato wrote:
| You can apply the core ideas of Hacker's Delight to floats too.
| Your example problem, taken from [0]:
|
|     float frand(int *seed) {
|         union { float fres; unsigned int ires; } u;
|         seed[0] *= 16807;
|         u.ires = (((unsigned int)seed[0]) >> 9) | 0x3f800000;
|         return u.fres - 1.0f;
|     }
|
| You basically assemble a float by building a random mantissa
| and assigning the right exponent and sign bits.
|
| It's not portable, but it's a very neat trick!
|
| [0] https://iquilezles.org/www/articles/sfrand/sfrand.htm
| orlp wrote:
| It's a neat trick, but it cannot generate all floating point
| values. See my other comment for that.
| prionassembly wrote:
| The source of disagreement is: his code generates floating
| point approximations of uniformly-distributed real values
| in [0,1]; not random floats greater than 0 and smaller than
| 1.
|
| This distinction has little to do with the subtlety of
| floats. You can e.g. generate numbers up to 100 with
|
| - 5d20 (5 dice of 20 sides),
|
| - 20d5 or even
|
| - 16d6+1d4.
|
| [Edit: the point of this exercise: these are different
| representations of the numbers up to 100.]
|
| But even assuming each die with D sides has equiprobable
| results with prob 1/D, these choices don't have the same
| probability distribution. To convince yourself of this,
| compare 1d12 (the probability of getting 1 is 1/12) with
| 2d6 (the probability of getting 1 is 0).
| torotonnato wrote:
| If you want max entropy and normally distributed values
| then, I think, there's no other way to go. Nice post, btw.
| orlp wrote:
| What do you mean by "normally distributed values"? Your
| reply also doesn't follow a normal distribution by the
| way. Did you mean uniform? For uniform values there is a
| way to go and my other reply exactly describes how.
| torotonnato wrote:
| Of course, it's uniform. I always swap the two names
| randomly hehe. I understand your code. The difference
| with the snippet I posted (which, btw, is not mine) is
| that the difference between successive floats is always a
| constant, i.e. there is no rounding. It's fast, has 23
| bits of entropy and it's suited for constrained
| platforms/small demos [0], since it can be coded with
| very few asm instructions.
|
| I'm not saying that your solution doesn't have its merits;
| well done.
|
| [0] The author of the code is a known member of the
| demoscene.
| orlp wrote:
| Oh, I know very well who iq is :) And for a visual demo
| it might make perfect sense to sacrifice completeness of
| a solution for speed / code size. But to me, "pick a
| random number from {k/2^23 : 0 <= k < 2^23, k [?] N}" and
| "pick a random number uniformly at random from [0, 1)"
| are not the same.
|
| All operations (add/sub, mul/div, sin, exp, log, etc.) on
| IEEE 754 floats are defined using real arithmetic and then
| rounded to the nearest representable floating point value. I
| don't see why random number generation, at least as a
| _default_ before micro-optimizing, should be any different.
| torotonnato wrote:
| You are right, I was uncomfortable with the uneven
| density of the floats approaching zero, but it's just an
| inherent property of their representation. Have a nice
| day :)
| orlp wrote:
| I don't know about a better book, but I do know how to generate
| a random float in [0, 1] very efficiently and accurately. A
| 2007 paper describing the technique is Generating Pseudo-random
| Floating-Point Values by Allen B. Downey. I have implemented it
| here in Rust: https://gist.github.com/orlp/121bb95361cc74b88aff
| 79a7335a1a4.... This is for generating in [0, 1), but it is
| trivially changed to allow 1.0 by changing
|
|     while exponent < -127 || exponent > -1 {
|
| into
|
|     while exponent < -127 {
|
| It uses a single call to the RNG 255/256 = 99.61% of the time;
| if it has to make an extra RNG call, the chance it then needs
| another after that is 2^-32, ad infinitum, which is negligible.
| More calls to the RNG could be needed if the exponent it
| generated was out of bounds, which is also incredibly,
| incredibly unlikely (~2^-33 for [0, 1), ~2^-127 for [0, 1]).
| Thus the expected number of RNG calls is essentially just
| 255/256 + 2*1/256 ~= 1.00391.
|
| A friend of mine also made a blog post that includes another
| implementation and discusses it:
| https://lokathor.github.io/prng/.
|
| The difference between this technique and virtually all others
| out there on the internet is that this one is actually unbiased
| and can generate every single possible floating point value in
| [0, 1). Unbiased here means that the procedure simulates
| picking a _real_ number uniformly at random from [0, 1] (with
| infinite precision), and then rounding it to the closest
| floating point value. We don't have to use infinite RNG bits
| to generate our infinite precision real number because we only
| have to generate enough bits such that it is unambiguous which
| floating point number we round to. The actual number of RNG
| bits needed is constant in expectation (see above), but
| unbounded, because our real number could be arbitrarily close
| to the boundary between two floating point values.
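| For illustration, here is a C++ sketch of Downey's technique,
| favoring clarity over the single-RNG-call bit-packing the gist
| uses (rng32 is a stand-in for any uniform 32-bit generator; the
| subnormal tail, probability ~2^-126, is truncated):
|
|     #include <cmath>
|     #include <cstdint>
|     #include <random>
|
|     inline std::uint32_t rng32() {            // placeholder RNG
|         static std::mt19937 gen{std::random_device{}()};
|         return gen();
|     }
|
|     // Uniform float in [0, 1]: draw the exponent geometrically
|     // (each zero bit halves the probability), then 23 mantissa
|     // bits; a zero mantissa lies on the boundary between two
|     // binades, so bump it up half the time to round correctly.
|     float uniform_float01() {
|         int exponent = -1;
|         std::uint32_t bits;
|         while ((bits = rng32()) == 0) {       // 32 zero bits at once
|             exponent -= 32;
|             if (exponent <= -126) break;
|         }
|         while (bits != 0 && (bits & 1u) == 0 && exponent > -126) {
|             --exponent;                       // count trailing zeros
|             bits >>= 1;
|         }
|         if (exponent < -126) exponent = -126;
|         std::uint32_t mantissa = rng32() & 0x7FFFFFu;
|         if (mantissa == 0 && (rng32() & 1u))
|             ++exponent;                       // may yield exactly 1.0
|         return std::ldexp(float(0x800000u | mantissa), exponent - 23);
|     }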
| jwilk wrote:
| That can't be right:
|
|     // Returns k with probability 2^(k-1).
|
| Did you mean 2^(-k-1)?
| orlp wrote:
| Yes, woops, fixed.
| mattpharr wrote:
| FWIW that algorithm for uniform floating-point sampling in
| [0,1] is actually originally due to: Walker, Alastair J.
| "Fast generation of uniformly distributed pseudorandom
| numbers with floating-point representation." Electronics
| Letters 10 (1974): 533-534.
| raymondh wrote:
| The Python docs [1] include an implementation of Allen Downey's
| algorithm:
|
|     from random import Random
|     from math import ldexp
|
|     class FullRandom(Random):
|         def random(self):
|             mantissa = 0x10_0000_0000_0000 | self.getrandbits(52)
|             exponent = -53
|             x = 0
|             while not x:
|                 x = self.getrandbits(32)
|                 exponent += x.bit_length() - 32
|             return ldexp(mantissa, exponent)
|
| [1] https://docs.python.org/3/library/random.html#recipes
| orlp wrote:
| That's almost correct, but not quite (depending on your
| definition of correct). It fails to account for the problem
| that Allen Downey's paper so carefully took care of. I
| think it's best shown with an image (exaggerated to have a
| 2-bit mantissa and 2-bit exponent):
|
| https://i.imgur.com/5TerZkm.png
|
| Top is Python, bottom is correctly rounding to nearest
| float. Python's implementation does not round correctly to
| the nearest floating point value, it _floors_. This means
| it oversamples those floating point numbers with mantissa
| 0, and can not sample an inclusive range at all.
| ouid wrote:
| Choosing a random float doesn't mean very much without a
| distribution.
| pmoriarty wrote:
| Randomness is not a property of numbers but of distributions.
| raegis wrote:
| Actually I believe it is better to say randomness is a
| property of a _sequence_ of numbers. So a "random
| sequence" has members chosen independently of each other,
| but all from the same probability distribution. And random
| number generators create random sequences which follow a
| given distribution.
| lr1970 wrote:
| Floating point numbers are represented in a form
| mantissa*2^exponent and they are distributed non-uniformly over
| their dynamic range. The density of normalized 64-bit floats
| around zero is about 10^-320, while around one is about 10^-16.
| Therefore, do not expect to get the smallest positive float in
| sampling from a uniform distribution over [0,1]. There is a
| good tutorial about floating point numbers and their pitfalls
| written for computer scientists [0], where Figure 1 on page 7
| (page 3 of the PDF) shows how non-uniformly floating point
| numbers are distributed over the number line.
|
| [0]
| https://pages.cs.wisc.edu/~david/courses/cs552/S12/handouts/...
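| A small sketch (C++) makes that spacing concrete:
|
|     #include <cfloat>
|     #include <cmath>
|     #include <cstdio>
|
|     // Gap to the next representable double near the bottom of
|     // the normal range versus near 1.0.
|     int main() {
|         std::printf("gap above DBL_MIN: %g\n",
|                     std::nextafter(DBL_MIN, 1.0) - DBL_MIN);  // ~4.9e-324
|         std::printf("gap above 1.0:     %g\n",
|                     std::nextafter(1.0, 2.0) - 1.0);          // ~2.2e-16
|     }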
| Someone wrote:
| I think the OP knows that, but complains about the fact that
| _randint()/INT_MAX_ (which my mind autocorrects to
| _rand()/RAND_MAX_) takes on at most _RAND_MAX+1_ different
| values (that _may_ be as low as 32,768), and returns each one
| with equal probability, so that it returns zero 1 in
| (RAND_MAX+1) times, on average, and won't ever return most
| possible floats in _[0,1]_.
|
| Even if you decide that returning only _RAND_MAX+1_ different
| values is acceptable, I think you should divide the [0,1]
| interval into _RAND_MAX+1_ equal-sized intervals and return the
| floating point value at the center of each such interval. The
| given solution has both zero and one stand for a value in an
| interval of size _1/(2*RAND_MAX)_, but picks them with
| probability _1/RAND_MAX_, so it picks both of them way too
| often.
|
| Aside from that there's:
|
| - _rand()_, on many systems, is a very poor random number
| generator.
|
| - the API of _rand()_ is poor. Because there's only one global
| state, you can't run multiple independent generators, and it
| doesn't help you much as soon as you want to generate anything
| other than a discrete distribution with _RAND_MAX+1_ items (a
| number that is even implementation-dependent, so that portable
| code needs special care even if it may not be needed on most
| platforms). For example, because _RAND_MAX_ may be even, you
| can't even simulate a fair coin toss with a single call of
| _rand()_.
| shirleyquirk wrote:
| If you want a random number in [0,1] with a uniform
| distribution from N random bits, I don't see how you can expect
| an epsilon of less than 1/(2**N-1).
| adrianN wrote:
| Choosing a random float is not the same as choosing a random
| number.
| morelisp wrote:
| What applications call for a "random float in [0,1]" with
| distribution defined in the space of fp representation and
| not the usual kind?
| adrianN wrote:
| For example, when you want to fuzz some algorithm that uses
| floats, a random float might come in handy.
| morelisp wrote:
| In [0,1]?
|
| (I consider this answer nearly disingenuous. It's
| obviously not what OP was talking about.)
| orlp wrote:
| Sure, but who said N has to be sizeof(int)? And besides that,
| the distance between x and successor(x) is not constant
| throughout the range of [0, 1).
| lupire wrote:
| N is sizeof(float), which is conveniently the same as
| sizeof(int).
|
| > And besides that, the distance between x and successor(x)
| is
|
| That is approximately balanced by using random() int /
| MAXINT
| morelisp wrote:
| If you want to generate a uniform random float in [0,1), half
| your results have to be in the [0.5,1) range. That space is
| half as dense as the [0, 0.5) range. (And this applies again
| for [0.25,0.5) vs. [0, 0.25), etc, and so on. And this is also
| disregarding the whole space of denormalized floats.) So
| regardless of your method, you're going to have to ignore
| nearly all the floats in the range! (Or generate many more than
| 32 bits and use them non-uniformly to make a uniform output -
| see orlp's comment.)
|
| There are other problems with what you say their suggestion
| was, e.g. INT_MAX should be RAND_MAX, and you need to take some
| care with rounding/conversion, and inclusive vs. exclusive
| upper bound requires careful attention. But with any simple
| approach you just can't use most of the small float space, and
| so the division approach with the details done correctly is
| pretty common!
|
| Edit since rate-limited: orlp, I think we're only in
| disagreement about what qualifies as "very efficient". It's an
| efficient implementation of the more complex problem, but it's
| also much slower than the division. I wouldn't want it to be
| the default float generator for a general-purpose language -
| even most scientific modeling or statistical analysis code
| won't benefit from the increased output space. OP should think
| _very hard_ about how they intend to use values in that range
| before insisting they really need it - the division approach
| really is sufficient and NR is right to recommend it by
| default!
| orlp wrote:
| You can make this approach very efficient using bitwise
| operations, and the count trailing zeros instruction common
| on many CPUs. See my other comment for a reference to the
| paper describing the technique and a link to a low level
| implementation by me.
| bluenose69 wrote:
| It's been a _very_ long time since I've used NR, but it was
| pretty terrible code that was a poor competitor for NAG and IMSL.
|
| The discussion of algorithms, being informal and well-
| illustrated, was the high point of the books. The code, not so
| much.
|
| I had the Fortran book and so, when the C book came out, I bought
| that as well. The C code seemed to have been auto-generated
| from the Fortran code, and it was really quite terrible.
|
| Functions in the Fortran code had arguments for what I'd call
| "work arrays". There was some risk in that, because if the
| documentation told you to give space for 2N+1 floating point
| values and you gave 2N, you might get into problems. I don't
| think that was a real problem in practice, because people were
| accustomed to this general way of working and so they were
| careful.
|
| The C code took a different approach. The functions allocated and
| deallocated space as needed. And, since lots of functions called
| other functions for very simple things, there was a massive
| amount of memory juggling going on. This made the functions
| wildly slow. So, the point of the boycott was not terribly
| relevant, because I think pretty much everyone trying to use that
| C code just rewrote it, anyway.
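| A sketch of the contrast (C++, with hypothetical function names):
|
|     #include <cstddef>
|     #include <vector>
|
|     // Fortran style: the caller owns the scratch space, so a hot
|     // loop pays for the allocation once and reuses it.
|     double sum_sq_with_work(const double* a, std::size_t n,
|                             double* work) {
|         for (std::size_t i = 0; i < n; ++i) work[i] = a[i] * a[i];
|         double s = 0.0;
|         for (std::size_t i = 0; i < n; ++i) s += work[i];
|         return s;
|     }
|
|     // NR-in-C style: allocate and free the workspace on every
|     // call, which is what made deeply nested routines so slow.
|     double sum_sq_alloc(const double* a, std::size_t n) {
|         std::vector<double> work(n);   // fresh heap traffic per call
|         return sum_sq_with_work(a, n, work.data());
|     }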
| [deleted]
| Animats wrote:
| _I had the Fortran book and so, when the C book came out, I
| bought that as well. The C code seemed to have been auto-
| generated from the Fortran code, and it was really quite
| terrible._
|
| Oh, yes. The arrays in the C code were indexed from 1,
| following the FORTRAN code, with functions to convert the
| subscripts. Pointers to arrays were offset by one element size,
| so they didn't point to the array at all. They pointed to an
| address outside of the array.
|
| I rewrote parts of NR in C++, but ancient, 1990s C++, with pre-
| STL Microsoft collection classes.
| [deleted]
| jeffwass wrote:
| This 100%.
|
| 20+ years ago I took my NR class as part of my physics PhD
| program; it was a required course for all physics and astronomy
| students.
|
| I don't think anybody in the class was planning to use the
| actual recipes from the book as a drop-in replacement for their
| work, but it was to explain the methods behind the software we
| all used (Matlab in my group, IDL and IRAF for the astronomers,
| etc), so users can make the best choice for which curve fitting
| or regression or optimisation routines to call, and avoid
| potential pitfalls.
|
| To me it seems like an OS class that uses Minix to explain
| behind-the-scenes operating system functionality. Hardly anyone
| in the class will go on to use Minix as their primary OS in
| their future career. (I've not done any formal CS study, so
| forgive the analogy).
| amelius wrote:
| If you're looking for a better book, I can recommend:
|
| Matrix computations, by Golub and van Loan.
| FabHK wrote:
| Oh, the bible. It is a great and thorough book (maybe not ideal
| for beginners). What it covers is quite different from NR,
| though.
| haunter wrote:
| Huh, glad it's not about cooking, because I was ready to rant
| (ie recipes using 1 tablespoon instead of X grams).
| AussieWog93 wrote:
| Funny you mention that. I was ready to rant the opposite (ie,
| that recipes should not include exact measurements).
|
| I really, strongly believe that our obsession with measurement
| and repeatability is a net negative that has taught a whole
| generation to treat cooking like a standardised test rather
| than a creative form of human expression.
|
| You look at the older generations from cultures that value
| cooking and they simply don't give a fuck about what the recipe
| says to do.
| rnd420_69 wrote:
| Most people never have been and never will be interested in
| cooking as a creative outlet, and trying to convince them to
| do so is a highly arbitrary goal only fueled by your own
| subjective interests.
|
| The vast majority of people just want to make some shit to
| eat everyday. And precise measurements are great for that.
| adrianN wrote:
| Precise measurements make the problem of cooking something
| tasty feel much harder than it actually is.
| kergonath wrote:
| Nothing hard in putting stuff in a bowl over a scale
| until the scale says 750g. On the contrary, it's easier
| than trying to guess what the author had in mind when
| they wrote "a pinch" or "a handful".
| AussieWog93 wrote:
| >On the contrary, it's easier than trying to guess what
| the author had in mind when they wrote "a pinch" or "a
| handful".
|
| Again, I think you're missing the fact that whatever the
| author had in mind isn't "right". Do whatever feels right
| to you and as long as your senses are calibrated
| (there'll be a short period of adjustment while you learn
| how ingredients work) it'll come out better than if you'd
| measured and you'll enjoy it more too.
| teddyh wrote:
| > _Do whatever feels right to you and as long as your
| senses are calibrated (there 'll be a short period of
| adjustment while you learn how ingredients work) it'll
| come out better than if you'd measured and you'll enjoy
| it more too._
|
| You're missing that this is _a lot of work_ , and _far_
| from everybody is even remotely interested in doing that.
| Most people would rather be doing useful or entertaining
| things, not cooking the same thing a thousand times to
| discover how much salt "to taste" means. Do not assume
| that everybody's hobby ought to be cooking.
| adrianN wrote:
| For many people precise instructions imply that you need
| to be precise when executing them, i.e. if you deviate
| you ruined it. Most recipes however are not like that and
| the tolerances are actually huge.
| newaccount74 wrote:
| I would say it really depends. Sometimes exact quantities are
| necessary, sometimes they aren't.
|
| I hate that almost no recipe specifies salt in grams. How
| much salt fits in half a teaspoon is pretty arbitrary since
| it strongly depends on the type of salt. For many dishes it
| doesn't matter, since you just salt according to taste. But
| if you are making meatloaf, getting the amount of salt right
| is essential, and you can't really taste it unless you like
| tasting raw meat, so an amount in grams would be appreciated.
| creata wrote:
| > that recipes should not include exact measurements
|
| Wouldn't that make cooking much less accessible?
| greggsy wrote:
| It's tricky - people have different tastes, and ingredients
| may change from region to region.
|
| Furthermore, making people measure things with a scale
| rather than a cup adds significant effort and time in the
| scheme of things.
| kergonath wrote:
| It takes less time to get the scale out of the drawer
| than figuring out which bloody cup I have to use.
| AussieWog93 wrote:
| >figuring out which bloody cup I have to use
|
| Unless you're baking, you could literally use any cup you
| have in your house (yes, even a drinking glass!). Unless
| you're baking, it really doesn't matter, and that's the
| beauty of it.
| AussieWog93 wrote:
| I don't think so. You'll fuck up a few times
| catastrophically in the beginning, but very quickly get a
| feel for what works and what doesn't (for example, you can
| go crazy on the garlic without ruining a dish, but you need
| to be a bit more careful with salt).
|
| If anything, the belief that you must boil the potatoes for
| exactly 15 minutes is more likely to make you hate cooking
| than teach you how to enjoy it/excel at it.
| greggsy wrote:
| I was also confused, but about an apparent gripe against
| cooking in sequential order. To be fair, tbsp is universally
| acceptable, but the apparent ignorance of metric measures
| among US-centric authors is ludicrous.
|
| To continue this off-topic rant, recipes have been receiving a
| lot of attention lately, and the medium is well overdue for an
| overhaul to cut down on the fluff. A comprehensive wiki, with
| cooking markup (eg. Cooklang) would be amazing and would fill a
| long-overdue gap that hasn't been filled since Larousse
| Gastronomique was released. The idea that recipes are 'owned' by
| someone who happened to publish it first is crazy, but could
| nonetheless be addressed with an attribution license (eg. the
| new MIT-0?)
| sbrorson wrote:
| As somebody who came to scientific computing late in life, after
| spending years as a physicist then an engineer, I will step up to
| defend NR, if only for fun (and because I like the book for what
| it is).
|
| At least one commenter here dismissed NR, saying that it is
| equivalent to an undergraduate numerical analysis book. Well,
| duh! That's one of its strengths! It is a good book for an
| audience like me -- somebody with scientific background but with
| no formal numerical analysis training who needs to write some
| numerical code.
|
| The target reader is an engineer who needs to implement something
| specific. That's a very common scenario -- you are given the job
| of implementing some calculation as part of a larger project.
| You're not an expert in the particular mathematical discipline.
| You don't want to fool around trying to find an algo in some book
| since you don't know which of the zillions of numerical analysis
| books to look at. Also, most books will just give you a general
| algo, not ready-to-run code. Everybody knows how hard it can be
| to translate a wordy description of an algo into real code. If
| you had the budget you'd just hire somebody to do it for you. NR
| solves your problem -- you use the code and your problem is
| solved so you can move on to the next step of your project.
|
| A different commenter here suggested using Golub and Van Loan
| instead. That's laughable. First off, as others pointed out,
| their "Matrix Computations" book addresses numerical linear
| algebra exclusively. NR is a general-purpose numerical analysis
| book, so covers a much larger topic set. But secondly, Matrix
| Computations is difficult reading for non-mathematicians. Even if
| I needed to implement some sort of matrix decomposition, I would
| find it a pain to use that book as a source.
|
| (Yes, I am sure some testosterone-soaked HN commenter will now
| say that they read "Matrix Computations" in 3rd grade as a way to
| prove how smart they are. Allow me to roll my eyes now.)
|
| The real complaint made in the linked post involves the licensing
| terms of the code in NR. I agree the license stinks by today's
| standards. But the book was originally written back in the 1980s.
| My copy is from the early 1990s. Back then the GPL was basically
| unknown outside of a group of happy hackers at MIT. Software
| licensing certainly not a topic occupying mindspace of ordinary
| scientists or engineers. There was some DOS-based "freeware"
| circulating on 3.5" disks, but the whole "free (libre) software"
| movement was still in the future -- it needed the internet to
| become widespread in order to take off. Finally, I can imagine
| the NR authors wanted some sort of copyright or license which
| prevented wholesale copying of their work by folks who would try
| to profit off it. It's reasonable for them to have wanted to get
| some monetary reward for their hard work. Their license is
| probably an artifact of its time.
|
| In my travels I have seen situations where code used in
| commercial products incorporates this or that function presented
| in NR. For example, I saw the FFT algo used in a test widget. (I
| authored none of the copied code -- I simply observe it in
| projects done by others.) What's funny is that NR's FFT algo is
| not particularly great. However, the guys who needed an FFT
| probably didn't know that. They probably would not have cared
| anyway -- they just wanted a working solution and that's what
| they got from NR. I am sure that little violations of the NR
| license happen all the time. It may not be strictly legal, but I
| also see people making Xerox copies of copyrighted books and
| journal articles all the time too.
|
| Finally, regarding alternatives like GSL, ccmath, or whatever is
| available on netlib, those projects post-date NR. In any event,
| the internet is what made all those things widely available. I
| bought my copy of NR in a bookstore before that happened.
| GlenTheMachine wrote:
| Completely agree. You shouldn't go to NR for the code. You
| should go to it because, even decades after it was first
| published, it's still the only reasonably comprehensive text on
| scientific computing that doesn't assume you're in a graduate
| program in applied mathematics. And I say that as someone who
| actually took those graduate level applied mathematics classes.
| I can pick up NR and understand the basics of some algorithm
| within fifteen minutes, without hurting my brain.
|
| If there's a better book that is accessible to the masses I'd
| like to know about it. But I haven't found it yet.
| gnufx wrote:
| > Finally, regarding alternatives like GSL, ccmath, or whatever
| is available on netlib, those projects post-date NR.
|
| No, Netlib started in April 1985[1], though as a physicist or
| engineer you likely had access to NAG or something similar
| already. It must have made an impression on me as I still
| remember F02ABF... (I hasten to add I hit scientific computing
| early in life.)
|
| 1.
| https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.73...
| owlbite wrote:
| Don't roll your own numerics.
|
| There are normally many subtle gotchas that affect numerical
| stability of a method, as well as great attention to detail
| required to handle degenerate edge cases. And that's before we
| get to subtle bugs that only trigger occasionally or are
| otherwise hard to detect.
|
| It's not like NAG/IMSL/HSL etc don't predate NR, in some cases
| by several decades. Libraries were always available, though
| back in the day you may have needed to pay for them.
| enriquto wrote:
| > Don't roll your own numerics.
|
| This is empty advice without some context. Many of us work
| precisely in rolling new numerics.
| mhh__ wrote:
| Boycott C.
|
| In C++, or even better a language like D or Ada or whatever
| with built-in slices, you can write assertions that explicitly
| encode
| assumptions and "what I meant to write, modulo bugs"-isms much
| more easily than in C. How long is that array? How big can that
| float be before the model is unstable? Maybe express the units in
| the type system etc.
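| As a sketch of what that buys (using C++20's std::span; the
| function and its invariants are hypothetical):
|
|     #include <cassert>
|     #include <cmath>
|     #include <span>
|
|     // With a sized view, "how long is that array?" becomes a
|     // checkable property instead of a comment next to a raw
|     // pointer, and domain assumptions become assertions.
|     double mean(std::span<const double> xs) {
|         assert(!xs.empty() && "model assumes at least one sample");
|         double sum = 0.0;
|         for (double x : xs) {
|             assert(std::isfinite(x) && "model unstable on non-finite input");
|             sum += x;
|         }
|         return sum / double(xs.size());
|     }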
| forgotpwd16 wrote:
| Thought Numerical Recipes was used mostly in courses for students
| to learn about numerical methods rather than using those pre-made
| routines for building large programs.
| eesmith wrote:
| Most people using NR aren't building large programs.
|
| Think about a student who learned from NR, then in grad school
| needs a method, and in the spirit of "I'm smart, I can do this
| myself", copies the method out of the book.
|
| In larger projects, someone is more likely to be aware of
| alternatives, and the issues related to using source covered
| under the NR copyright.
| Pamar wrote:
| _... hesitantly raises his hand_
|
| Back in ... 1991? I did use the C version of the Simplex
| Method almost straight from the book. I needed a concise
| linear programming optimization module that would be called
| in the middle of a not-really-concise production programming
| system handling (automotive) engines and gearboxes production
| for a couple of FIAT plants.
|
| Basically a short-period scheduler to decide what to produce
| (slave to an immense system running on IBM mainframes; COBOL,
| again).
|
| So I quickly put together the C version (with minimal
| modifications) and it chugged along happily on a VAX for at
| least a decade.
| eesmith wrote:
| Thanks for the counter-example. :)
|
| Did your company also license the code?
|
| To explain my frame of mind, I was thinking of how it was used
| in 2007 or later.
|
| In the early 1990s, there wouldn't have been many off-the-
| shelf/free&open-source options. The GNU Scientific Library
| didn't start until 1996, for example (according to
| Wikipedia).
| Pamar wrote:
| No. I was aware of the issue at the time, and this is part
| of the reason I altered the source a bit.
|
| In my defense, the people I was working for at the time did
| not even understand the topic when I talked about it.
| eesmith wrote:
| I think also that 30+ years ago software copyright was
| farther in the background.
|
| The "Don't Copy that Floppy" propaganda, for example,
| didn't start until 1992.
| hzhou321 wrote:
| I never thought that you were supposed to copy the code from a
| book blindly; if you do that, there must be some library you
| can use that works better. Treating the code as pseudocode, I
| thought Numerical Recipes was excellent, better than textbooks
| in many respects.
| hpcjoe wrote:
| I used NR a bit more in grad school (late 80s/early 90s) than
| undergrad. I had (have) the Fortran version. I used some of the
| ODE integration bits in some code I worked on in grad school,
| though we put that project away after we switched to a different
| DFT MD code. FWIW, that code (with NR) still compiles/runs. Tried
| it a few months ago.
|
| As an undergrad, I had a research project which used Markov
| methods, and I had to generate some fairly large (at the time)
| matrices. I wrote that code in Fortran, on a PC/AT system (1986,
| so ...). It also ran on an IBM 3090 VF 180 mainframe at Stony
| Brook.
|
| What I recall from that time, was the professor giving me a text
| on Markov methods, and me learning what I needed to construct the
| transition probabilities (matrix elements). I remember writing
| extensive tests within the code, called sanity checks, which I
| reported on. Fortran had no assert statement at the time, and
| it's nice to see a "COMPUTATION IS INSANE DO NOT TRUST" error
| message when you ran into problems.
|
| Honestly, that process taught me a number of things.
|
| 1st, the code you need may or may not have been written by
| someone else, and you really need to make sure you have sanity
| checks built in to establish the portions you can and cannot
| trust. Later when I used the NR code, I added sanity checks into
| their code, to make sure we weren't getting into strange spaces.
|
| 2nd, it is (often very) important to exploit mathematical
| features of your problems (symmetries, relation of derivatives to
| exact closed form equations, etc.) to simplify your calculation.
| Plug and crank is usually the wrong approach, and you need a
| solid understanding of your problem domain to have a fighting
| chance at not producing a garbage simulation.
|
| I've followed these coding practices for decades at this point,
| through many languages and systems. NR was a great way to start
| with these things, but I picked up Hamming's, Hildebrand's, and
| many other numerical analysis books along the way, taught myself,
| and generated code that was hopefully sane.
|
| I don't care so much about the license of NR, as I am not
| distributing code containing it. My thesis advisor warned me
| about that in early 90s. So that was removed from code I shared.
|
| I guess I am a bit sad that I offered said book to my offspring as
| she starts a PhD in MechE later this year, and I don't think she
| wants it. She did take my numerical analysis books though, so I'm
| happy about that.
___________________________________________________________________
(page generated 2022-04-03 23:01 UTC)