[HN Gopher] Revisiting Knuth's "Premature Optimization" Paper
       ___________________________________________________________________
        
       Revisiting Knuth's "Premature Optimization" Paper
        
       Author : signa11
       Score  : 36 points
       Date   : 2025-06-26 11:17 UTC (3 days ago)
        
 (HTM) web link (probablydance.com)
 (TXT) w3m dump (probablydance.com)
        
       | mjd wrote:
       | This is my all-time favorite paper. It's so easy to read, and
       | there's so much to think about, so much that still applies to
        | everyday programming and language design.
       | 
       | Also there's Knuth admitting he avoids GO TO because he is afraid
       | of being scolded by Edsger Dijkstra.
       | 
       | https://pic.plover.com/knuth-GOTO.pdf
        
         | apricot wrote:
         | Reading Knuth is always a pleasure.
         | 
         | From the paper:
         | 
         | "It is clearly better to write programs in a language that
         | reveals the control structure, even if we are intimately
         | conscious of the hardware at each step; and therefore I will be
         | discussing a structured assembly language called PL/MIX in the
         | fifth volume of The art of computer programming"
         | 
         | Looking forward to that!
        
       | subharmonicon wrote:
       | Love this paper and read it several times, most recently around
       | 10 years ago when thinking about whether there were looping
       | constructs missing from popular programming languages.
       | 
       | I have made the same point several times online and in person
       | that the famous quote is misunderstood and often suggest people
       | take the time to go back to the source and read it since it's a
       | wonderful read.
        
       | wewewedxfgdf wrote:
       | It's no longer relevant. It was written when people were writing
       | IBM operating systems in assembly language.
       | 
       | Things have changed.
       | 
       | Remember this: "speed is a feature"?
       | 
        | If you need fast software to make it appealing, then make it
        | fast.
        
       | godelski wrote:
        | I think the problem with the quote is that everyone forgets the
        | line that comes after it.
        | 
        |     We should forget about small efficiencies, say about 97% of
        |     the time: premature optimization is the root of all evil.
        |     vvvvvvvvvv
        |     Yet we should not pass up our opportunities in that critical
        |     3%. A good programmer will not be lulled into complacency by
        |     such reasoning, he will be wise to look carefully at the
        |     critical code; but only after that code has been identified.
        |     ^^^^^^^^^^
       | 
        | This makes it clear, in context, that Knuth defines "Premature
        | Optimization" as "optimizing before you profile your code".
       | 
        | @OP, I think you should lead with this. I think it gets lost by
        | the time you actually reference it. If I can suggest, place the
        | second paragraph after:
        | 
        |     > People always use this quote wrong, and to get a feeling
        |     > for that we just have to look at the original paper, and
        |     > the context in which it was written.
       | 
        | The optimization part gets lost in the middle and this, I think,
        | could help provide a better hook for those who aren't going to
        | read the whole thing. The way you wrote it works well for that,
        | but the point (IMO) will be missed by more inattentive readers.
        | The post is good, so this is just a minor critique; I want to
        | see it do better.
       | 
        | https://dl.acm.org/doi/10.1145/356635.356640 (alt)
        | https://sci-hub.se/10.1145/356635.356640
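
Knuth's "profile first" advice is easy to act on today. A minimal sketch using Python's standard-library profiler; `slow_sum` is a made-up stand-in for whatever your program actually does:

```python
# Minimal "profile before you optimize" sketch using the standard
# library. The point is to let the profiler name the hot spot before
# any hand-optimization begins.
import cProfile
import io
import pstats

def slow_sum(n):
    total = 0
    for i in range(n):
        total += i * i
    return total

profiler = cProfile.Profile()
profiler.enable()
result = slow_sum(200_000)
profiler.disable()

report = io.StringIO()
pstats.Stats(profiler, stream=report).sort_stats("cumulative").print_stats(5)
# report.getvalue() now lists the top call sites by cumulative time;
# only code that shows up here is a candidate for optimization.
```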
        
         | Swizec wrote:
         | Amdahl's Law is the single best thing I learned in 4 years of
         | university. It sounds obvious when spelled out but it blew my
         | mind.
         | 
         | No amount of parallelization will make your program faster than
         | the slowest non-parallelizable path. You can be as clever as
         | you want and it won't matter squat unless you _fix the
         | bottleneck_.
         | 
         | This extends to all types of optimization and even teamwork.
         | Just make the slowest part faster. Really.
         | 
         | https://en.wikipedia.org/wiki/Amdahl%27s_law
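
The law itself fits in one line; a quick sketch with numbers chosen for illustration:

```python
# Amdahl's law: if a fraction p of the work parallelizes perfectly
# across n workers, overall speedup is 1 / ((1 - p) + p / n).
# With p = 0.95, even infinite workers cap out at 1 / 0.05 = 20x:
# the serial 5% is the bottleneck, exactly as described above.
def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

for n in (2, 8, 64, 1_000_000):
    print(f"{n:>9} workers -> {amdahl_speedup(0.95, n):5.2f}x")
```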
        
       | bluGill wrote:
       | If you have benchmarked something then optimizations are not
       | premature. People often used to 'optimize' code that was rarely
       | run, often making the code harder to read for no gain.
       | 
        | Beware too of premature pessimization. Don't write bubble sort
        | just because you haven't benchmarked your code to show it is a
        | bottleneck - which is what some will do, and then incorrectly
        | cite premature optimization when you tell them they should do
        | better. Note that any competent language has a sort in the
        | standard library that is better than bubble sort.
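
The bubble-sort point, sketched in Python: the standard-library call is both the shorter and the faster option, so reaching for it is not "premature optimization".

```python
# A hand-rolled bubble sort (O(n^2)) next to the standard library's
# sorted() (Timsort, O(n log n)). Preferring the library sort is
# simply avoiding premature pessimization.
def bubble_sort(items):
    items = list(items)           # don't mutate the caller's list
    for i in range(len(items)):
        for j in range(len(items) - 1 - i):
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]
    return items

data = [5, 2, 9, 1, 5, 6]
assert bubble_sort(data) == sorted(data) == [1, 2, 5, 5, 6, 9]
```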
        
       | yubblegum wrote:
       | Ironically, all pdfs of the famous paper have atrocious
       | typesetting and are a pain to read.
        
       | layer8 wrote:
       | > Usually people say "premature optimization is the root of all
       | evil" to say "small optimizations are not worth it" but [...]
       | Instead what matters is whether you benchmarked your code and
       | whether you determined that this optimization actually makes a
       | difference to the runtime of the program.
       | 
       | In my experience the latter is actually often expressed. What
       | else would "premature" mean, other than you don't know _yet_
       | whether the optimization is worth it?
       | 
       | The disagreement is usually more about small inefficiencies that
       | may compound in the large but whose combined effects are
       | difficult to assess, compiler/platform/environment-dependent
       | optimizations that may be pessimizations elsewhere, reasoning
       | about asymptotic runtime (which shouldn't require benchmarking --
       | but with cache locality effects sometimes it does), the validity
       | of microbenchmarks, and so on.
        
         | parpfish wrote:
         | The way I often hear it expressed has nothing to do with small
         | efficiency changes or benchmarking and it's more of a
          | yagni/anticipating hyper-scale issue. For example, adding some
          | complexity to your code so it can efficiently handle a million
          | users when you're just fine writing the simple-to-read version
          | that isn't optimal but will work just fine for the twenty
          | users you actually have.
        
       | userbinator wrote:
       | I understood it to mean that optimising for speed at the expense
       | of size is a bad idea unless there are extremely obvious
       | performance improvements in doing so. By default you should
       | always optimise for size.
        
       | dan-robertson wrote:
       | I like this article. It's easy to forget what these classic CS
       | papers were about, and I think that leads to poorly applying them
       | today. Premature optimisation of the kind of code discussed by
       | the paper (counting instructions for some small loop) does indeed
       | seem like a bad place to put optimisation efforts without a good
       | reason, but I often see this premature optimisation quote used
       | to:
       | 
       | - argue against thinking about any kind of design choice for
       | performance reasons, eg the data structure decisions suggested in
       | this article
       | 
       | - argue for a 'fix it later' approach to systems design. I think
       | for lots of systems you have some ideas for how you would like
       | them to perform, and you could, if you thought about it, often
       | tell that some designs would never meet them, but instead you go
       | ahead with some simple idea that handles the semantics without
       | the performance only to discover that it is very hard to
       | 'optimise' later.
        
       | osigurdson wrote:
       | The real root of all evil is reasoning by unexamined phrases.
        
       | osigurdson wrote:
        | It is generally better just to focus on algorithmic complexity,
        | O(x*N^k). The first version of the code should get k as low as
        | possible (unless N is very small, in which case who cares).
        | Worry about x later. Don't even think about parallelizing until
        | k is minimized. Vectorize before parallelizing.
       | 
       | For parallel code, you basically have to know in advance that it
       | is needed. You can't normally just take a big stateful / mutable
       | codebase and throw some cores at it.
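
A small illustration of lowering k before worrying about x; the duplicate check is a made-up example, assuming a Python setting:

```python
# Lowering the exponent k first: both functions answer the same
# question, but the first is O(N^2) and the second is O(N). No
# constant-factor (x) tuning of the quadratic version can keep up
# with the linear one as N grows.
def has_duplicates_quadratic(items):
    return any(a == b
               for i, a in enumerate(items)
               for b in items[i + 1:])

def has_duplicates_linear(items):
    seen = set()
    for x in items:
        if x in seen:
            return True
        seen.add(x)
    return False
```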
        
       ___________________________________________________________________
       (page generated 2025-06-29 23:00 UTC)