[HN Gopher] Premature Optimization is the Root of all Evil
___________________________________________________________________
Premature Optimization is the Root of all Evil
Author : alexzeitler
Score : 42 points
Date : 2022-09-05 18:43 UTC (4 hours ago)
(HTM) web link (martin.leyrer.priv.at)
(TXT) w3m dump (martin.leyrer.priv.at)
| anybodyz wrote:
| Premature optimization may be the root of all evil. But failing
| to architect an application from the start for reasonable
| performance and scalability is the premature root of all
| optimization.
| mjlawson wrote:
| I don't think that this is always possible. For one, I've yet
| to meet anyone who can meaningfully predict:
|
| a. What features are going to be added over the course of your
| application
| b. What features are going to be popular
| c. How these features scale, independently of the overall
| application
|
| Architecting for reasonable performance should always be a
| goal. Architecting for scalability, however, is often wishful
| thinking that, in my experience, has led to more harm than
| good.
| djmips wrote:
| It depends on your field of course. With experience, you will
| understand how to do things and you won't even realize how
| many mistakes you avoided by just not 'being stupid'. In my
| area it's often best to do a pre-production phase that is
| more of an R&D phase to explore the tech you need to
| incorporate or invent. This is where you iterate and yes
| optimize. It's hard to estimate at this stage. Then in the
| production phase you can have smoother sailing as the tech is
| established and the content is more or less production line.
| Estimates are accurate. Like a factory.
| kemiller wrote:
| And premature scaling. Would someone please tell every tech
| company's interviewers?
| Genbox wrote:
| This is not an example of premature optimization. It is an
| example of guessing at the problem and missing the mark.
|
| It is best explained by Donald Knuth himself. The full quote is:
|
| > "Programmers waste enormous amounts of time thinking about, or
| worrying about, the speed of noncritical parts of their programs,
| and these attempts at efficiency actually have a strong negative
| impact when debugging and maintenance are considered. We should
| forget about small efficiencies, say about 97% of the time:
| premature optimization is the root of all evil. Yet we should not
| pass up our opportunities in that critical 3%. A good programmer
| will not be lulled into complacency by such reasoning, he will be
| wise to look carefully at the critical code; but only after that
| code has been identified. It is often a mistake to make a priori
| judgments about what parts of a program are really critical,
| since the universal experience of programmers who have been using
| measurement tools has been that their intuitive guesses fail.".
|
| Source: Structured Programming with go to Statements by Donald E.
| Knuth.
| http://web.archive.org/web/20130731202547/http://pplab.snu.a...
| throwaway0asd wrote:
| Perhaps a more easily understood paraphrase: _Never guess at
| performance, and don't measure for performance until the
| application works._
| djmips wrote:
| 1st part: YES. 2nd part: NO. If your end goal needs to meet a
| certain mark, then you need to be measuring as you go. If you
| are headed down the wrong path, by not paying attention to
| perf you'll end up bogged down rewriting systems that 'work'
| but were never going to be fast enough. I've been through
| this headache too many times now not to get a little
| irritated when projects don't treat perf and memory usage
| with the same importance as 'working'. This often manifests
| as developing on much more powerful hardware than will be
| shipped on...
|
| All you need is the measure part. Then you won't do the evil
| thing of optimizing something that makes it more brittle and
| hard to maintain etc.
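|
| One way (not the only one) to keep perf on the same footing as
| 'working' is a budget baked into the test suite. A minimal
| sketch; render_frame and the 16 ms target here are hypothetical
| placeholders, not anything from the thread:
|
|     import time
|
|     FRAME_BUDGET_SECONDS = 0.016  # assumed target: roughly 60 fps
|
|     def render_frame():
|         # hypothetical stand-in for the real work under test
|         time.sleep(0.005)
|
|     def test_frame_stays_within_budget():
|         start = time.perf_counter()
|         render_frame()
|         elapsed = time.perf_counter() - start
|         assert elapsed < FRAME_BUDGET_SECONDS, (
|             f"frame took {elapsed * 1000:.1f} ms, "
|             f"budget is {FRAME_BUDGET_SECONDS * 1000:.0f} ms"
|         )
|
| Run under pytest in CI, a check like this flags a perf regression
| the same way a broken feature would.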
| s1k3 wrote:
| Well it's not like you don't keep it in mind. Your project
| has some baseline performance need and you should be
| building with the common practices and paradigms you used
| to build the other parts. The point is, don't try to guess
| which part might need optimization tweaks. Write it first
| and then prove you need to spend extra time improving
| something.
| tejohnso wrote:
| Ya that paragraph is ripe for cherry picking that often quoted
| nugget. But right after it is "we should not pass up our
| opportunities in that critical 3%". And further along, "the
| universal experience of programmers who have been using
| measurement tools has been that their intuitive guesses fail."
| That one I think is the most important. You don't even know
| what you should optimize until you measure. You could end up
| spending a lot of time and energy on the wrong problem.
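|
| A minimal sketch of the "measure first" idea using the standard
| library profiler; build_report and parse_rows are made-up
| stand-ins for whatever the real hot path turns out to be:
|
|     import cProfile
|     import pstats
|
|     def parse_rows(n):
|         # the part you might *guess* is slow
|         return [str(i) for i in range(n)]
|
|     def build_report(n):
|         rows = parse_rows(n)
|         # the profiler often shows the time somewhere unexpected,
|         # e.g. here rather than in parse_rows
|         return ",".join(rows)
|
|     profiler = cProfile.Profile()
|     profiler.enable()
|     build_report(1_000_000)
|     profiler.disable()
|
|     # print the functions where time was actually spent
|     pstats.Stats(profiler).sort_stats("cumulative").print_stats(5)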
| zionic wrote:
| >A good programmer will not be lulled into complacency by such
| reasoning
|
| And that's the crux of it. Donald neglected to consider that
| most programmers will take away "optimization is bad" and all
| the subtleties of his point will be lost.
|
| Then we get things like electron (ducks for cover)
| ummonk wrote:
| Donald was also writing at a time when all languages were
| close to the metal and optimization meant using various
| tricks (or writing critical portions in assembly) to improve
| performance.
| Ygg2 wrote:
| > Then we get things like electron
|
| We get electron because it's a batteries included (and remote
| and TV) cross platform GUI.
|
| And has a huge talent pool, and acceptable performance for
| most users.
| Archelaos wrote:
| But even that has to be taken with a grain of salt. There are
| kinds of critical code that are hard to identify. Think of the
| infamous O(n^2) algorithms that go unnoticed unless you
| measure with high values of n. Or poorly coordinated threads
| that slow down a program only on specific occasions.
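|
| A sketch of how that hides in practice (the dedupe functions are
| invented for illustration): a linear scan inside a loop looks
| fine at n = 100 and falls over at n = 1,000,000:
|
|     def dedupe_quadratic(items):
|         seen = []            # list membership test is O(n)
|         out = []
|         for x in items:      # so the whole loop is O(n^2)
|             if x not in seen:
|                 seen.append(x)
|                 out.append(x)
|         return out
|
|     def dedupe_linear(items):
|         seen = set()         # set membership test is O(1) average
|         out = []
|         for x in items:
|             if x not in seen:
|                 seen.add(x)
|                 out.append(x)
|         return out
|
| Both return the same result; only measurement at realistic n
| tells you whether the first one is ever a problem.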
| junon wrote:
| Yes, but conversely, there are cases where O(1), O(N) and
| O(N^2) are all perceptually the same for every input a
| particular program will ever see. There's a bit of an art to
| determining what is critical and non-critical.
| encryptluks2 wrote:
| Ansible is just slow in itself. I dread loops using Ansible,
| because I know there is a significant delay even on localhost.
| Just removing 20+ files in a loop with Ansible can take 20
| seconds when it is instantaneous using `rm`.
| jarym wrote:
| > I jumped to the conclusion that the yum update was just too
| slow and I needed to speed it up.
|
| That is not premature optimisation. That's called plain old
| jumping to conclusions. Next.
| andrewstuart wrote:
| This famous saying was true when they were writing IBM operating
| systems in pure assembly language.
|
| Not true today.
|
| Heading down some optimisation path instead of just getting the
| damn thing to work - that could sink the project.
|
| Today - speed is a feature.
| 0x457 wrote:
| Hmm, that's not premature optimization, that's a
| lack-of-diagnostic-skill problem.
| hbrn wrote:
| While premature optimization is still evil, I think it's no
| longer the root of it.
|
| The average quality of developers is way lower than it used to
| be (people aren't even capable of optimization). The need for
| optimization is also not as high: hardware is fast and cheap,
| and the abundance of Electron apps illustrates that.
|
| I have a feeling that today is the age of premature abstractions
| and premature scalability.
|
| The main point is that there's a right and a wrong time to do
| anything: optimization, scaling, testing, monitoring, etc. When a
| concept is deemed absolutely positive, naturally it will be
| applied prematurely.
| mjlawson wrote:
| I agree with this, mostly. I've worked at many a company where
| I've inherited the work of developers who built towards an
| expected view of the future. That expected future, of course,
| never quite lined up with the future that did come to pass. So
| a lot of extra work building out the wrong abstraction led to
| a lot more work to undo it.
|
| I would disagree with you only in that I can't recall a time
| when prematurely added monitoring caused me a lot of pain. If
| anything, the opposite has been true. Can you explain what you
| mean there?
| hbrn wrote:
| It all comes down to cost. Most of the time you can get some kind
| of monitoring for free or at a very low cost. AWS gives you a
| bunch of metrics out of the box for every product. Wrap your
| webapp in newrelic-agent and get a bunch of nice dashboards.
| But the more you want to monitor, the higher the costs are.
|
| There are a lot of examples where you _can_ catch something
| with monitoring, but it doesn't necessarily mean that you
| _should_.
|
| A recent one from my memory: in a SaaS product a team shipped
| a bug that went unnoticed for a few days. It was feature
| flagged, so it only affected a small fraction of customers
| and didn't trigger any global alerts. Now, since it didn't
| trigger alerts, the natural post-mortem action plan was
| "better monitoring". That would mean monitoring and alerting
| on "rate of errors by customer" (or "rate of errors by
| endpoint by customer", I don't remember).
|
| Given the usage pattern of the product, it was impossible to
| create a global monitor like that; we'd have had to manually
| configure it for each customer (and we had thousands of
| those). And even then, we'd inevitably be dealing with false
| positives every week.
|
| The right action plan was to learn from the failure, but do
| nothing. We got extremely unlucky during an infrastructure
| update; shit happens. We don't need to build a complex
| monitoring system that catches one bug every 5 years.
| firebaze wrote:
| The root of all evil is someone claiming to know the root of
| all evil.
|
| Snarky, but in my experience 6 sigma true.
| halayli wrote:
| > I have a feeling that today is the age of premature
| abstractions and premature scalability
|
| which are optimizations.
___________________________________________________________________
(page generated 2022-09-05 23:00 UTC)