[HN Gopher] "Clean" code, horrible performance
       ___________________________________________________________________
        
       "Clean" code, horrible performance
        
       Author : eapriv
       Score  : 169 points
       Date   : 2023-02-28 05:55 UTC (17 hours ago)
        
 (HTM) web link (www.computerenhance.com)
 (TXT) w3m dump (www.computerenhance.com)
        
       | sebstefan wrote:
       | I've seen "Composition over inheritance" more times than I've
       | seen "Polymorphism is good"
        
       | stunpix wrote:
        | So he puts polymorphic function calls into enormous loops over a
        | huge amount of data to simulate a heavy load, and concludes "we
        | have 20x loss in performance _everywhere_"? He is either a huge
        | troll or he has fallen into the typical premature-optimization
        | fallacy: if we called this virtual method 1 billion times we
        | would lose hours per day, but if we optimize it, it will take
        | less than a second! The real situation: a virtual method is
        | called only a few hundred times and is barely visible in
        | profiling tools.
       | 
       | No one is working with a huge amount of data in big loops using
       | virtual methods to take every element out of a huge dataset like
        | he is showing. That's a false premise he is trying to debunk.
        | Polymorphic classes/structs are used to represent some business
        | logic of applications, or structured data with a few hundred
        | such objects that keep some state and a small amount of other
        | data, so they are never involved in intensive computations like
        | the ones he shows. In real projects, such "horrible" polymorphic
        | calls never pop up under profiling and usually account for a
        | fraction of a percent overall.
        
         | Taniwha wrote:
          | Occasional CPU architect here.... probably the worst thing you
          | can do in your code is to load something from memory (the core
          | of method dispatch) and then jump to it. It breaks many of the
          | things we do to optimise our hardware - it causes CPU stalls,
          | branch-prediction failures, etc. etc.
          | 
          | There is one thing worse you can do (and I caught a C++
          | compiler doing it when we were profiling code while building an
          | x86 clone years ago): instead of loading the address and
          | jumping to it, push the address and then return to it. That not
          | only breaks pipelines but also return-stack optimisations.
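          | 
          | Concretely, that load-then-jump pattern is what every virtual
          | call in a hot loop turns into; a C++ sketch (not the article's
          | exact code):
          | 
          |     struct Shape {
          |         virtual float Area() const = 0;
          |         virtual ~Shape() = default;
          |     };
          | 
          |     float TotalArea(const Shape* const* shapes, int n) {
          |         float total = 0.0f;
          |         for (int i = 0; i < n; ++i) {
          |             // Per iteration: load the object's vtable
          |             // pointer, load the Area slot from the vtable,
          |             // then jump through that pointer. The branch
          |             // target depends on data that may not even be
          |             // in cache yet.
          |             total += shapes[i]->Area();
          |         }
          |         return total;
          |     }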
        
         | scotty79 wrote:
         | > "we have 20x loss in performance everywhere"?
         | 
         | I guess only in the places you call methods?
         | 
         | Wait? Are they everywhere? Hmmm...
        
         | lightbendover wrote:
         | > No one is working with a huge amount of data in big loops
         | using virtual methods to take every element out of a huge
         | dataset like he is showing.
         | 
         | Things way worse than that exist. Replace "virtual method" with
         | "service call."
        
           | kyle-rb wrote:
           | "Don't make tons of RPCs" is a totally separate issue from
           | "don't make subclasses because virtual methods cost a few
           | extra cycles".
        
           | josephg wrote:
           | > Things way worse than that exist.
           | 
            | Yeah. I opened Discord earlier, and it took about 10 seconds
            | to open. My CPU is an Apple M1, running at about 3 GHz per
            | core. Assuming it's single-threaded (it wasn't), Discord is
            | taking about 30 billion cycles to open. (Or around 50 network
            | round-trips at a 200ms ping.)
           | 
           | Crimes against performance are everywhere.
        
             | gdprrrr wrote:
              | Have you measured CPU time? A very large factor will be disk
              | I/O and network.
        
               | rwalle wrote:
                | Exactly. The webpage is probably asking for resources from
               | 10 different servers and one of them is a bit slower than
               | the others, and the page rendering itself likely doesn't
               | take very long.
        
               | josephg wrote:
                | No; I have no idea why it's so slow. It's kind of hard to
                | tell - I guess I could use Wireshark to trace the
               | packets. But who cares? At least one of these things is
               | true:
               | 
               | - It makes horribly inefficient use of my CPU
               | 
               | - It needs an obscene number of network round-trips to
               | load
               | 
                | - One of the network servers that Discord needs in order
                | to open takes seconds to respond to requests
               | 
               | This isn't a new problem. Discord always takes about 10
               | seconds to open on my computer. (Am I just on too many
               | servers?)
               | 
               | It should open instantly. Everything on modern computers
               | should happen basically instantly. The only reason most
               | software runs slowly is because the developers involved
               | don't care enough to make it run fast.
               | 
                | Aside from a few exceptions like AI, scientific
                | computing, 3D modelling and video editing, modern
               | computers are fast enough for everything we want to do
               | with them. Software seems to have higher requirements
               | each year simply because the developers get faster
               | computers each year and spend less effort keeping their
               | software tight and lean.
        
           | icedchai wrote:
           | Query loops are also just as problematic: looping through a
           | result set, and making another query (or worse, N queries)
           | per result, etc.
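            | 
            | A sketch of the pattern (the `Db` interface here is
            | hypothetical, purely for illustration):
            | 
            |     #include <string>
            |     #include <vector>
            | 
            |     // Hypothetical database handle, not a real library.
            |     struct Db {
            |         std::vector<int> OrderIds(int customerId);
            |         std::string OrderDetails(int orderId);
            |         std::vector<std::string>
            |         OrderDetailsBatch(const std::vector<int>& ids);
            |     };
            | 
            |     // N+1: one query for the ids, then one more query
            |     // (one more round-trip) per result.
            |     void Slow(Db& db, int customerId) {
            |         for (int id : db.OrderIds(customerId)) {
            |             db.OrderDetails(id);
            |         }
            |     }
            | 
            |     // Batched: the same data in two round-trips total.
            |     void Fast(Db& db, int customerId) {
            |         auto ids = db.OrderIds(customerId);
            |         auto details = db.OrderDetailsBatch(ids);
            |         (void)details;  // use the batch result
            |     }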
        
         | adamrezich wrote:
         | > No one is working with a huge amount of data in big loops
         | using virtual methods to take every element out of a huge
         | dataset like he is showing.
         | 
         | this is exactly how the typical naive game loop/entity system
         | works.
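          | 
          | e.g. something like this (a generic sketch, not any particular
          | engine):
          | 
          |     #include <vector>
          | 
          |     struct Entity {
          |         virtual void Update(float dt) = 0;
          |         virtual ~Entity() = default;
          |     };
          | 
          |     // Every frame: a virtual call per entity, through a
          |     // pointer to a separately allocated object -- a huge
          |     // loop over polymorphic calls, 60+ times per second.
          |     void UpdateWorld(std::vector<Entity*>& entities,
          |                      float dt) {
          |         for (Entity* e : entities) {
          |             e->Update(dt);
          |         }
          |     }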
        
           | TheBigSalad wrote:
           | Still... this isn't the reason games are slow.
        
             | adamrezich wrote:
             | I'm not sure why you would say that--it's certainly _a_
             | reason why _some_ (not AAA) games are slow.
        
         | MikeCampo wrote:
          | Just because you haven't been exposed to this issue doesn't mean
         | it doesn't exist. "the real situation", "no one", "in real
         | projects", "never pop up"...give me a break lol.
        
           | mmarq wrote:
            | His classes are doing a single multiplication; of course
            | dynamic dispatch has a significant cost in that scenario.
        
           | kaba0 wrote:
           | One can reasonably well guess/know the expected input sizes
           | to their programs. You ain't (hopefully) loading your whole
           | database into memory, and unless you are writing a
           | simulation/game engine or another specialized application,
           | your application is unlikely to have a single scorching hot
            | loop; that's just not what most programs look like. If it is,
           | then you should design for it, which may even mean changing
           | programming languages for that part (e.g. for video codecs
           | not even C et al. cut it, you have to do assembly), but more
           | likely you just use a bit less ergonomic primitive of your
           | language.
        
             | duskwuff wrote:
             | > e.g. for video codecs not even C et al. cut it, you have
             | to do assembly
             | 
             | This is largely inaccurate. Video encoders/decoders are
             | typically written in C, with some use of compiler
             | intrinsics or short inline assembly fragments for
             | particularly "hot" functions.
        
               | kaba0 wrote:
               | I was talking about those hot functions only, not the
               | rest of the program. But yeah a "sometimes" or a "may"
               | would have helped in my original sentence.
        
               | Cthulhu_ wrote:
               | Exactly, and they only decided to write those bits in
               | assembly when they identified it as a hot code path AND
               | the assembly outperformed any C code they could come up
               | with.
               | 
                | The number of cases where assembly outperforms a higher-
                | level language has also shrunk over time, thanks to
                | compiler and CPU improvements.
        
               | kaba0 wrote:
               | Which is not in disagreement with my comment/take.
        
             | bob1029 wrote:
             | > unless you are writing a simulation/game engine or
             | another specialized application, your application is
             | unlikely to have a single scorching hot loop
             | 
             | If everything was built with constraints like "this must
             | serve user input so quickly that they can't perceive a
             | delay", we would probably be a lot better off across the
             | entire board.
             | 
             | We should try to steal more ideas from different domains
             | instead of treating them like entirely isolated universes.
        
             | morelisp wrote:
             | > You ain't (hopefully) loading your whole database into
             | memory
             | 
             | I've basically built my career for the past decade by
             | pointing out "yes, we can load our whole working set into
             | memory" for the vast majority of problems. This is
             | especially true if you have so little data you think you
             | don't have CPU problems either.
        
               | kaba0 wrote:
               | Databases are often not used by a single entity, so while
               | I am very interested in your experiences, I think it is a
                | great specialization for certain problems, but it is not a
                | general solution to everything.
               | 
               | All in all, I fail to see how it disagrees with my
               | points.
        
         | cma wrote:
          | The clean code methods can be even worse outside of tight
          | loops; in the tight loop, all the vtable lookup stuff was
          | cached.
        
         | jiggawatts wrote:
         | > The real situation: a virtual method is called only a few
         | hundred times and is barely visible in profiling tools.
         | 
         | The reality is that the entire Java ecosystem revolves around
         | call stacks _hundreds of calls deep_ where most (if not all) of
         | those are virtual calls through an interface.
         | 
         | Even in web server scenarios where the user might be "5
         | milliseconds away", I've seen these overheads add up to the
         | point where it is noticeable.
         | 
         | ASP.NET Core for example has been optimised recently to go the
         | opposite route of not using complex nested call paths in the
         | core of the system and has seen dramatic speedups.
         | 
         | For crying out loud, I've seen Java web servers requiring 100%
         | CPU time across 16 cores for _half an hour_ to start up! HALF
         | AN HOUR!
        
         | Gluber wrote:
          | I agree with your sentiment. But those things do exist (not
          | that that validates the author's argument), and I still shake
          | in terror remembering how, during covid, I was asked to take a
          | look at a virus-spread simulation (a cellular automaton)
          | written by a university professor and his postdoc team for
          | software engineering at a large university. It modeled every
          | cell in a 100k x 100k grid as a class which used virtual
          | methods for every computation between cells. I rewrote it in
          | CUDA with normal buffers/arrays, and an epoch ran in
          | milliseconds instead of hours.
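          | 
          | For comparison, one epoch over flat buffers looks roughly like
          | this (a plain C++ sketch with a made-up update rule; the CUDA
          | kernel has the same per-cell structure):
          | 
          |     #include <cstdint>
          |     #include <vector>
          | 
          |     // Read from `src`, write to `dst`; no per-cell objects,
          |     // no virtual calls, just contiguous memory.
          |     void Step(const std::vector<uint8_t>& src,
          |               std::vector<uint8_t>& dst, int w, int h) {
          |         for (int y = 1; y < h - 1; ++y) {
          |             for (int x = 1; x < w - 1; ++x) {
          |                 int i = y * w + x;
          |                 int n = src[i - 1] + src[i + 1]
          |                       + src[i - w] + src[i + w];
          |                 // Placeholder rule; the real model differs.
          |                 dst[i] = (n > 1) ? 1 : src[i];
          |             }
          |         }
          |     }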
        
       | mgaunard wrote:
       | Most programming tends to decompose doing an operation for each
       | element in a sequence into implementing the operation for a
       | single element and then doing that repeatedly in a loop.
       | 
        | This is obviously bad for performance, as individual operations
        | tend to have high latency while several of them can run in
        | parallel, so many optimizations become possible if you target
        | throughput (bandwidth) instead.
       | 
       | There are many languages (and libraries) that are array-based
       | though, and which translate somewhat better to how number
       | crunching can be done fast, while still offering pleasant high-
       | level interfaces.
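        | 
        | A small illustration of the latency-versus-throughput point (a
        | sketch): a single accumulator chains every add on the previous
        | one, while independent accumulators let several adds be in
        | flight at once.
        | 
        |     #include <cstddef>
        | 
        |     // Each add depends on the previous result, so the loop
        |     // runs at the *latency* of a float add per element.
        |     float SumSerial(const float* v, size_t n) {
        |         float s = 0.0f;
        |         for (size_t i = 0; i < n; ++i) s += v[i];
        |         return s;
        |     }
        | 
        |     // Four independent chains: the adds overlap, so the loop
        |     // runs closer to the add unit's *throughput* (and it
        |     // vectorizes easily).
        |     float SumParallel(const float* v, size_t n) {
        |         float s0 = 0, s1 = 0, s2 = 0, s3 = 0;
        |         size_t i = 0;
        |         for (; i + 4 <= n; i += 4) {
        |             s0 += v[i];     s1 += v[i + 1];
        |             s2 += v[i + 2]; s3 += v[i + 3];
        |         }
        |         for (; i < n; ++i) s0 += v[i];
        |         return (s0 + s1) + (s2 + s3);
        |     }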
        
       | gammalost wrote:
       | The video was removed and reuploaded. Here is the new link
       | https://www.youtube.com/watch?v=tD5NrevFtbU
        
       | t43562 wrote:
       | One can be tempted to like any assault on "Uncle Bob"'s insulting
       | videos in the light of working on a codebase where every 2nd line
       | forces you to jump somewhere else to understand what it does.
       | That sort of thing generates a rebellious feeling.
       | 
       | OTOH the class design lets someone come and add their new shape
       | without needing to change the original code - so it could be part
       | of a library that can be extended and the individual is only
       | concerned about the complexity of the piece they're adding rather
       | than the whole thing.
       | 
       | That lets lots of people work on adding shapes simultaneously
       | without having to work on the same source files.
       | 
        | If you don't need this then what would be the point of doing it?
        | Only fear that you _might_ need it later. That's the whole
        | problem with designing things - you don't always know what the
        | future will require.
        
       | readthenotes1 wrote:
       | "it is easier to make working code fast than to make fast code
       | work"
       | 
       | That was from either the 1960s or the 1970s and I don't know that
       | anything has changed in the human ability to read a mangled mess
       | of someone's premature optimizations.
       | 
       | "Make it work. Make it work right. Make it work fast." how to
       | apply the observation above...
        
       | jasode wrote:
       | The original submitted link was a youtube video that's been
       | deleted for some reason.
       | 
       | Probably a better link is the blog post because the author
       | updated it with the new replacement video a few minutes ago as of
       | this comment (around 09:12 UTC):
       | 
       | https://www.computerenhance.com/p/clean-code-horrible-perfor...
        
         | dsego wrote:
         | He re-uploaded the video for some reason.
        
         | dang wrote:
         | We eventually merged all the threads hither. Thanks!
        
       | rudolph9 wrote:
       | > We should forget about small efficiencies, say about 97% of the
       | time: premature optimization is the root of all evil. Yet we
       | should not pass up our opportunities in that critical 3%
       | 
       | https://dl.acm.org/doi/10.1145/356635.356640
       | 
       | The author of the post fails to articulate how we strike a
       | healthy balance and instead comes up with contrived examples to
       | prove points that only really apply to contrived examples.
        
       | athanagor2 wrote:
       | The performance difference is truly savage.
       | 
       | But I think there is a reason for the existence of "clean code"
       | practices: it makes devs easier to replace. Plus it may create a
       | market to try to optimize intrinsically slow programs!
        
         | spoiler wrote:
          | It doesn't _just_ make devs easier to replace. It makes my job
          | more pleasant (and that of my colleagues). But yes, you're
          | right. It does also help onboard people.
         | 
          | Imagine working as a barista with a disorganised bar, a mat on
          | the floor that keeps sliding and has a corner sticking up, and
          | one bag of beans where half is decaf and the other half is
          | regular.
         | 
          | Now compare that to working in a more sensibly run coffee shop:
          | everything is in its place, the mat isn't decrepit, and you
          | have separate bags of beans.
         | 
         | In which one do you think it's easier to make coffee?
        
           | drainyard wrote:
           | It seems like you may be assuming that Casey is arguing
           | against writing clear code, which he is not. He is arguing
           | that you should just write the simple thing and usually that
           | is also the most clear, readable and "maintainable" code
           | because it is easy to get an overview of. So what he is
           | arguing for does not fit your coffee shop example, because of
            | course no one should write unreadable code. The argument is
            | that sometimes, by taking a step back from how you were
            | taught to write clean code, you can simplify it in a way that
            | is _also_ performant by default.
        
             | KyeRussell wrote:
             | This advice doesn't really differ from the actual 'source
             | material' though. It's really arguing against the people
             | that learned what "clean code" is from a blog post or a
             | Tweet or (most likely of all) another YouTuber that tried
             | to take a complex engineering topic that they don't have
             | the experience to understand, and shove it into a video-
             | listicle full of DigitalOcean ads and forced facial
             | expressions.
             | 
             | You see the same thing with microservices. Any of the
             | reading material by the big / original proponents of
             | microservices is actually quite good at giving you all the
             | reasons why they probably aren't for you. But that doesn't
             | stop the game of telephone that intercepts the message
              | before it gets to most developers.
             | 
             | So I really just see this whole thing as someone saying
             | "RTFM", rather than it being any sort of derived nuanced
             | take.
             | 
             | The sooner a professional software developer can get
             | themselves off the treadmill of garbage trendy educational
             | content, the better.
        
           | js6i wrote:
           | Huh? Coffee shops optimize for people not bumping into each
           | other and having related items close together, and don't
            | pretend not to know what kind of gear they have... that's not
            | a terrible analogy for the exact opposite argument.
        
             | tizzy wrote:
             | I think the sentiment is that order and organisation is
             | helpful in achieving goals and cultivating a good working
             | environment as opposed to a big mess. Analogies, just like
             | abstractions, are leaky.
        
               | TeMPOraL wrote:
               | Yeah, but this one leaks a smart-matter paint that self-
               | assembles into a shape of text saying "the order and
               | organization is not the goal, but a consequence of
               | ruthlessly optimizing for performance above all".
        
       | hotBacteria wrote:
        | What is demonstrated here is that if you understand the
        | different parts of some code well, you can recombine them in
        | more efficient ways.
       | 
       | This is something very good to have in mind, but it must be
        | applied strategically. Avoiding "clean code" everywhere won't
        | always provide huge performance wins and will surely hurt
        | maintainability.
        
       | helpfulmandrill wrote:
       | I take issue with the idea that maintainable code is about
       | "making programmers' lives easier", rather than "making code that
       | is correct and is easy to keep correct as it evolves".
       | Correctness matters for the user - indeed, sometimes it is a
       | matter of life and death.
        
         | Tomis02 wrote:
         | > I take issue with the idea that maintainable code is about
         | "making programmers' lives easier"
         | 
         | He's talking about "clean code", not maintainable code. The
         | claim that "clean code" is more maintainable is an unproven
         | assertion. Whenever I interact with a "clean code" codebase, it
         | is worse in every way compared to the corresponding "non-clean"
         | version, including in terms of correctness.
        
           | helpfulmandrill wrote:
           | Difficult to prove. I'm no OOP evangelist, but the "clean"
           | version in the video looks clearer to me.
        
       | Pr0ject217 wrote:
       | Casey brings another perspective. It's great.
        
       | zX41ZdbW wrote:
       | You don't have to give up clean code to achieve high performance.
       | The complexity can be isolated and contained.
       | 
       | For example, take a look at ClickHouse codebase:
       | https://github.com/ClickHouse/ClickHouse/
       | 
        | There are all sorts of things: leaky abstractions, specializations
       | for optimistic fast paths, dispatching on algorithms based on
       | data distribution, runtime CPU dispatching, etc. Video:
       | https://www.youtube.com/watch?v=ZOZQCQEtrz8
       | 
       | But: it has clean interfaces, virtual calls, factories... And,
       | most importantly - a lot of code comments. And when you have a
       | horrible piece of complexity, it can be isolated into a single
       | file and will annoy you only when you need to edit that part of
       | the project.
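        | 
        | The runtime CPU dispatching, for instance, can hide behind one
        | clean entry point. A rough sketch using the GCC/Clang builtin
        | (not ClickHouse's actual code; the two variants are assumed to
        | be defined elsewhere):
        | 
        |     #include <cstddef>
        | 
        |     float SumScalar(const float* v, size_t n);  // fallback
        |     float SumAvx2(const float* v, size_t n);    // fast path
        | 
        |     float Sum(const float* v, size_t n) {
        |         // Chosen once at runtime; callers only ever see the
        |         // clean Sum() interface.
        |         static const auto impl =
        |             __builtin_cpu_supports("avx2") ? SumAvx2
        |                                            : SumScalar;
        |         return impl(v, n);
        |     }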
       | 
        | Disclaimer: I'm promoting ClickHouse because it deserves it.
        
       | juliangmp wrote:
        | This example exists in such a vacuum and is so distant from real
        | software tasks that I just have to shake my head at the "clean
        | code is undoing 12 years of hardware evolution" claim.
        
         | rafabulsing wrote:
         | It's an example from the Clean Code book itself, though?
        
           | rileymat2 wrote:
           | It is, but that's the catch. It would be exceedingly hard to
           | provide understandable examples of these things in code that
           | is complex enough to need the patterns.
        
       | adam_arthur wrote:
        | Most performance-optimized code can be abstracted such that it
        | reads cleanly, regardless of how low-level the internals get.
        | This is the entire premise of the Rust compiler's "zero-cost
        | abstractions", or of how NumPy is vectorized under the hood. No
        | need for the user to be exposed to this.
       | 
       | Writing "poor code" to make perf gains is largely unnecessary.
       | Though there are certainly micro-optimizations that can be made
       | by avoiding specific types of abstractions.
       | 
        | The lower level the code, the more variable naming and function
        | encapsulation (which gets inlined) is needed for the code to
        | read cleanly. Most scientific computing/algorithmic code is
        | needlessly unreadable.
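        | 
        | A tiny example of that last point (a sketch): the helper below
        | is descriptively named and trivially inlined, so the hot loop
        | stays readable without giving up the flat layout.
        | 
        |     #include <cstddef>
        | 
        |     // Inlined by any optimizing compiler; costs nothing.
        |     inline float Clamp01(float x) {
        |         return x < 0.0f ? 0.0f : (x > 1.0f ? 1.0f : x);
        |     }
        | 
        |     void NormalizeInPlace(float* samples, size_t n,
        |                           float scale) {
        |         for (size_t i = 0; i < n; ++i) {
        |             samples[i] = Clamp01(samples[i] * scale);
        |         }
        |     }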
        
       | mabbo wrote:
       | I think the author is taking general advice and applying it to a
       | niche situation.
       | 
       | > So by violating the first rule of clean code -- which is one of
       | its central tenants -- we are able to drop from 35 cycles per
       | shape to 24 cycles per shape
       | 
       | Look, most modern software is spending 99.9% of the time waiting
       | for user input, and 0.1% of the time actually calculating
       | something. If you're writing a AAA video game, or high
       | performance calculation software then sure, go crazy, get those
       | improvements.
       | 
       | But most of us aren't doing that. Most developers are doing work
          | where the biggest problem is adding the umpteenth feature
       | that Product has planned (but hasn't told us about yet). Clean
       | code optimizes for improving time-to-market for those features,
       | and not for the CPU doing less work.
        
         | AbusiveHNAdmin wrote:
         | "So by violating the first rule of clean code -- which is one
         | of its central tenants"
         | 
         | The word is TENET. Tenants rent stuff.
        
         | bgirard wrote:
         | Having done a lot of performance work on Gecko (Firefox), we
         | generally knew where cycles mattered a lot (low level graphics
         | like rasterization, js, dom bindings, etc...) and we used every
          | trick there. But for the majority of the millions of LoC in the
          | codebase, these details didn't matter, as you say.
         | 
          | If we had perf issues that showed up elsewhere, they were
          | higher-level design issues, like 1) trying to take a thumbnail
          | of the page at full resolution for a tab thumbnail while
          | loading another tab (not because the thumbnailing code itself
          | was slow), or 2) running slow O(tabs) JS teardown during
          | shutdown when we could run an O(~1) cleanup step instead.
        
         | bilvar wrote:
          | I don't know, man - my TV has hardware several orders of
          | magnitude faster and more advanced than the hardware that took
          | us to the moon, and it takes dozens of seconds for apps like
          | Netflix or Amazon Prime Video to load dashboards or change
          | profiles, and several seconds to do simple navigation or adjust
          | playback. People just don't know how to write software properly
          | these days; universities just churn out code monkeys, with a
          | vicious feedback loop occurring at the workplaces afterwards.
        
         | yashap wrote:
         | 100%, I've done tonnes of (backend) performance optimization,
         | profiling, etc. on higher level applications, and the perf
         | bottlenecks have never been any of the things discussed in this
         | article. It's normally things like:
         | 
         | - Slow DB queries
         | 
         | - Lack of concurrency/parallelism
         | 
         | - Excessive serialization/deserialization (things like ORMs
          | that create massive in-memory objects)
         | 
         | - GC tuning/not enough memory
         | 
         | - Programmer doing something dumb, like using an array when
         | they should be using a set (and then doing a huge number of
         | membership checks)
         | 
         | With that being said, I have worked on the odd performance
         | optimization where we had to get quite low level. For example,
         | when working on vehicle routing problems, they're super
         | computationally heavy, need to be optimized like crazy and the
         | hot spots can indeed involve pretty low level optimizations.
         | But it's been rare in the work I've done.
         | 
         | This article is probably meaningful for people who work on
         | databases, games, OSes, etc., but for most devs/apps these tips
         | will yield zero noticeable performance improvements. Just write
         | code in a way you find clean/maintainable/readable, and when
         | you have perf issues, profile them and ship the appropriate
         | fix.
        
           | slaymaker1907 wrote:
           | ORMs and slow DB queries kind of go hand in hand. Also, you'd
           | be surprised at how efficient arrays are for membership
           | checks so long as the number of items is even moderately
           | small (as a rough rule of thumb, the only thing that really
           | matters with these kinds of checks is how many cache lines
           | are involved).
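            | 
            | i.e. something like this (a sketch): for a handful of ids the
            | whole vector fits in a cache line or two, so the "dumb"
            | linear scan is usually at least as fast as the hash lookup
            | with its hashing and pointer chasing.
            | 
            |     #include <algorithm>
            |     #include <unordered_set>
            |     #include <vector>
            | 
            |     bool ContainsLinear(const std::vector<int>& ids,
            |                         int id) {
            |         return std::find(ids.begin(), ids.end(), id)
            |                != ids.end();
            |     }
            | 
            |     bool ContainsHashed(
            |             const std::unordered_set<int>& ids, int id) {
            |         return ids.count(id) != 0;
            |     }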
        
           | fvdessen wrote:
            | If you program using design patterns that are 10x slower,
            | your application ends up 10x slower, even after you've
            | optimised the hot spots away, and the profiler will not give
            | you any idea that it could still be 10x faster.
        
             | rileyphone wrote:
             | Not the case if your program is spending most of its time
             | waiting, which is typical these days.
        
           | jimbob45 wrote:
           | Well said. The aphorism "Premature optimization is the root
           | of all evil" is meant to mean "Build it right first, then
           | optimize only what needs to be optimized". There's really no
           | need to start cooking spaghetti right off the bat. Clean code
           | with some performance tweaks will be more maintainable in the
           | long run without sacrificing performance.
        
             | TeMPOraL wrote:
             | Casey's implied point is that clean code is already
             | sacrificing performance from the start. And of course real
             | life tells us that those "performance tweaks" will never
             | happen.
             | 
             | There is this popular wisdom that security must be designed
             | for from the start, and cannot be just added after the
             | fact. Performance is like that too, except worse, because
             | you actually _can_ add security after the fact - worst-
             | case, you treat the entire system as untrustworthy and wrap
             | a layer of security around it. You can 't do that with
             | performance - there is no way to sandbox your app so it
             | goes faster. You can only profile things then rip out the
             | slow parts and replace them with fast ones - how easy that
             | is depends on the architecture and approach you adopt early
             | in the project.
        
           | p1necone wrote:
           | Casey Muratori knows a _lot_ about optimizing performance in
           | game engines. He then assumes that all other software must be
           | slow because of the exact same problems.
           | 
           | I think the core problem here is that he assumes that
           | _everything_ is inside a tight loop, because in a game engine
           | that 's rendering 60+ times a second (and probably running
           | physics etc at a higher rate than that) that's almost always
           | true.
           | 
           | Also the fact that his example of what "everyone" supposedly
           | calls "clean code" looks like some contrived textbook example
           | from 20 years ago strains his credibility.
        
             | teki_one wrote:
             | The thing that sets him off is that he is using a computer
             | with enormous computing power and everything is slow.
             | 
             | He does have a narrow view, but it does not make his claims
             | invalid.
             | 
              | I liked that the POC terminal he made in anger ended up
              | making the Windows Terminal faster. But even in that
              | context it was clear that by making some tradeoffs - which
              | the Windows Terminal team cannot make (99.99% of users do
              | not run into the issue, but Windows has to support
              | everything) - it could be a lot faster still.
             | 
              | So we live in a world where we cater for the many 1% use
              | cases, which do not overlap, and it slows everyone down.
             | 
              | Many gamedevs build their own tools because they are fed up
              | with how slow iteration is. The same thing is happening at
              | bigger companies: at some point productivity starts to
              | matter and off-the-shelf solutions start to fail.
        
             | jiggawatts wrote:
             | Casey makes the point that you don't have to hand-tune
             | assembly code, but instead just write the _simpler code_.
              | It's easier to write, easier to read, and runs faster too!
             | 
             | If there's something wrong with that advice, I can't
             | imagine what it is...
        
             | magicalhippo wrote:
              | Then again, things aren't in the critical path until
              | suddenly they are.
              | 
              | Regardless of scenario I will never willingly do an O(n^2)
              | sort when writing new code. Just in case those 10 items
              | suddenly turn into 10,000 one day.
        
         | nikanj wrote:
         | But somehow modern software (Outlook, I'm looking at you) has
          | trouble keeping up with my typing speed, and there's a visible
         | delay before characters appear on the screen. It doesn't matter
         | what the software does 99.9% of the time, if it's an utter pig
         | that crucial 0.1% of the time when the user is providing input.
        
         | mquander wrote:
         | > Look, most modern software is spending 99.9% of the time
         | waiting for user input, and 0.1% of the time actually
         | calculating something. If you're writing a AAA video game, or
         | high performance calculation software then sure, go crazy, get
         | those improvements.
         | 
         | That's really not even close to true. Loading random websites
         | frequently costs multiple seconds worth of local processing
         | time, and indeed, that's often because of the exact kind of
         | overabstraction that this article criticizes (e.g. people use
         | React and then design React component hierarchies that seem
         | "conceptually clean" instead of ones that perform a rendering
         | strategy that makes sense.)
        
           | mabbo wrote:
           | I don't disagree that the balance is shifting towards "why is
            | this taking so long". There are ebbs and flows in that
            | ecosystem.
           | 
           | But overall, I think you overestimate how much time you spend
           | loading the website and how much time it's just sitting
           | there, mostly idle.
           | 
           | And in the end, as long as it's fast enough that users don't
           | stop using the site/webapp/program/whatever, then it's fine,
           | imho. When it becomes too slow, the developers will be asked
           | to improve performance. Because in the end, economics is the
           | driver, not performance.
        
             | jstimpfle wrote:
             | And depending on what you do, economics will let
             | performance decide who wins. Git is probably one example. A
              | different example: have you ever compared the battery lives
              | of smartphones before buying one?
        
             | ilyt wrote:
             | > But overall, I think you overestimate how much time you
             | spend loading the website and how much time it's just
             | sitting there, mostly idle.
             | 
              | Reading the website is productive. Waiting for it to load
              | is not.
              | 
              | I'd rather have the app burn a whole core the whole time if
              | it cut 1s off the load time when clicking something.
        
           | commandlinefan wrote:
           | Well, you can make things run even faster by hand-coding it
           | in assembler... but performance isn't the reason we use high
           | level languages. I agree with you that ignoring performance
           | characteristics in favor of speed-to-market is an awful and
           | pervasive practice in modern software development, but the
           | linked article isn't talking about or making _that_ case at
            | all. He's saying that he can make his own custom object
           | oriented C language that runs faster than C++ itself, but
           | that's not news - people were saying that in 1995 (at least).
           | The maintainability hit isn't worth it.
        
             | pmarin wrote:
              | This is not his own custom object-oriented C language; it
              | is just a very well-known idiom for implementing
              | polymorphic functions in C.
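              | 
              | For reference, the idiom in question is roughly a type tag
              | plus a switch over it (a minimal sketch, valid as both C
              | and C++):
              | 
              |     enum ShapeType { SQUARE, CIRCLE };
              | 
              |     struct Shape {
              |         enum ShapeType type;
              |         float width;  /* side or radius */
              |     };
              | 
              |     float Area(const struct Shape* s) {
              |         switch (s->type) {
              |         case SQUARE:
              |             return s->width * s->width;
              |         case CIRCLE:
              |             return 3.14159f * s->width * s->width;
              |         }
              |         return 0.0f;
              |     }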
        
           | WatchDog wrote:
           | Most of these websites have neither clean code nor hyper
           | optimisation, just bad code all round.
        
           | onion2k wrote:
            | For what it's worth, less than 4% of websites use React
            | (approximately 4% use _any_ JS framework). If you believe
            | the web is slow because of React you are wrong. It's not
            | even due to JS.
        
             | gratitoad wrote:
             | I'm curious, where do you get those numbers from? Those are
             | shockingly low numbers and don't align with my
             | observations, but I also don't have hard figures to back
             | them up.
        
             | ramesh31 wrote:
             | >It's not even due to JS.
             | 
             | Oh it definitely is. But no single framework (or
             | frameworks) are to blame. It's the CMS. The thing that
             | allows every Tom, Dick, and Harry at the company to drop
             | their little snippet for this or that which adds up to a
             | mountain of garbage over time, half of which is disused or
             | forgotten.
        
             | valzam wrote:
              | 4% of what? I would wager a guess that close to 100% of the
              | Alexa top 1000 websites use a JS framework, and a
              | significant portion of those use something heavy like React
              | or Angular.
        
             | [deleted]
        
           | simplotek wrote:
           | > That's really not even close to true. Loading random
           | websites frequently costs multiple seconds worth of (...)
           | 
           | You attempted to present an argument that's a textbook
            | example of a hyperbolic fallacy.
           | 
           | There are worlds of difference between "this code does not
           | sit in a hot path" and "let's spend multiple seconds of local
           | processing time".
           | 
            | This blend of specious reasoning is the reason why the first
            | rule of software optimization is "don't". Proponents of
            | mindlessly chasing the 1% edge cases fail to understand
            | that the world is mostly composed of the 99% of cases where
            | shaving off that millisecond buys you absolutely nothing,
            | at the cost of producing unmaintainable code.
           | 
           | The truth of the matter is that in 99% of the cases there is
           | absolutely no good reason to run after these relatively large
           | performance improvements if in the end the user notices
            | absolutely nothing. More so if you're writing async code that
           | stays far away from any sort of hot path.
        
             | loup-vaillant wrote:
             | I realised when I implemented EdDSA for Monocypher that
             | optimisations compound. When I got rid of a bottleneck, I
             | noticed that another part of the code was the new
             | bottleneck, and some of the optimisations compounded
             | _multiplicatively_. It took many changes before I finally
             | started to hit diminishing returns and stop. All while
             | restricting myself to standard C99, and trying fairly hard
             | not to spend too many lines of code on this.
             | 
             | My point being, if most of the program is slowed down by a
             | slew of wasted CPU cycles (costly abstractions, slow
             | interpreted language...), there's a good chance what should
              | have been obvious bottlenecks get drowned in a sea of
             | underperformance. They're harder to spot, and fixing them
             | doesn't change much.
             | 
             | So before you even _get_ to actual optimisation, your
             | program should be fast enough that actual optimisations
             | have a real impact. And yes, actual optimisation should be
             | done quite rarely. But first, we need to make sure our
              | programs aren't as slow as molasses. See
             | https://www.youtube.com/watch?v=pgoetgxecw8
        
             | heisenbit wrote:
             | How does an organization which has been built on 99% not
             | doing optimization recognize the one percent where it
             | matters a lot?
        
               | eropple wrote:
               | In my experience: a profiler, usually. Just because I
                | _can_ throw down a lot of code quickly doesn't mean I
               | don't have the tools to analyze code when I go "hmm, that
               | seems slow".
        
           | PragmaticPulp wrote:
           | > That's really not even close to true. Loading random
           | websites frequently costs multiple seconds worth of local
           | processing time
           | 
           | Unless we're talking about specific compute-intensive
           | websites, this is almost certainly network loading latency.
           | 
            | Modern web browsers are very fast. Modern CPUs are very
           | fast. Common, random websites aren't churning through
           | "multiple seconds" of CPU time just to render.
        
             | fvdessen wrote:
              | I once optimised a SPA that had to be really fast for
              | usability reasons (industrial use). I replaced all the
              | 'high level' JS patterns such as map, filter, and frontend
              | framework things with plain if/else, for loops, and
              | native DOM manipulation, and it ended up more than 10x
              | faster; each click would update the app in one frame, and
              | it was very noticeable. So yes, CPU cycles do matter for
              | websites, even with modern hardware. However, the code was
              | more verbose and needed a lot more technical know-how to
              | understand and maintain.
        
             | gary_0 wrote:
             | I just loaded cnn.com while looking at the CPU utilization
             | graph: 50% use of my 8 logical cores at 4.5GHz for the
             | better part of a second. So no, it's not just network
             | latency. Doing multiple parallel network requests, parsing,
             | doing layout and applying styles, running scripts, decoding
             | media... a modern website and browser _devours_ CPU time.
        
               | gowld wrote:
               | AdBlocker on or off?
        
               | smoyer wrote:
               | lite.cnn.com
        
               | jiggawatts wrote:
                | Jira Cloud famously takes 30-60 seconds to load, even on
                | a fairly high-end laptop, which is just staggering. I can
                | install
               | an entire operating system into a virtual machine in that
               | time.
        
               | datavirtue wrote:
               | So those who visit Fox and CNN are helping to destroy the
                | planet. Or is it the developers' fault? Can't be the
               | developers.
        
               | replygirl wrote:
               | one of them must be to blame! readers and publishing
               | engineers famously sit atop corporate decision-making
               | hierarchies, and no one else in a mass media enterprise
               | ever did anything wrong, certainly not before the web was
               | a thing
        
           | rodrigobellusci wrote:
           | Since you bring up React in your example, which framework
           | should one use to build better performing web apps?
           | 
            | I know React tends to be lacking in both dev UX and
            | performance (at least in my experience). Personally I've
            | taken a look at Svelte and
           | Solid, and liked them both. I haven't had the chance to build
           | anything larger than a toy app, though.
        
             | datavirtue wrote:
             | I recently tried Vue 3 at a startup for a new app. A few
             | days after starting it I rewrote a personal Svelte app in
             | Vue 3 since I found it so fluid (Composition API w/ script
             | setup). I was liking Svelte before that.
        
             | antoineMoPa wrote:
             | Vue could also be an option, but I personally want to learn
             | some Solid, as I see it could be preferred by the current
             | mass of React frontend developers, more than Svelte and
             | Vue. The syntax and philosophy of Solid looks closer to
             | React, while having a stronger focus on performance.
        
             | jay_kyburz wrote:
             | You should just try working directly in HTML.
        
               | eropple wrote:
               | I look forward to returning to the days where everybody
               | wrote their own slightly-to-significantly-wrong state
               | management tooling while being distracted by the minutiae
               | of DOM wrangling. That was a good time.
               | 
               | (It was not. It was why I stopped doing frontend work.)
        
         | matheusmoreira wrote:
         | Most software can barely keep up with the display's refresh
          | rate due to its unacceptably high latency.
        
         | greysphere wrote:
         | >Look, most modern software is spending 99.9% of the time
         | waiting for user input, and 0.1% of the time actually
         | calculating something.
         | 
          | All that says is you should focus your energy on increasing
          | the value of that 0.1%. It's not actually an argument to not
          | spend any energy.
         | 
         | It's like saying 'Astronauts only spend .1% of their time in
         | space' or 'tomatoes only spend .1% of their existence being
         | eaten' - that .1% is the whole point.
         | 
         | You can debate how best to maximize that value, more features
         | or more performance. The OP is suggesting folks are just
         | leaving performance on the floor and then making vacuous
         | arguments to excuse it.
        
         | bakugo wrote:
         | > Look, most modern software is spending 99.9% of the time
         | waiting for user input, and 0.1% of the time actually
         | calculating something
         | 
         | It's because of this mentality that almost all desktop software
          | nowadays is bloated garbage that needs 2 GB of RAM and a 5 GHz
         | CPU to perform even the most basic task that could be done with
         | 1/100th of the resources 20 years ago.
        
           | doodlesdev wrote:
            | No, it's not because of this mentality. Remember the timeless
            | quote:
            | 
            |     "We should forget about small efficiencies, say about 97%
            |     of the time: premature optimization is the root of all
            |     evil. Yet we should not pass up our opportunities in that
            |     critical 3%."
           | 
            | Using Electron and bloated frameworks that "abstract" things
            | away is the biggest problem for most modern software, not the
            | fact that developers aren't shaving 10 CPU cycles off
            | somewhere. It's a fundamental issue of how the software is
            | made, not of what the code is. If you need to run a whole web
            | browser for your application, you have already lost; there is
            | no optimization that can save you there.
           | 
           | I think the sibling comment here in this thread shows what
           | some developers think. Electron is not reasonable.
           | "Developers" using Electron should be punished by CBT and
           | chinese torture.
           | 
            | Okay maybe that's too much. But I suggest a new quote:
            | 
            |     "We should forget about Electron, say about 100% of the
            |     time: Electron is the root of all evil. Yet we should not
            |     pass up our opportunities in rewriting everything in
            |     Rust."
        
           | bitwize wrote:
           | RAM and GPU are cheap. Most users aren't going to notice.
           | Meanwhile by choosing Electron, the developers were able to
           | roll out the app on Windows, Mac, and Linux at nearly zero
           | marginal cost per additional platform.
        
             | FridgeSeal wrote:
             | > Most users aren't going to notice.
             | 
             | And other lies you can tell yourself to sleep at night.
             | 
              | Most people notice. Very few have the capacity or power to
              | complain about it (or even realise they can). They accept
              | what they've been given, despite how awful it is, because
              | they have basically no other option.
             | 
             | A concrete example: my previous work had to use bitbucket
             | pipelines for our docker builds. My current work uses
             | GitHub actions. GH has my container half-built before I can
             | even click through to the page. Bitbucket took a good
             | minute to start. My complaints about bitbucket fell on deaf
             | ears in the business, and no amount of leaving feedback for
             | Atlassian to "please make builds faster" ever made the
              | slightest amount of difference. Every time MS Teams comes
              | up, it's met with complaints about performance (among
              | other things), so people definitely notice that. VS Code
             | gets celebrated for "actually having decent performance",
             | so the bar is so low that even moderate performance apps
             | receive high praise.
             | 
             | Users definitely notice performance, whether the
             | PM/business cares is a different matter, but we should stop
             | deceiving ourselves by saying it's alright because "users
             | won't notice or care".
        
         | Aeolun wrote:
         | I'm always glad to save nanoseconds on my code so that we get
         | the absolute best performance out of that 10s long call to the
         | legacy API.
        
         | ilyt wrote:
         | > Look, most modern software is spending 99.9% of the time
         | waiting for user input, and 0.1% of the time actually
         | calculating something. If you're writing a AAA video game, or
         | high performance calculation software then sure, go crazy, get
         | those improvements.
         | 
          | The CPU meter when clicking anything on a "modern" webpage
          | proves that's a lie.
          | 
          | Also, sure, even if "clicking on things" is maybe 1-5% vs
          | "looking at things", _THAT'S THE CRITICAL PATH_.
          | 
          | Once the app has rendered a view it is _obviously_ not doing
          | much, but the user is also not waiting on anything and is
          | "being productive", parsing whatever is displayed.
          | 
          | The critical path, the _wasted time_, is the time the app takes
          | to render stuff, and "but 99% of the time it's not doing that"
          | is irrelevant.
        
           | [deleted]
        
         | Cthulhu_ wrote:
          | An oft-recited rule of thumb: Make it work, make it pretty,
          | make it fast - in that order. That is, performance bottlenecks
          | are easier to find and fix if your code is clean to begin with.
         | 
         | I sometimes wish performance was an issue in the projects I
          | work on, but if it is, it's at a higher, architectural
         | level - things like a point-and-click API gateway performing
         | many separate queries to the SAP server in a loop with no
         | short-circuiting mechanism. That's billions of lines of code
         | being executed (I'm guessing) but the performance bottleneck is
         | in how some consultant clicked some things together.
         | 
         | Other than school assignments, I've never had a situation where
         | I ran into performance issues. I've had plenty of situations
         | where I had to deal with poorly written ("unclean") code and
         | spent extra brain cycles trying to make sense of it though.
        
           | scotty79 wrote:
           | I heard it as
           | 
           | make it work
           | 
           | make it work correctly
           | 
           | make it work fast
           | 
           | Pretty was never in the picture. But if anyone wants to add
           | it, it should come last.
        
             | treis wrote:
             | I like
             | 
             | Make it work
             | 
             | Make it good
             | 
             | Make it fast
        
         | drivers99 wrote:
         | It still matters because in your example it will affect how
         | smoothly the computer responds once it gets the user input.
        
           | xupybd wrote:
           | But how much does that matter?
           | 
           | If you're scaling to 1000s of users then yes. If you have a
           | GUI for a monthly task that two administrators use, then no.
           | 
           | The less something gets used the longer the payback time on
           | the initial development.
        
             | TeMPOraL wrote:
             | > _If you have a GUI for a monthly task that two
             | administrators use, then no._
             | 
             | Fine, but be honest with yourself and admit that you are
             | contributing a lot to making the lives of those two admins
             | miserable.
             | 
             | It doesn't matter if I'm using your software once a month,
             | or once a day. If it's anything like typical modern
             | software, it will make me hate the task I'm doing, and hate
             | you for making it painful. In fact, shitty performance may
             | be the very reason I'm using it monthly instead of daily -
             | because I reorganized my whole workflow around minimizing
             | the frequency and amount of time I have to spend using your
             | software.
        
             | layer8 wrote:
             | You're not wrong, but today's software is so slow/high
             | latency so often, despite incredibly powerful hardware,
             | that as a rule it should absolutely matter.
        
               | datavirtue wrote:
               | Maybe it's because they used AWS Lambda and API Gateway
               | for the API?
        
         | FooBarWidget wrote:
         | Not only time to market, but also maintainability.
         | 
         | In non-performance-critical areas, it's pretty important that
         | when the original dev team leaves, new hires can still fix bugs
         | and add features without breaking things.
        
           | the-smug-one wrote:
           | I don't see how the code snippets presented are less
           | maintainable.
        
         | AnIdiotOnTheNet wrote:
         | > Look, most modern software is spending 99.9% of the time
         | waiting for user input
         | 
         | If that's true, why does it take forever to load and frequently
         | fail to keep up with my input?
        
           | chpatrick wrote:
           | Often it's because of bad (quadratic) algorithms, not because
           | the code isn't micro-optimized. For example:
           | https://nee.lv/2021/02/28/How-I-cut-GTA-Online-loading-
           | times...
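            | 
            | The usual shape of these bugs (a sketch, not GTA's actual
            | code): a per-item linear rescan that is invisible with
            | test-sized inputs and quadratic in production.
            | 
            |     #include <string>
            |     #include <unordered_set>
            |     #include <vector>
            | 
            |     // O(n^2): every insert rescans the whole list. 10k
            |     // items is ~50 million comparisons.
            |     void AddUnique(std::vector<std::string>& items,
            |                    const std::string& s) {
            |         for (const auto& existing : items) {
            |             if (existing == s) return;
            |         }
            |         items.push_back(s);
            |     }
            | 
            |     // O(n) overall: the same check against a set.
            |     void AddUnique(std::unordered_set<std::string>& items,
            |                    const std::string& s) {
            |         items.insert(s);
            |     }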
        
             | morelisp wrote:
             | Accidentally super-linear algorithms happen a lot more when
             | you hide stuff behind three layers of interfaces rather
             | than seeing "oh, this uses a list instead of a set".
        
               | jstimpfle wrote:
               | True story. And after taking a couple jobs optimizing
               | some messy software modules I've learned to not take
               | those jobs anymore. The only solution is to throw the
               | crap out of the window and to start afresh with a clean
               | understanding of what we want to achieve. A simple and
               | straightforward solution without BS abstractions is not
               | only much faster but also much less bug-ridden.
        
             | AnIdiotOnTheNet wrote:
             | It isn't about "micro-optimization", that's just what bad
             | developers use as an excuse for never caring at all about
             | performance. Casey uses the term "depessimization" to
             | describe the process of making a program not run like shit.
             | 
             | Modern computers are ludicrously fast, and modern
             | developers have somehow managed to make them slow.
             | 
             | Regardless, these programs _should_ be spending 99% of
             | their time waiting for user input, but instead they're
             | working data through a mountain of abstraction layers on
             | the faulty assumptions that a) this saves developers'
             | time, and b) developer time is worth much more than user
             | time.
        
         | scotty79 wrote:
         | What does it matter if it does this 0.1% ten times slower
         | than it could? Then the user will have to wait for the
         | software, which slows down the most expensive component of
         | the whole work setup: the human.
        
         | ajmurmann wrote:
         | Don't forget the 90% of the processing time that it's waiting
         | for a DB response
        
           | FridgeSeal wrote:
           | I'd like to work in one of these teams/products where the
           | database is the bottleneck. Basically everywhere I've worked,
           | I've had to work with backends that have been foot-draggingly
           | slow compared to the actual database.
        
           | TeMPOraL wrote:
           | Except when it isn't. I remember an article making rounds the
           | other day, that claimed the whole "most software spend most
           | time waiting on I/O" common wisdom is no longer true, as most
           | software these days is CPU-bound, and a good chunk of that is
           | parsing JSON.
        
           | ilyt wrote:
           | Eh, that heavily depends on language and dataset you're
           | working with. I've seen "simple" data with some fat thing
           | like RoR on top of it having 10x the latency of the
           | underlying database after all the ORMing.
        
         | sarchertech wrote:
         | >Clean code optimizes for improving time-to-market for those
         | features
         | 
         | Does it though? Where's the evidence for it? The vast majority
         | of people I've worked with over the last couple decades who
         | like to bring up "clean code", tend towards the wrong
         | abstractions and over abstracting.
         | 
         | I almost always prefer working with someone who writes the
         | kind of code Casey was writing than with someone who follows
         | the clean code examples I've spent my career dealing with.
         | I've seen and
         | worked with many examples of Data Oriented Design that were far
         | from unmaintainable or unreadable.
        
           | grumpyprole wrote:
           | Completely agree. These rules simply do not lead to better
           | outcomes in all cases. Looking at the rules and playing
           | Devil's advocate for fun:
           | 
           | > Prefer polymorphism to "if/else" and "switch"
           | 
           | Algebraic data types and pattern matching (a more general
           | version of switch) make many types of data transformation
           | far easier to understand and maintain, versus e.g. the
           | visitor pattern, which uses ad-hoc polymorphism (see the
           | sketch at the end of this comment).
           | 
           | > Code should not know about the internals of objects it's
           | working with
           | 
           | This is interpreted by many as "don't expose data types".
           | Actually some data types are safe to expose. We have a JSON
           | library at work where the actual JSON data type is kept
           | abstract and pattern matching cannot be used. This is despite
           | the fact that JSON is a published (and stable) spec and
           | therefore already exposed!
           | 
           | > Functions should be small
           | 
           | "Small" is a strange metric to optimise for, which is why I
           | don't like Perl. Functions should be readable and easy to
           | reason about. Let's optimise for "simple" instead.
           | 
           | > Functions should do one thing
           | 
           | This is not always practical or realistic advice. Most
           | functions in OOP languages are procedures that will likely
           | perform side effects in addition to returning a result (e.g.
           | object methods). Should we also not do logging? :)
           | 
           | > "DRY" - Don't Repeat Yourself
           | 
           | The cost of any abstraction must be weighed up against the
           | repetition on a case-by-case basis. For example, many
           | languages do not abstract the for-loop and effectively
           | encourage users to write it out over and over again, because
           | they have decided that the cost of abstracting it (internal
           | iteration using higher-order functions) is too high.
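           | 
           | For concreteness, a rough sketch of the ADT-and-matching
           | style in C++ terms, using std::variant as a stand-in for a
           | proper algebraic data type (the shape types here are
           | illustrative, not taken from the video):
           | 
           |     #include <variant>
           | 
           |     struct Square    { float Side; };
           |     struct Rectangle { float Width, Height; };
           |     struct Circle    { float Radius; };
           | 
           |     using Shape = std::variant<Square, Rectangle, Circle>;
           | 
           |     // One overload per case. std::visit requires the
           |     // visitor to handle every alternative, so adding a
           |     // shape to the variant without an overload is a
           |     // compile error: closed, exhaustive matching.
           |     struct AreaOf {
           |         float operator()(const Square& s) const
           |             { return s.Side * s.Side; }
           |         float operator()(const Rectangle& r) const
           |             { return r.Width * r.Height; }
           |         float operator()(const Circle& c) const
           |             { return 3.14159265f * c.Radius * c.Radius; }
           |     };
           | 
           |     float Area(const Shape& s) {
           |         return std::visit(AreaOf{}, s);
           |     }
           | 
           | The same dispatch written as a class hierarchy with a
           | virtual Area() is the "open" alternative: easier to extend
           | from outside, harder to enumerate and match on.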
        
             | rrobukef wrote:
             | my 2 cents: I don't see algebraic data types as strictly
             | superiour. It's just the other side of the polymorphic
             | coin: there is open and closed ploymorphism. Open
             | polymorphism happens with interfaces, inheritance, and
             | typeclasses- the number is unlimited. Closed happens with
             | ADTs - an enumeration of the cases.
             | 
             | Open is great for extensibility: libraries can be
             | precompiled, plugins are possible. Changes don't propagate
             | - it is ""forward compatible"" which is great for
             | maintaining the code.
             | 
             | Closed on the other hand is great for matching. Finite is
             | predictable & faster. Finite is self-contained and self-
             | describing because it exposes the data types without shame.
             | 
             | The purpose of the visitor pattern is now clear: it closes
             | the open polymorphism for a finite set. Great, now we only
             | need one kind and we still get matching. Or is it the worst
             | of both worlds? Slow & incomprehensible and all changes
             | propagate everywhere.
             | 
             | So which one is better? Neither. But the reality is that
             | all old imperative languages with polymorphism chose the
             | open kind - the kind that adds more features because it was
             | needed for shared libraries. Leaving you to build any
             | pattern matching yourself and to burn yourself with the
             | unmaintainable code.
             | 
             | If people get burned, they learn. First they say "don't
             | do that", and only then do they replace the gas stove
             | with induction.
        
               | grumpyprole wrote:
               | > So which one is better? Neither.
               | 
               | I agree. My point was only that "Clean Code" tries to
               | claim one is always better.
        
           | Gluber wrote:
           | You are conflating three separate skills. Finding the
           | right abstractions is an art, whether you write clean code
           | or not. Writing high-performance code is another art.
           | 
           | A really good developer writes clean code using the right
           | abstraction (finding those tends to take the most time and
           | experience) and drops down to a different level of
           | abstraction for high-performance areas where it makes
           | sense.
           | 
           | The fact that bad developers suck and write bad code
           | whether or not they use clean code does not reflect on the
           | methodology.
        
             | datavirtue wrote:
             | [dead]
        
             | sarchertech wrote:
             | If there are no hard measurements I can use to determine
             | the value of "clean code", then I fall back on the
             | results produced by people who say they are writing
             | clean code.
             | There is no realistic way to objectively measure coding
             | styles completely isolated from the people writing the
             | code.
             | 
             | I personally haven't seen value from that coding style.
             | There may be some platonic ideal clean code that is better
             | than other methodologies in theory--it is likely that my
             | sample is biased--but from what I've seen, the clean code
             | style tends to lead most developers towards over
             | abstraction.
        
         | overgard wrote:
         | I would argue that ignoring performance, a lot of "clean" code
         | isn't really that much clearer and more maintainable at all (At
         | least by the Robert Martin definition of "Clean Code"). Things
         | like dependency injection and runtime polymorphism can make it
         | really hard to trace exactly what happens in what sequence in
         | the code, and things like asynchronous callbacks can make
         | things like call stacks a nightmare (granted, you do need the
         | latter a lot). Small functions can make code hard to understand
         | because you need to jump to a lot of places and lose the thread
         | on the sequential state (bonus points if the small functions
         | are only ever used once by one other function). The more I work
         | in code bases the more I find that overusing these "clean"
         | ideas can obscure the code more than if it was just written
         | plainly. I think a lot of times, if a technique confuses
         | compiler optimization and static code analysis, it's probably
         | going to confuse humans also.
        
         | randomdata wrote:
         | _> I think the author is taking general advice_
         | 
         | Not just general advice, but advice that is meant to be applied
         | to TDD specifically. The principles of clean code are meant to
         | help with certain challenges that arise out of TDD. It is often
         | going to seem strange and out of place if you have rejected
         | TDD. Remember, clean code comes from Robert C. Martin, who is a
         | member of the XP/Agile gang.
         | 
         | The author cherry-picking one small piece out of what Uncle Bob
         | promotes and criticizing it in isolation without even
         | mentioning why it is suggested with respect to the bigger
         | picture seemed disingenuous. It does highlight that testable
         | code may not be the most performant code, but was anyone
         | believing otherwise? There are always tradeoffs to be made.
        
       | andix wrote:
       | Don't optimize early. 99% of code doesn't have to be fast, it has
       | to be right. And as code needs to be maintained, it also needs to
       | be easy to read/change, so it stays right.
       | 
       | You shouldn't do things that make your code utterly slow though.
        
       | totorovirus wrote:
       | I think juniors should write clean code until they can negotiate
       | the technical debt against the speed
        
       | gumby wrote:
       | These days the cost of a programmer is probably a lot greater
       | than the cost of execution, so some of these rules ("prefer
       | polymorphism") are likely worth the tradeoff.
        
         | CyberDildonics wrote:
         | Cost of execution to who? If you don't care about the speed of
         | your program when I execute it, we end up with electron based
         | VPN GUIs that have menus that run at a few frames per second or
         | electron based disk formatters that are a 400 MB download to
         | ultimately run a command line process.
         | 
         | If you don't care about execution speed, I don't want to use
         | it.
        
       | gregjor wrote:
       | Brilliant. Thanks.
       | 
       | tl;dr polymorphism, indirection, excessive function calls and
       | branching create a worst-case for modern hardware.
        
       | flavius29663 wrote:
       | One of the the points of "clean code" is to make it easy to find
       | the hotspots and optimize those. Write the codebase at a very
       | high level, plenty of abstractions etc. and then optimize the 1%
       | that really needs it. Optimizing a small piece of software is
       | not going against clean code; on the contrary, it reinforces
       | it: you can spend the time to optimize only what is necessary.
        
       | fer wrote:
       | Unrelated to the content itself, am I the only one wondering if
       | he has his t-shirt mirrored or if he's really skilled at writing
       | right-to-left?
       | 
       | Content wise: his examples show such increases because they're
       | extremely tight and CPU-bound loops. Not exactly surprising.
       | 
       | While there will be gains in in larger/more complex software by
       | throwing away some maintainability practices (I don't like the
       | term "clean code"), they will be dwarfed by the time actually
       | spent on the operations themselves.
       | 
       | Just toss a 0.01ms I/O operation into those loops; it will
       | throw the numbers off by a large margin, and then one would
       | rather pick sanity over the speed gains without blinking.
       | 
       | That said, if a code path is hot and won't change anytime soon,
       | by all means optimize away.
       | 
       | Edit: the upload seems to have been deleted.
        
         | vblanco wrote:
         | It's a custom-made mirrored t-shirt; he's writing on one of
         | those lightboards and the footage is then mirrored.
        
         | mariusor wrote:
         | He created the t-shirts specially for StarCode Galaxy[1],
         | which is a longer-form class on C++ programming in the same
         | vein as the current video, but with a much wider scope. (As
         | far as I know SCG is not released yet.)
         | 
         | As a small, amusing, s(n)ide note, Casey's rants about the
         | slowness of the Windows terminal[2] that ended up in Microsoft
         | releasing an improved version[3], were based on him wanting to
         | implement a TUI game as an exercise in SCG and the terminal
         | being too slow.
         | 
         | [1] https://starcodegalaxy.com/ [2]
         | https://news.ycombinator.com/item?id=31284419 [3]
         | https://news.ycombinator.com/item?id=31372606
        
       | boredumb wrote:
       | I've seen clean code lead to over-architected and unmaintainable
       | nightmares that ended up with incorrect abstractions that became
       | much more of a problem than performance.
       | 
       | The more the years pile up, the more I agree with the
       | sentiment in this post. By generally going for something that
       | works and is about as optimal as the code I would get if I
       | came back later to make it more performance-oriented, I end up
       | with something generally as simple as I can get. Languages and
       | libs are abstract enough in most cases, and any extra design
       | patterning and abstracting is generally going to bite you in
       | the ass more than it's going to save you from business folks
       | coming in with unknown features.
       | 
       | I suppose, write code that is conscious of its memory and CPU
       | footprint and avoid trying to guess what features may or may not
       | reuse what parts from your existing routines, and try even harder
       | to avoid writing abstractions that are based on your guesses of
       | the future.
        
         | commandlinefan wrote:
         | > I've seen clean code lead to over-architected and
         | unmaintainable nightmares
         | 
         | I agree, but surely you wouldn't really say that his end-state
         | in this particular article is _more_ maintainable than the
         | starting point.
        
       | unconed wrote:
       | The example of using shape area seems like a poor choice.
       | 
       | First off, the number of problems where having an analytical
       | measure of shape area is important is pretty small by itself.
       | Second, if you do need to calculate area of arbitrary shapes,
       | then limiting yourself to formulas of the type `width * height *
       | constant` is just not going to cut it. And this is where the
       | entire optimization exercise eventually leads: to build a table
       | of precomputed areas for affinely transformed outlines.
       | 
       | Throw in an arbitrary polygon, and now it has to be O(n). Throw
       | in a bezier outline and now you need to tesselate or integrate
       | numerically.
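       | 
       | For reference, the O(n) case alluded to here is roughly the
       | shoelace formula (illustrative sketch, not code from the
       | video):
       | 
       |     #include <cmath>
       |     #include <cstddef>
       |     #include <vector>
       | 
       |     struct Pt { double x, y; };
       | 
       |     // Shoelace formula: area of a simple (non-self-
       |     // intersecting) polygon from its vertices in order.
       |     // Inherently a loop over n vertices; there is no
       |     // per-shape "width * height * constant" to tabulate.
       |     double PolygonArea(const std::vector<Pt>& v) {
       |         double twice = 0.0;
       |         for (std::size_t i = 0, n = v.size(); i < n; ++i) {
       |             const Pt& a = v[i];
       |             const Pt& b = v[(i + 1) % n];
       |             twice += a.x * b.y - b.x * a.y;
       |         }
       |         return std::fabs(twice) * 0.5;
       |     }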
       | 
       | What this article really shows is actually what I call the curse
       | of computer graphics: if you limit your use cases to a very
       | specific subset, you can get seemingly enormous performance gains
       | from it. But just a single use case, not even that exotic, can
       | wreck the entire effort and demand a much more complex solution
       | which may perform 10x worse.
       | 
       | Example: you want to draw lines? Easy, two triangles! Unless you
       | need corners to look good, with bevels or rounded joins, and
       | every pixel to only be painted once.
       | 
       | Game devs like to pride themselves on their performance chops,
       | but often this is a case of, if not the wrong abstraction, at
       | least too bespoke an abstraction to allow future reuse
       | irrespective of the use case.
       | 
       | This leads to a lot of dickswinging over code that is, after
       | sufficient encounters with the real world, and sufficient
       | iterations, horrible to maintain and use as a foundation.
       | 
       | So caveat emptor. Framing this as a case of clean vs messy code
       | misses the reason people try to abstract in the first place. OO
       | and classes have issues, but performance is not the most
       | important one at all.
        
         | Olreich wrote:
         | To summarize: if you make the problem complex enough, then the
         | specific performance methods used in the article don't work.
         | But hey, why not take your complex example? If you apply a
         | polymorphic approach to Bezier curves and polygons, then you
         | still get a 1.5x slowdown compared to a switch statement. If
         | you have any commonality between your implementations of area
         | for them, it's harder to find, which could be worth 2x or more.
         | If there is a common algorithm for calculating all polygons and
         | curves, wasting work for simple shapes, but vastly improving
         | cache and predictor performance, then you could be leaving
         | another 5x on the table. Orienting the code around operations
         | instead of types is still a win for performance with similar
         | cognitive load compared to type hierarchies.
         | 
         | You're right that once you've done the 15x performance gain
         | that Casey demonstrates, the code is pretty brittle and prone
         | to maintenance problems if the requirements change a lot. But I
         | think we can have our cake and eat it too by maintaining our
         | code with the simple path available to fall back to if we get a
         | complicated new requirement. Need to add in complex cases that
         | weren't thought of before? Add them to new switch cases that
         | are slower, and then keep looking for more performant ways of
         | calculating things if need be.
        
           | TeMPOraL wrote:
           | > _You're right that once you've done the 15x performance
           | gain that Casey demonstrates, the code is pretty brittle and
           | prone to maintenance problems if the requirements change a
           | lot._
           | 
           | In some sense, it is a _good_ thing. It creates natural
           | backpressure to scope creep.
           | 
           | The "Clean Code" developer can add a new complex shape type
           | to their program in 15 minutes; just subclass here, implement
           | the methods, done, no changes to other code. No performance
           | impact - the program remains as badly performant as it was
           | before.
           | 
           | The Casey-style developer will take 15 minutes and come back
           | to you with:
           | 
           | "Oh, we can make the calculations for this new shape
           | approximate to, idk. +/- 50%, by adding another parameter to
           | the common equation; this will, however, cause 1.1x
           | performance drop across the board. We could make it accurate
           | by special-casing, at the cost of 2-3x perf drop for the
           | whole app. We can also spend a person-week looking into
           | relevant branch of mathematics to see if there isn't a
           | different equation that could handle the new shape accurately
           | with no performance penalty."
           | 
           | "Or, you know, we could just _not do it at all_. _Why
           | exactly_ do we need to support this new shape? Is supporting
           | it worth the performance drop?"
           | 
           | Whichever option the developer and their team chooses, their
           | software will _still_ remain an order of magnitude more
           | performant than the "Clean Code" style. Which is to say,
           | those devs are aware of the costs - the "Clean Code" style is
           | so ridiculously wasteful, as to not even notice such costs in
           | the first place.
        
       | fredrikholm wrote:
       | I've worked in projects where no one seemed to know SQL, where
       | massive speed improvements were made by fixing very low hanging
       | fruits like removing select * queries, adding naive indexes,
       | removing N+1 queries etc.
       | 
       | Likewise, I've worked in code bases where performance had been
       | dreadful, yet there were no obvious bottlenecks. Little by
       | little, replacing iterators with loops, objects/closures with
       | enum-backed structs/tables, adding early exits and so on
       | accumulated to the point where speed-ups ranged from 2X to 50X
       | without changing
       | algorithms (outside of fixing basic mistakes like not pre
       | allocating vectors).
       | 
       | Always fun to see these videos. I highly recommend his
       | `Performance Aware Programming` course linked in the description.
       | It's concise and to the point, which is a nice break from his
       | more casual videos/streams which tend to be long-winded/ranty.
        
         | dgb23 wrote:
         | I can very much relate to this.
         | 
         | Just taking the little bit of time to think about what the
         | computer needs to do and making a reasonable effort to not do
         | unnecessary stuff goes a long way. That 2x-50x factor is in
         | fact very familiar. That's something loading in a second rather
         | than in a minute, or something feeling snappy instead of
         | slightly laggy.
         | 
         | And it matters much more than people say it does. The
         | "premature optimisation..." quote has been grossly misused to a
         | degree that it's almost comical. It's not a good excuse for
         | being careless.
        
           | jay_kyburz wrote:
           | One thing I find frustrating in game development is we often
           | put off optimization until the end of the project when we
           | know we are happy with the game. But that means _I_ have to
           | live with a slow code base for 2 years.
           | 
           | It takes 19 seconds for the main menu to load when you push
           | play on our current game in Unity. It's killing me.
           | 
           | Meanwhile in my Lua side project, it's less than a second.
        
         | pocket_cheese wrote:
         | Could you elaborate what you mean by "naive indexes"?
        
           | dgb23 wrote:
           | Not GP. But you can often anticipate where indexes have to go
           | for common queries.
           | 
           | There might be some important columns, like say a "status" or a
           | "date", which are fundamental to a lot of queries.
           | 
           | Or you have columns X and Y being used frequently or
           | importantly in where clauses together, then that's a
           | candidate for composite indexes.
           | 
           | Stuff like that.
        
           | fabrice_d wrote:
           | In general that means looking at the sql query plan for a
           | slow query, and adding appropriate indexes when there are
           | full table scans.
        
           | ocimbote wrote:
           | I assume "naive" means "simple, basic" here. As in "index
           | this column. Period".
        
       | [deleted]
        
       | tialaramex wrote:
       | Malicious compliance for C++ programmers. This is the person who
       | thinks they're clever for breaking stuff because "You didn't say
       | not to". Managing them out of your team is likely to be the
       | biggest productivity boost you can achieve.
       | 
       | In the process of "improving" the performance of their arbitrary
       | benchmark they make the system into an unmaintainable mess. They
       | can persuade themselves it's still fine because this is only a
       | toy example, but notice how e.g. squares grow a distinct height
       | and width early in this work which could get out of sync even
       | though that's not what a "square" is? What's that for? It made it
       | easier to write their messy "more performant" code.
       | 
       | But they're not done. When they "imagine" that somehow the
       | program now needs to add exactly a feature which they can
       | implement easily with their spaghetti, they present it as "giving
       | the benefit of the doubt" to call two virtual functions via
       | multiple indirection but in fact they've _made performance
       | substantially worse_ compared to the single case that clean code
       | would actually insist on here.
       | 
       | There are two options here, one is this person hasn't the
       | faintest idea what they're doing, don't let them anywhere near
       | anything performance sensitive, or - perhaps worse - they know
       | exactly what they're doing and they intentionally made this
       | worse, in which case that advice should be even stronger.
       | 
       | Since we're talking about clean code here, a more useful example
       | would be what happens if I add two more shapes, let's say
       | "Lozenge w/ adjustable curve radius" and "Hollow Box" ? Alas, the
       | tables are now completely useless, so the "performant" code needs
       | to be substantially rewritten, but the original Clean style suits
       | such a change just fine, demonstrating why this style exists.
       | 
       | Most of us work in an environment where surprising - even
       | astonishing - customer requirements are often discovered during
       | development and maintenance. All those "Myths programmers believe
       | about..." lists are going to hit you sooner or later. As a result
       | it's very difficult to design software in a way that can
       | accommodate new information rather than needing a rewrite, and
       | yet since developing software is so expensive that's a necessary
       | goal. Clean coding reduces the chance that when you say "Customer
       | said this is exactly what they want, except they need a Lozenge"
       | the engineers start weeping because they've never imagined the
       | shape might be a lozenge and so they hard coded this "it's just a
       | table" philosophy and now much of the software must be rewritten.
       | 
       | Ultimately, rather than "Write code in this style I like, I
       | promise it will go fast" which is what you see here, and from
       | numerous other practitioners in this space, focus more on data
       | structures and algorithms. You can throw away a _lot_ more than a
       | factor of twenty performance from having code that ends up N^3
       | when it only needed to be N log N or that ends up cache thrashing
       | when it needn't.
       | 
       | One good thing in this video: They do at least measure. Measure
       | three times, mark twice, cut only once. The engineering effort to
       | actually make the cut is considerable, don't waste that effort by
       | guessing what needs changing, measure.
        
       | kingcai wrote:
       | I like this post a lot, even if it's a somewhat contrived
       | example. In particular I like his point about switch statements
       | making it easier to pull out shared logic vs. polymorphic code.
       | 
       | There's so much emphasis on writing "clean" code (rightly so)
       | that it's nice to hear an opposing viewpoint. I think it's a good
       | reminder to not be dogmatic and that there are many ways to solve
       | a problem, each with their own pros/cons. It's our job to find
       | the best way.
        
       | Semaphor wrote:
       | Here is the link as an article for people like me who don't like
       | watching videos: https://www.computerenhance.com/p/clean-code-
       | horrible-perfor...
        
       | flippinburgers wrote:
       | "Clean code", "Clean architecture", "Clean etc", they are all
       | totally grotesque and the sign of incompetence.
        
       | ctx_matters wrote:
       | As always in programming discussions - context matters.
       | 
       | https://trolololo.xyz/programming-discussions
        
       | davnicwil wrote:
       | I don't think there is a contradiction or surprising point here.
       | 
       | At least my understanding of the case for clean code is that
       | developer time is a significantly more expensive resource than
       | compute, therefore write code in a way which optimises for
       | developers understanding and changing it, even at the expense of
       | making it slower to run (within sensible limits etc etc).
        
         | AnIdiotOnTheNet wrote:
         | I think this is hilarious when talking about someone like
         | Muratori, who has had his hands in more games' code than most
         | people on this site have ever even played. He is incredibly
         | productive by comparison to all the people making excuses why
         | their programs are slow bloated garbage fires.
        
         | kps wrote:
         | The context for the author is game development. A game can't
         | just spin up a few more servers in the cloud. The slower the
         | code, the smaller the customer base. Some fields have more
         | severe constraints, where developer time may be expensive, but
         | compute is priceless.
         | 
         | (This is not to endorse writing code that is harder than
         | necessary to understand, or failing to document the parts that
         | are necessarily hard.)
        
         | pkolaczk wrote:
         | > developer time is a significantly more expensive resource
         | than compute
         | 
         | Depends on the number of invocations of the program.
        
           | MonkeyClub wrote:
           | Not just that.
           | 
           | The "developer time is valuable" mantra is thrown left and
           | right, disregarding how much of that valuable resource will
           | be wasted down the line due to bad implementations.
           | 
           | If we optimize for developer time, let's optimize across the
           | software's entire lifecycle, not just that first push of a
           | MVP to production.
        
             | cinntaile wrote:
             | This is a good point but it gets complicated as you often
             | don't know the software's entire lifecycle so you have to
             | optimize for something slightly different.
        
               | MonkeyClub wrote:
               | Yep, I think you're right.
               | 
               | We just need to do away with shoddy first
               | implementations, while avoiding premature optimization
               | (or pointless perfectionism).
               | 
               | Which is all good in theory. In practice, management
               | needs to also acquiesce and stop with the impossible
               | deadlines.
               | 
               | I mean, I've worked on more than one project that was
               | sold to clients before it was implemented, and I think
               | I'm not unique in that :)
        
         | zaphodias wrote:
         | Yeah you're right, nobody ever said to me "use interfaces
         | because the code is faster".
         | 
         | On the other hand, when I studied dynamic dispatch and stuff
         | like that, I don't think enough people told me "when you do
         | that, you are making _this_ tradeoff".
         | 
         | I feel like it's worth sharing this kind of knowledge in order
         | to make better informed decisions (possibly based on numbers).
         | There's no need to become an extremist in either direction.
        
           | jesstaa wrote:
           | The trade off has changed over time as memory access has
           | become a bigger and bigger bottleneck. But caring about this
           | is still "premature optimisation".
           | 
           | The claim of 20x program performance difference is overblown.
           | Compilers can often remove virtual function calls, JITs can
           | also do it at runtime. Virtual function calls in a tight loop
           | are slow but most of your program isn't in a tight loop and
           | few programs have compute as a bottleneck.
           | 
           | Measure your program, find the tight loops in your program
           | and optimise that small part of your program.
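           | 
           | As a sketch of that last step, a crude first-pass
           | measurement looks something like the following (a real
           | profiler such as perf or VTune gives a far fuller picture;
           | the names here are made up):
           | 
           |     #include <chrono>
           |     #include <cstdio>
           |     #include <ratio>
           | 
           |     // Wall-clock timing of a single code region.
           |     template <typename F>
           |     double TimeMs(F&& f) {
           |         using Clock = std::chrono::steady_clock;
           |         auto t0 = Clock::now();
           |         f();
           |         auto t1 = Clock::now();
           |         return std::chrono::duration<double,
           |             std::milli>(t1 - t0).count();
           |     }
           | 
           |     int main() {
           |         double ms = TimeMs([] {
           |             // stand-in for the suspected tight loop
           |             volatile double x = 0;
           |             for (int i = 0; i < 10000000; ++i)
           |                 x = x + i;
           |         });
           |         std::printf("hot loop: %.2f ms\n", ms);
           |     }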
        
         | goodlinks wrote:
         | Externalised cost on the environment will hopefully one day be
         | addressed.
        
           | TeMPOraL wrote:
           | Not to mention the users. For all the bullshit the marketing
           | departments of every company spew about valuing their
           | customers, software companies don't really give a damn about
           | the many person-hours of users' lives they waste to save a
           | person-minute of dev time.
        
             | davnicwil wrote:
             | Everything is a balance though, and the tradeoff may not be
             | linear. Where they've landed on it may be the only way to
             | get the software to customers _at all_ in a cost effective
             | manner.
        
       | scotty79 wrote:
       | Personally I think we'd be in a better spot today if instead of
       | class hierarchies and polymorphism the thing that would go
       | mainstream was Entity-Component-System approach with composition
       | instead of inheritance.
        
       | datadeft wrote:
       | TL;DR:
       | 
       | "Game developer optimizes code for execution as opposed to
       | readability that 'clean-code' people suggest".
       | 
       | There are a few considerations:
       | 
       | - most code is not CPU bound, so his claim that you are
       | eroding progress by not optimizing for CPU efficiency is
       | baseless
       | 
       | - writing readable code is more important than writing super
       | optimal code (few exceptions: gaming is one)
       | 
       | - using enums vs OOP is not changing the readability at least to
       | me
       | 
       | I think we can have fast and readable code without following the
       | 'clean-code' principles and at the end it does not matter how
       | much gain we have CPU cycle-wise.
        
         | ruicaridade wrote:
         | I'm tired of every tool I install on my latest gen Intel CPU +
         | 32GB RAM + NVMe drive machine being a complete slog.
         | 
         | To each their own, but I don't find Casey's performant version
         | less readable, I don't see the need for so many abstractions.
        
           | randomdata wrote:
           | "CLEAN" doesn't care so much about readability, but rather
           | testability. The video would have been far more compelling if
           | Casey had spent more time showing how he would test his
           | application without the test surface area blowing up
           | exponentially as the feature space expands.
        
           | sebstefan wrote:
           | >I don't find Casey's performant version less readable
           | 
           | It does create implicit coupling. If you try to add a new
           | shape you will run into the problem.
           | 
           | In the clean code version, your compiler will remind you to
           | implement calculateArea
           | 
           | With his version you have to add a new `case` to every switch
           | statement and hope you didn't miss one with a default case,
           | because the compiler won't catch this one.
           | 
           | It's a crap way to code
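           | 
           | Roughly the failure mode being described, as a sketch
           | (names are illustrative, not the video's actual code):
           | 
           |     // Polymorphic version: a new subclass that forgets
           |     // Area() cannot even be instantiated.
           |     struct shape_base {
           |         virtual ~shape_base() = default;
           |         virtual float Area() const = 0;  // pure virtual
           |     };
           |     struct lozenge : shape_base { };
           |     // shape_base* p = new lozenge{};  // compile error
           | 
           |     // Switch version: a forgotten case silently falls
           |     // into the default and gives a wrong answer at
           |     // runtime instead.
           |     enum shape_kind { Square, Rect, Circle, Lozenge };
           |     float Area(shape_kind k, float w, float h) {
           |         switch (k) {
           |             case Square: return w * w;
           |             case Rect:   return w * h;
           |             case Circle: return 3.14159265f * w * w;
           |             default:     return 0.0f;  // oops, Lozenge
           |         }
           |     }
           | 
           | (Compilers can warn about an unhandled enum value, but
           | typically only when the switch has no default case, which
           | is why the default case is singled out above.)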
        
             | Tenticles wrote:
             | People using clean code ideologies are being prematurely
             | pessimistic and assuming they know much more about a
             | problem than they actually do. "I don't know how many
             | shapes I'll be asked to do, so I'll assume the worst-case
             | scenario and make the code slower and harder than the
             | simple naive solution, which would be hard to read
             | (debatable) if we had one million shapes" is a terrible
             | argument, and it is why everything goes slow.
             | 
             | The correct way to deal with this is refactoring to more
             | maintainable code once you know the number of shapes
             | will change wildly; as soon as we get too many shapes,
             | the problem has changed. You can only pretend to know
             | the best architecture for a problem when you have dealt
             | with it several times.
             | 
             | Clean code apologists pretend their single stint dealing
             | with a website backend is proof enough that clean code
             | works, that it works for every problem, that it has to
             | be the default approach, and that it is the most
             | readable style for most problems. That is total insanity
             | for something that can't be measured with any tool.
             | 
             | Edit:
             | 
             | I fully understand that "premature optimization is
             | wrong", but using these "guidelines" is premature
             | optimization of scalability and maintainability.
             | Somehow, when the "premature optimization" is about
             | things you people want, that's okay? Pff.
             | 
             | Also, I don't find clean code readable; it looks like
             | complex, un-refactorable garbage to me 9 out of 10
             | times. No wonder people are so fucking scared of
             | rewriting a class and act as if it will take months to
             | do so. This ideology makes it impossible to actually
             | play around with your code; you can neither make it more
             | readable nor more performant, and you are locked in with
             | a sluggish collection of dozens of files even for the
             | simplest of problems.
        
               | sebstefan wrote:
               | It doesn't have to be slow. There's a reason "Clean
               | Code" is criticized every time it's mentioned. It
               | touts inheritance and polymorphism as a solution to
               | everything like it's 2002. There have been enough
               | "inheritance considered harmful" articles to toss
               | that aside.
               | 
               | The take on switch statements is covered in "The
               | Pragmatic Programmer" as well, which coincidentally is
               | much less criticized when it comes to books about clean
               | code.
               | 
               | The way to fix the switch is getting rid of the class
               | "shape" and making it an interface, then implementing the
               | interface in each shape, as a non-virtual method. And
               | then you don't let people inherit. They can compose
               | instead.
               | 
               | Performance is unaffected, you get rid of the switch, the
               | compiler catches your mistakes for you, and everyone's
               | happy
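               | 
               | One rough way to read that in C++20 terms (a sketch
               | under that assumption; none of this is from the
               | video): the "interface" becomes a concept, each
               | shape implements a plain non-virtual Area(), and a
               | type that forgets it is rejected at compile time.
               | 
               |     #include <concepts>
               | 
               |     // The "interface": any type with Area().
               |     template <typename T>
               |     concept Shape = requires(const T& s) {
               |         { s.Area() } -> std::convertible_to<float>;
               |     };
               | 
               |     struct Square {
               |         float Side;
               |         float Area() const { return Side * Side; }
               |     };
               |     struct Circle {
               |         float R;
               |         float Area() const
               |             { return 3.14159265f * R * R; }
               |     };
               | 
               |     // Static dispatch: no vtable, no switch, and a
               |     // type without Area() fails to compile.
               |     template <Shape S>
               |     float TotalArea(const S* shapes, int count) {
               |         float sum = 0.0f;
               |         for (int i = 0; i < count; ++i)
               |             sum += shapes[i].Area();
               |         return sum;
               |     }
               | 
               | The trade-off is that this only handles homogeneous
               | arrays; a mixed bag of shapes still needs something
               | like std::variant or a virtual base, which is the
               | open-versus-closed tension discussed upthread.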
        
               | mrkeen wrote:
               | > people using clean code ideologies are being
               | prematurely pessimistic and assuming they know much more
               | about a problem than they actually do when they use these
               | clean code techniques.
               | 
               | No. The techniques are there to let you change your
               | codebase as you learn more about the problem.
        
           | smcl wrote:
           | I think if you dig into why any of these tools perform
           | poorly, you probably won't find it's because they were
           | implemented using "Clean Code", but rather that it's down to
           | a combination of many different things. I can't say anything
           | concrete because I don't know which applications you are
           | referring to.
           | 
           | But IMO framing it as clean vs performant is a mistake
        
       | thom wrote:
       | There is no doubting Casey's chops when he talks about
       | performance, but as someone who has spent many hours watching
       | (and enjoying!) his videos, as he stares puzzled at compiler
       | errors, scrolls up and down endlessly at code he no longer
       | remembers writing, and then - when it finally does compile -
       | immediately has to dig into the debugger to work out something
       | else that's gone wrong, I suspect the real answer to programmer
       | happiness is somewhere in the middle.
        
         | rstefanic wrote:
         | When working with a larger code base, there will always be
         | parts that you don't remember writing and you'll inevitably
         | have to read the code to understand it. That's just part of the
         | job/task, regardless of the style it's written in.
        
           | jjeaff wrote:
           | Of course any code requires a refresher at times, but the
           | difficulty and time required to figure it out again is a
           | spectrum that goes all the way down to the seventh circle of
           | hell.
        
           | hinkley wrote:
           | In shared code particularly with a culture of refactoring,
           | there's no guarantee that the function call you see is doing
           | what you remember it doing a year ago.
           | 
           | When I was coming up I got gifted a bunch of modules at
           | several jobs because the original writer couldn't be arsed to
           | keep up with the many incremental changes I'd been making.
           | They had a mentality that code was meant to be memorized
           | instead of explored, and I was just beginning to understand
           | code exploration from a writer's perspective. So they were
           | both on the wrong side of history and the wrong side of me.
           | Fuck it, if you want it so much, kid, it's yours now. Good
           | luck.
        
             | randomdata wrote:
             | _> there's no guarantee that the function call you see is
             | doing what you remember it doing a year ago._
             | 
             | TDD provides those guarantees. If someone changes the
             | behaviour of the function you will soon know about it.
             | 
             | That's significant because Robert 'Clean' Martin sells
             | clean code as a solution to some of the problems that TDD
             | creates. If you reject TDD, clean code has no relevance to
             | your codebase. As Casey does not seem to practice TDD, it
             | is not clear why he thought clean code would apply to
             | his work.
        
               | hinkley wrote:
               | It doesn't. TDD is about writing new code. It doesn't say
               | anything about existing tests being sacrosanct, or
               | pinning tests sticking around forever. I can extract code
               | from a function and write tests for it. I probably know
               | that there's still code that checks for user names but I
               | can't guarantee that this code is being called from
               | function X anymore, or whether it's before or after
               | calling function Y. Those are the sorts of things people
               | try to memorize about code. "What are the knock-on
               | effects of this function" doesn't always survive
               | refactoring. Particularly when the refactoring is because
               | we have a new customer who doesn't want Y to happen at
               | all. So now X->Y is no longer an invariant of the system.
        
               | randomdata wrote:
               | _> TDD is about writing new code._
               | 
               | TDD is about documenting behaviour. Which is why it was
               | later given the name Behaviour Driven Development (BDD),
               | to dispel the myths that it is about testing. It is true
               | that you need to document behaviour before writing code,
               | else how would you know what to write? Even outside of
               | TDD you need to document the behaviour some way before
               | you can know what needs to be written.
               | 
               | A function's behaviour should have no reason to change
               | after its behaviour is documented. You can change the
               | implementation beneath to your heart's content, but the
               | behaviour should be static. If someone attempts to change
               | the behaviour, you are going to know about it. If you are
               | not alerted to this, your infrastructure needs
               | improvement.
        
             | nonethewiser wrote:
             | Why were you making incremental changes?
        
               | hinkley wrote:
               | "Make the change easy, then make the easy change" hadn't
               | even been coined as a phrase yet when I discovered the
               | utility of that behavior. When I read 'Refactoring
               | (Fowler)' it was more like psychotherapy than a roadmap
               | to better software. "So that's why I am like this."
               | 
               | When we get unstuck on a problem it's usually due to
               | finding a new perspective. Sometimes those come as
               | epiphanies, but while miracles do happen, planning on
               | them leads to disappointment. Sometimes you just have to
               | do the work. Finding new perspectives 'the hard way'
               | involves looking at the problem from different angles,
               | and if explaining it to someone else doesn't work, then
               | often enough just organizing a block of code will help
               | you stumble on that new perspective. And if that also
               | fails, at least the code is in better shape now.
               | 
               | Not long after I figured out how to articulate that, my
               | writer friend figured out the same thing about creative
               | writing, so I took it as a sign I was on the right track.
               | 
               | I do know that the first time I was doing that, it was
               | for performance reasons. I was on a project that was so
               | slow you could see the pixels painting. My first month on
               | that project I was doing optimizations by saying "1
               | Mississippi" out loud. The second month I used a timer
               | app. I was three months in before I even needed to
               | print(end - start).
        
         | krapp wrote:
         | If you're talking about Handmade Hero, the real answer to
         | programmer happiness is not using a language you despise
         | while refusing to leverage its features, not refusing to use
         | libraries or frameworks in that language, not re-implementing
         | everything from first principles, and actually having your
         | game designed first (not designing while you code).
        
           | andai wrote:
           | Doing things without libraries (except the stuff built into
           | Windows XP?) is the whole point of the project.
        
           | AnIdiotOnTheNet wrote:
           | Casey is a bad example of a game designer and he'll be the
           | first to admit it. However, it is worth noting that Jonathan
           | Blow very much does design while he codes and recommends the
           | practice. He also generally abstains from library
           | dependencies and implements a lot of things himself.
           | 
           | Of course, part of the point of Handmade Hero is to show that
           | you can totally reimplement everything from first principles.
           | Libraries are not magical black boxes, they're code written
           | by human beings like you or me, and you can understand what
           | they're doing.
           | 
           | For instance, he wrote his own PNG decoder[0] live on stream,
           | with hardly any prior knowledge of the spec, even though I'm
           | confident that under normal circumstances he'd just use
           | stb_image. I'm sure he did this just to show how you'd go
           | about doing that sort of thing.
           | 
           | [0] He only implemented the parts necessary to load a non-
           | progressive 24bit color image, but that still involved
           | writing his own DEFLATE implementation.
        
             | bbatha wrote:
             | Is Blow even a good example to look at? He's released 2
             | games in 18 years, which definitely had phenomenal
             | gameplay but are not technically complex even for the
             | time.
        
       | Arnavion wrote:
       | "Use subclasses over enums" must be some niche advice. I've never
       | heard it. The youtuber seems to be referring to some specific
       | example (he refers to specific advice from "them") so I guess
       | there's some context in the other videos of the series.
       | 
       | re: the speedup from moving from subclassing to enums - Compiler
       | isn't pulling its weight if it can't devirtualize in such a
       | simple program.
       | 
       | re: the speedup from replacing the enum switch with a lookup
       | table and common subexpression - Compiler isn't pulling its
       | weight if it can't notice common subexpressions.
       | 
       | So both the premise and the results seem unconvincing to me.
       | 
       | Of course, he is the one with numbers and I just have an untested
       | hypothesis, so don't believe me.
        
         | cma wrote:
         | For interactive applications debug builds also need to be fast.
        
         | 10000truths wrote:
         | A compiler can only devirtualize a virtual call if it knows the
         | underlying object's type at compile time, which it certainly
         | won't for a runtime populated array of base class pointers.
         | 
         | CSE doesn't work across function boundaries unless those
         | functions are inlined, which won't happen with virtual
         | functions due to the above.
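         | 
         | A sketch of that situation, with illustrative names along
         | the lines of the video's shape hierarchy:
         | 
         |     struct shape_base {
         |         virtual ~shape_base() = default;
         |         virtual float Area() const = 0;
         |     };
         | 
         |     // The static type is only shape_base*; the dynamic
         |     // types are chosen at runtime, so the call generally
         |     // stays virtual - no inlining, and no CSE across it -
         |     // unless whole-program analysis (e.g. LTO) can prove
         |     // the full set of possible types.
         |     float TotalArea(shape_base* const* shapes, int n) {
         |         float sum = 0.0f;
         |         for (int i = 0; i < n; ++i)
         |             sum += shapes[i]->Area();
         |         return sum;
         |     }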
        
           | Arnavion wrote:
           | See my reply to anonymoushn's comment.
        
         | pulp_user wrote:
         | Saying things akin to "your compiler is bad" because it doesn't
         | optimize stuff like this is a cop out.
         | 
         | For one: Most compilers for most languages are bad by that
         | metric, and interpreters don't even get to play. So this is not
         | helpful for the vast majority of people. Waiting around for
         | them becoming good is not a viable option.
         | 
         | Second, say the compiler would perform well in this
         | scenario. Cool, let's go up a notch, or two, and it would
         | start performing badly again, because there are limits to
         | what it can do in a reasonable amount of time.
         | 
         | And if that limit were big that maybe wouldn't matter, but the
         | limit is low, and so it does. Real programs are so much more
         | complex than this example, that even if the compiler got 10
         | times better, it would still fail to optimize large parts of
         | your real program.
        
         | anonymoushn wrote:
         | I think "them" is someone named Robert Cecil Martin, but I'm
         | not sure if this example from the video appears in his book.
         | 
         | What compiler are you using that devirtualizes every class
         | hierarchy? I suspect that Casey is using C++ so he may
         | (unfortunately) have multiple translation units in his program.
        
           | Arnavion wrote:
           | >I suspect that Casey is using C++ so he may (unfortunately)
           | have multiple translation units in his program.
           | 
           | Yes, the video uses C++.
           | 
           | Obviously if one compiles a library then the compiler has no
           | way of knowing that other subclasses of `shape_base` do not
           | exist. My point is that when compiling a binary as they are
           | doing for their video, the compiler knows that there are no
           | other subclasses that it needs to cater to.
           | 
           | It might require LTO explicitly, of course. At the very least
           | godbolt doesn't devirtualize without LTO [1], but godbolt
           | itself breaks if I enable LTO [2] and I CBA to test locally
           | right now.
           | 
           | [1]: https://gcc.godbolt.org/z/498nKEzhK
           | 
           | [2]: https://gcc.godbolt.org/z/WqhP3fzxK
        
             | muricula wrote:
             | To fix 2, add '-fno-rtti' to the args:
             | https://gcc.godbolt.org/z/zMdfx71qE Unfortunately, gcc does
             | not seem to devirtualize in your example
        
             | Tomis02 wrote:
             | Correct me if I'm wrong. Even if the compiler devirtualizes
             | the classes, you still have the memory cost of storing the
             | vtable pointer in each of the object instances (8 bytes for
             | each instance), which means you need to do more fetches
             | from memory. Does CPU prefetching negate the cost of these
             | additional memory lookups?
        
               | muricula wrote:
               | The enum to tell what shape an object is costs something,
               | probably 4 bytes. Storing the 8-byte vtable pointer
               | isn't so bad by comparison.
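               | 
               | Rough numbers on a typical 64-bit ABI, as a sketch:
               | 
               |     #include <cstdio>
               | 
               |     struct Poly {            // vptr + two floats
               |         virtual ~Poly() = default;
               |         float w, h;
               |     };
               | 
               |     enum Kind { Square, Rect, Circle };
               | 
               |     struct Tagged {          // tag + two floats
               |         Kind k;
               |         float w, h;
               |     };
               | 
               |     int main() {
               |         // Commonly prints 16 and 12: an 8-byte,
               |         // 8-aligned vptr versus a 4-byte tag.
               |         std::printf("%zu %zu\n",
               |                     sizeof(Poly), sizeof(Tagged));
               |     }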
        
           | signaru wrote:
           | Martin Fowler's "Refactoring" also has "Replace Conditional
           | with Polymorphism". I like the intent of the book but I don't
           | follow it 100%. Another part of that book that tripped me is
           | where he calls the same function with the same argument
           | multiple times instead of saving the result in a variable for
           | reuse.
        
             | jameshart wrote:
             | The existence of a refactoring in the _Fowler_ refactoring
             | catalog is not normative advice that it should always be
             | applied. Refactoring also has "Extract Function" and
             | "Inline Function" as refactorings. They can't both be
             | right...
             | 
             | Refactorings are moves you can make. Choosing when to make
             | them is up to you. In fact, _Fowler_ provides guidance
             | along with each refactoring suggesting when it might be
             | applicable (i.e., not always)
        
       | lumb63 wrote:
       | I don't understand why there is still the false dichotomy between
       | performance and speed of development/readability. Arguments on HN
       | and in other software circles suggest performant code cannot be
       | well organized, and that well organized code cannot be
       | performant. That's false.
       | 
       | In my experience, writing the code with readability, ease of
       | maintenance, and performance all in mind gets you 90% of each of
       | the benefits you'd have gotten focusing on only one of the above.
       | For instance, maybe instead of pretending that an O(n^2)
       | algorithm is any "cleaner" than an O(n log n) algorithm because
       | it was easier for you to write, just use the better algorithm.
       | Or, instead of pretending Python is more readable or
       | easier to develop in than Rust (assuming developers are skilled
       | in both), just write it in Rust. Or, instead of pretending that
       | you had to write raw assembly to eke out the last drop of
       | performance in your function, maybe target the giant mess
       | elsewhere in your application where 80% of the time is spent.
       | 
       | A lot of the "clean" vs "fast" argument is, as I've said above,
       | pretending. People on both sides pretend you cannot have both,
       | ever, when in actuality you can have almost all of what is
       | desired in 95% of cases.
        
       | winkelwagen wrote:
       | This is the first video I've seen by him. I'm by no means a fan
       | of clean code. But I think he's making a fool of himself here.
       | Picking out one code example from the book doesn't prove that
       | much on its own. This stuff is so language, OS, hardware and
       | compiler specific anyway.
       | 
       | The iPhone comparisons are extremely cringe. Real applications do
       | so much more than this contrived example. Something that feels
       | fast isn't the same thing as something that is fast.
       | 
       | Would I advise beginner programmers to read this book? Sure, let
       | them think about ways to structure code.
       | 
       | If he had just concluded that it is important to optimize for
       | the right thing, that would be fine. But he seems more
       | interested in picking a fight with clean code.
       | 
       | And yes, performance is a lost art in programming.
        
         | jackmott wrote:
         | [dead]
        
         | randomdata wrote:
         | _> But he seems more interested in picking a fight with clean
         | code._
         | 
         | Or, more likely, a straw man.
         | 
         | "Clean" exists to provide some solutions to certain problems in
         | TDD. Namely how to separate your logic so that units can be
         | reasonably put under test without an exploding test surface and
         | to address environments which are prohibitively expensive to
         | recreate. If you don't practice TDD, "clean" isn't terribly
         | relevant. As far as I am aware, it has always been understood
         | that hard-to-test code has some potential to be more efficient,
         | both computationally and with respect to sheer programmer
         | output, but with the tradeoff that it is much harder to test.
         | 
         | It is useful to challenge existing ideas, but he didn't even
         | try to broach the problem "clean" purports to solve. Quite
         | bizarre.
        
       | nimih wrote:
       | > Our job is to write programs that run well on the hardware that
       | we are given.
       | 
       | The author seems to be neglecting the fact that the whole point
       | of "clean code" is to improve the likelihood of achieving the
       | first goal (code that runs well, i.e. correctly) across
       | months/years of changing requirements and new maintainers. No one
       | (that I've ever spoken to or worked with, at least) denies that
       | you can almost always trade off maintainability for performance.
       | 
       | Admittedly, I think a lot of prescriptions that get made in
       | service of "clean code" are silly or counterproductive, and
       | people who obsess over it as an end unto itself can sometimes be
       | tedious and annoying, but this article is written in such
       | incredibly bad faith that it's impossible to take seriously.
        
       | vrnvu wrote:
       | The problem with the contemporary "clean code" concept is that
       | the narrative that performance and efficiency don't matter has
       | been pushed down the throat of all programmers.
       | 
       | Re-usability, OOP concepts or pure functional style, design
       | patterns, TDD or XP methodologies are the only things that
       | matter... And if you use them you will write "clean code". Even
       | worse, the more concepts and abstractions you apply to your code
       | the better programmer you are!
       | 
       | If you look at the history of programming and classic texts like
       | "the art of programming", "sicp", "the elements of
       | programming"... The concept of "beautiful code" appears a lot.
       | This is an idea that has always existed in our culture. The main
       | difference with the "clean code" cult is that "beautiful code"
       | also used to mean fast and efficient code, efficient algorithms,
       | low memory footprint... On top of the "clean code" concepts of
       | easy to test and re-usable code, modularity... etc
        
         | dvaletin wrote:
         | > Even worse, the more concepts and abstractions you apply to
         | your code the better programmer you are!
         | 
         | I had an Android programmer who was eager to write clean code
         | following GoF patterns, OOP and the rest of the fancy things
         | senior developers usually do. We ended up with an Android team
         | of 3 devs requiring 3x the time to develop the same feature
         | compared to a single iOS engineer.
        
           | KyeRussell wrote:
           | Well they weren't a good senior developer then. Part of the
           | art is knowing when to use the patterns. An incredibly
           | important differentiation, as it's so, so easy for someone a
           | bit green behind the ears to see this and think that any
           | sort of architectural thinking is useless.
        
             | dvaletin wrote:
             | > Well they weren't a good senior developer then.
             | 
             | He wasn't. He was a mid-grade dev, but that's not
             | important, because even senior devs can fall in love with
             | overcomplications.
        
               | heisenbit wrote:
               | Reminds me of Go. At the beginning you make only simple
               | moves. As one learns the game more complicated patterns
               | emerge. Watching master games the lines again are simple
               | and clear - in hindsight.
        
           | DaiPlusPlus wrote:
           | It's often said that Design Patterns are workarounds for
           | limitations in the expressiveness in a language: for example,
           | the Singleton pattern is only useful in languages where you
           | can't pass a reference to an interface implemented entirely
           | by static methods - or the Visitor Pattern is the workaround
           | for a language not supporting Double-Dispatch - method call
           | chaining is a workaround for not having a pipe operator, and
           | so on.
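           | 
           | For instance, the Visitor boilerplate you end up writing in
           | C++ (or Java) just to fake double dispatch - a minimal
           | sketch, names mine:
           | 
           |   struct Circle; struct Square;
           | 
           |   // One visit() overload per concrete shape: the part a
           |   // language with real multiple dispatch would do for you.
           |   struct ShapeVisitor {
           |       virtual void visit(const Circle &) = 0;
           |       virtual void visit(const Square &) = 0;
           |       virtual ~ShapeVisitor() = default;
           |   };
           | 
           |   struct Shape {
           |       // First dispatch: on the shape's dynamic type.
           |       virtual void accept(ShapeVisitor &v) const = 0;
           |       virtual ~Shape() = default;
           |   };
           | 
           |   struct Circle : Shape {
           |       // Second dispatch: on the visitor's overload set.
           |       void accept(ShapeVisitor &v) const override {
           |           v.visit(*this);
           |       }
           |   };
           |   struct Square : Shape {
           |       void accept(ShapeVisitor &v) const override {
           |           v.visit(*this);
           |       }
           |   };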
           | 
           | You said you're targeting Android, that implies you were
           | using Java, which has its reputation for both a rigidly
           | inflexible language-design team _and_ its ecosystem having
           | more design-patterns than a set of fabric swatches - that's
           | not a coincidence.
           | 
           | But for iOS, they'd be using Swift, right? Swift's designers
           | clearly decided they didn't want to be like Java: take the
           | best bits of C# and other well-designed languages and don't
           | be afraid to iterate on the design, even if it means
           | introducing breaking-changes - but the result is a highly-
           | expressive language that, as you've demonstrated, allows just
           | 1 Swift person to do the equivalent of 3 Java people. Swift
           | is an _actual_ pleasure to use, but using Java today makes me
           | weary.
           | 
           | (To be clear: Java was a fantastic language when it was
           | introduced, but it simply hasn't kept up with the times, to
           | its own detriment; it feels like it's falling behind more
           | and more as time goes on - but that's going to be the fate
           | of every programming language eventually, imo.)
        
         | pinto_graveyard wrote:
         | At least in the majority of places I worked at, people cared
         | about readability and maintainability rather than just some
         | abstract notion of "clean code".
         | 
         | And perhaps I was lucky, but typically readability and
         | maintainability are orthogonal to performance and efficiency.
         | Sometimes readable will have optimal performance, sometimes
         | not. Then it becomes a matter of tradeoffs.
        
           | Nocturium wrote:
           | Same here. We had coding _guidelines_, not _rules_.
           | 
           | The first entry even stated that they were guidelines and if
           | you have a valid reason to deviate: discuss it and you'll get
           | an exception.
           | 
           | The discussion thing was mainly for new coders. We had a
           | library portion of the code that was used by many programs.
           | Optimizing parts of it for one program's use case could make
           | it unusable for the others that used it.
           | 
           | Communication is key. If you discuss, before implementing it,
           | why you're making certain design decisions then everything
           | goes a lot smoother. If there are objections, keep in mind
           | that the worst case isn't throwing away your design and
           | starting over. The worst case is implementing it and screwing
           | over your fellow coders.
        
         | lwhi wrote:
         | If you have more than one person working on a codebase; clean
         | code matters a lot.
         | 
         | A code base that can't be understood and maintained by the
         | whole team, will degrade quickly.
        
           | Tomis02 wrote:
           | > If you have more than one person working on a codebase;
           | clean code matters a lot.
           | 
           | You can have fast and clean code, just not the Uncle Bob
           | style of "clean code". Uncle Bob hijacked the meaning of
           | cleanliness. It doesn't mean that code written like that is
           | actually clean, in fact it's usually the opposite: Uncle
           | Bob's clean code is NOT clean.
        
             | marginalia_nu wrote:
             | It's also worth noting that Bob's advice is for Java, which
             | has relatively unusual performance-idiosyncrasies around
             | polymorphism and function calls. Much of the advice becomes
             | very poor in languages that aren't JVM-based.
        
           | krona wrote:
           | Nobody is saying maintainable code isn't important but rather
           | that clean vs fast is a false dichotomy.
        
             | lwhi wrote:
             | Okay, so we should aim for clean AND fast code?
        
               | gjadi wrote:
               | One can argue that simple code is both fast and clean.
               | However, you can only measure how fast (or slow) your
               | code is, so make it simple, aim for fast and hope it's
               | clean (:
        
               | Ygg2 wrote:
               | One can argue that extremes of fast and clean code will
               | both result in something horrific.
               | 
               | What Casey is doing is showing how bad a hammer is at
               | removing screws. I mean duh, you're removing screws with
               | a hammer.
        
               | gjadi wrote:
               | I don't understand what you are saying.
               | 
               | My comment was a jest (as suggested by the smiley).
               | 
               | What I take from Casey's post is that a simple non-
               | pessimistic representation allows for efficient code.
               | That is, using a table instead of a class hierarchy gives
               | a massive performance boost. By comparison, "clever" loop
               | unrolling doesn't give that much of a boost.
               | 
               | So we need simpler representations. IMHO the table
               | implementation is neither less readable nor less flexible
               | than the class hierarchy. But it is less common in the
               | code I am used to; in other words, it's not a widely used
               | pattern.
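               | 
               | Roughly what the table version looks like (a sketch from
               | memory, not the article's exact listing):
               | 
               |   #include <cstdint>
               | 
               |   enum shape_type : uint32_t {
               |       Square, Rectangle, Triangle, Circle, TypeCount
               |   };
               |   struct shape_union { shape_type Type; float Width, Height; };
               | 
               |   // One coefficient per shape; Area = CTable[Type] * W * H.
               |   // (For circles, Width == Height == the radius.)
               |   static const float CTable[TypeCount] = {
               |       1.0f, 1.0f, 0.5f, 3.14159265f
               |   };
               | 
               |   float GetAreaUnion(const shape_union &Shape) {
               |       return CTable[Shape.Type] * Shape.Width * Shape.Height;
               |   }
               | 
               |   float TotalAreaUnion(uint32_t Count,
               |                        const shape_union *Shapes) {
               |       float Accum = 0.0f;
               |       for (uint32_t i = 0; i < Count; ++i)
               |           Accum += GetAreaUnion(Shapes[i]);
               |       return Accum;
               |   }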
        
               | TeMPOraL wrote:
               | > _a simple non-pessimistic representation_
               | 
               | That's the insight, I think. "Clean Code" tells you to
               | use maximally pessimistic representation for everything,
               | because everything could be extended in some way in every
               | direction. Meanwhile, in the real world, you likely have
               | a good idea what directions of evolution are possible,
               | and which of them are even useful.
               | 
               | Casey's example shows you that, if you design your code
               | to make use of those assumptions, you'll get absurd
               | performance benefits for little to no loss in
               | readability (and perhaps even a gain!).
               | 
               | Some may ask, "what if you're wrong with your
               | assumptions?". Well, you pay a price then. Worst case,
               | you may need to rip out a module, rethink the theory
               | behind it, and rewrite it from scratch - likely forgoing
               | some of the performance benefits, too. Usually, the price
               | will be much smaller. Either way, it's still better than
               | being maximally pessimistic from the start, and writing
               | software that never had a chance of ever becoming good or
               | fast.
        
             | Ygg2 wrote:
             | They are competing, in my opinion. Many techniques to make
             | code fast will make it less readable.
             | 
             | Techniques like manual code unrolling/inlining, writing
             | branchless code, compressing data to fit into a pointer,
             | etc.
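             | 
             | E.g. the classic branchless abs() - the readable version
             | versus the bit-trick version (a sketch):
             | 
             |   #include <cstdint>
             | 
             |   // Readable version.
             |   int32_t AbsBranchy(int32_t x) { return x < 0 ? -x : x; }
             | 
             |   // Branchless version: sign-mask trick. Same caveat about
             |   // INT32_MIN as the readable one; relies on arithmetic
             |   // right shift of negative values (universal in practice,
             |   // only guaranteed by the standard since C++20).
             |   int32_t AbsBranchless(int32_t x) {
             |       int32_t mask = x >> 31;  // 0 if non-negative, -1 if negative
             |       return (x + mask) ^ mask;
             |   }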
        
               | lwhi wrote:
               | I agree .. fast doesn't take account of how verbose or
               | understandable code will be.
               | 
               | As an extreme example, people choose not to write low
               | level code for a good reason, even if it might be fast.
               | 
               | High level / interpreted code will be more
               | understandable, but will automatically have an overhead
               | in most cases.
        
         | frankreyes wrote:
         | > the narrative that performance and efficiency don't matter
         | 
         | That was the case during the 90s and the first decade of the
         | 2000s. Just wait for Moore's Law to kick in and in 18 months
         | your code will get faster by an order of magnitude for free.
        
           | DaiPlusPlus wrote:
           | Moore's Law concerns IC transistor counts, not actual overall
           | performance, and especially not single-threaded performance:
           | a 40-core CPU isn't going to make Windows twice as fast as a
           | 20-core CPU.
           | 
           | Single-threaded performance has long-since effectively
           | plateaued: it's 2023 now and a desktop computer built 10
           | years ago (2013) can run Windows 11 just fine (ignoring the
           | TPM thing) - but compare that to using a computer from 2003
           | in 2013 (where it'd run, but poorly), or a computer from 1993
           | in 2003 (which simply wouldn't work at all).
           | 
           | This is not to say that there won't be any significant
           | performance gains to come, such as with rethinks in hardware
           | (e.g. adding actual RAM into a desktop CPU package, non-
           | volatile memory, etc) but I struggle to see how typical x86
           | MOV,CMP,JMP instructions could be executed sequentially any
           | faster than they are right now.
        
         | mpweiher wrote:
         | > TDD or XP methodologies
         | 
         | > the more concepts and abstractions you apply to your code the
         | better programmer you are!
         | 
         | These contradict each other. XP very explicitly opposes
         | introducing (unnecessary) abstractions: YAGNI, DTSTTCPW, etc.
         | And TDD is a good tool for enforcing that, as you only get to
         | write code that you have a failing test case for.
        
           | vrnvu wrote:
           | > TDD is a good tool for enforcing that, as you only get to
           | write code that you have a failing test case for.
           | 
           | TDD encourages the use of mocks and unit testing to increase
           | code coverage. And unit testing is especially dangerous. You
           | write a test, then program, so the test is helping you (the
           | programmer). Selling the idea that the higher the test code
           | coverage is the better and safer your code is. Not true at
           | all. If your code doesn't have integration tests for example,
           | you will never know how it actually runs. If you mock
           | everything, you are not really "testing" anything but your
           | internal logic. Unit testing and code coverage just checks
           | that a code path has been run. But there are other tools like
           | fuzz testing or mutation testing... Do you randomize the
           | memory at every test run? Do you make sure that the CPU cache
           | is cold or hot depending on the test? Good testing is hard.
           | 
           | Most unit tests are written to ease the development. After
           | finishing the development, they are safe to delete. Because
           | they don't add any real value as I understand it. I
           | understand that a test is a business contract of something
           | that MUST work in a certain way. Unless the contract changes,
           | the test must never be removed or changed. TDD and exhaustive
           | unit testing make the maintenance process harder because you
           | don't know if a test is useful or not.
           | 
           | If you follow TDD, most unit tests are re-written all the
           | time. Because they were not written to test a business or
           | critical contract, they were originally written to help some
           | programmer write some internal logic.
        
             | randomdata wrote:
             | _> TDD and exhaustive unit testing make the maintenance
             | process harder because you don't know if a test is useful
             | or not._
             | 
             | TDD was later given the name BDD (Behaviour Driven
             | Development) to emphasize that you are not testing, but
             | actually _documenting behaviour_. What kind of behaviour
             | are you documenting that isn't useful and why did you find
             | it necessary to document in the first place?
             | 
             |  _> If you follow TDD, most unit tests are re-written all
             | the time._
             | 
             | What for? If changing requirements mean that your behaviour
             | has changed to the extent that your unit does something
             | completely different, it's something brand new
             | and should be treated as such. Barring exceptional
             | circumstances, public interfaces should be considered
             | stable for their entire lifetime and, at most, deprecated
             | if they no longer serve a purpose.
             | 
             | The implementation beneath the interface may change over
             | time, but TDD is explicit that you should not test
             | implementation - it is not about testing - only that you
             | should document the expected behaviour of any
             | implementation that may carry out your desired behaviour.
        
             | P_I_Staker wrote:
             | > Selling the idea that the higher the test code coverage
             | is the better and safer your code is.
             | 
             | I don't think someone that believes this has a good
             | understanding of unit testing. You can easily get 100%
             | coverage without testing anything at all!
             | 
             | Coverage is a great metric if it's predicated on high
             | quality tests. Even then 100% coverage doesn't equal
             | "safe". It means that a lot of effort has been put into
             | understanding and testing internal behavior.
             | 
             | You still need higher order tests, arguably even more.
        
             | pinto_graveyard wrote:
             | > If you mock everything, you are not really "testing"
             | anything but your internal logic.
             | 
             | That's the purpose of unit tests. They do not exclude the
             | need to perform other kinds of test. Integration tests,
             | contract tests, stress tests - all those will focus on
             | different facets of a system.
             | 
             | > Most unit tests are written to ease the development.
             | After finishing the development, they are safe to delete.
             | 
             | This is especially bad advice, unless no one will ever
             | touch that codebase again.
             | 
             | Many times over the years, I have seen old unit tests
             | highlight bugs that would have been introduced by new code.
             | 
             | > TDD and exhaustive unit testing make the maintenance
             | process harder because you don't know if a test is useful
             | or not.
             | 
             | Then, as a developer, remove unit tests that became
             | useless.
             | 
             | Code coverage is a measurement. If you turn it into a goal,
             | it will become useless. If you have "useless" unit tests,
             | it tells me that some unit tests were written as padding to
             | move code coverage up.
        
             | mpweiher wrote:
             | > TDD encourages the use of mocks and unit testing to
             | increase code coverage.
             | 
             | No, it encourages reasonable decoupling, i.e. good design.
             | 
             | If you see yourself introducing mocks (I think you mean
             | stubs, mocks are something more specific) everywhere, you
             | are feeling the pressure, but avoiding the good design.
             | 
             | https://blog.metaobject.com/2014/05/why-i-don-mock.html
             | 
             | > Most unit tests are written to ease the development.
             | 
             | Yes, unit tests help significantly in development.
             | 
             | > After finishing the development, they are safe to delete.
             | 
             | Noooooooooooooooooooooooooooooooooooooooooooooooo!!!!!!!!!!
             | !!!!!!!!!!!!!!!!!!!!!!
             | 
             | They are your guardrails against regressions.
             | 
             | > If you follow TDD, most unit tests are re-written all the
             | time.
             | 
             | Nope.
        
             | rhdunn wrote:
             | I generally don't mock in my TDD-style approach to writing
             | tests. If you have a dependent class, e.g. writing nested
             | serialization logic for JSON objects, you shouldn't mock
             | the inner classes or the JSON serialization classes.
             | 
             | Well written unit tests serve to provide regression testing
             | to prevent bugs reoccurring and to keep existing
             | functionality (e.g. support for reading existing data)
             | working.
             | 
             | TDD is used as a way to help write tests for the API
             | surface and usage of your classes, functions, etc.. You
             | should generally avoid testing internal state as that can
             | change.
             | 
             | For example, if you are writing a set class, the logical
             | place to start is with an empty set -- that's because it is
             | easy to define the empty logic, defining accessor
             | functions/properties like isEmpty, size, and contains. The
             | next logical step is adding elements (two tests: add a
             | single element, add multiple elements). Etc.
             | 
             | Later on, you can change the internal logic of the set from
             | e.g. an array to a hash map. You will keep your existing
             | tests as they document and test your API contract and
             | external semantics. Likewise, if your hash set uses another
             | class like an array, or a custom structure like a red-black
             | tree, you shouldn't mock that class.
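             | 
             | A rough sketch of what those first tests might look like
             | (plain asserts rather than a test framework; IntSet here is
             | a hypothetical class under test):
             | 
             |   #include <algorithm>
             |   #include <cassert>
             |   #include <cstddef>
             |   #include <vector>
             | 
             |   // The tests only touch the public API (isEmpty, size,
             |   // contains, add) - never the internal representation,
             |   // which could later become a hash table or a tree.
             |   class IntSet {
             |       std::vector<int> items_;
             |   public:
             |       bool isEmpty() const { return items_.empty(); }
             |       std::size_t size() const { return items_.size(); }
             |       bool contains(int v) const {
             |           return std::find(items_.begin(), items_.end(), v)
             |               != items_.end();
             |       }
             |       void add(int v) {
             |           if (!contains(v)) items_.push_back(v);
             |       }
             |   };
             | 
             |   int main() {
             |       IntSet s;
             |       assert(s.isEmpty() && s.size() == 0 && !s.contains(1));
             | 
             |       s.add(1);            // single element
             |       assert(!s.isEmpty() && s.size() == 1 && s.contains(1));
             | 
             |       s.add(2); s.add(1);  // multiple adds, duplicate ignored
             |       assert(s.size() == 2 && s.contains(2));
             |   }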
        
         | tippytippytango wrote:
         | Sounds like clean code used to mean things that were
         | measurable.
        
         | foul wrote:
         | Strangely enough, even in this video, everyone notices how
         | you're forced to make tradeoffs between coherence and
         | cleanliness vs performance-oriented code in langs like C++,
         | while the Clean Code book gave its examples in Java (so most
         | of the performance work is shoved into the JVM) and SICP was
         | in Scheme (which is interpreted, or transpiled to C where you
         | can optimize, or has a VM/JIT underneath).
         | 
         | I vaguely suspect the language of choice has something to do
         | with that, and that C++ would benefit more from data-oriented
         | design than from literate OOP or functional programming
         | patterns taken from very, very different programming ethoses
         | (the programming language is an interface to something else).
        
         | caeril wrote:
         | One of the most insane things I hear repeated constantly is
         | that "servers are cheaper than programmers", implying that
         | runtime efficiency doesn't matter, only developer efficiency.
         | 
         | Which is all well and good, until you need to hire all the
         | network engineers, systems administrators, devops people,
         | security staff, datacenter operations managers, database
         | sharding engineers, etc to manage the 10x more hardware and
         | network surface area you have to throw at your slow codebase.
        
           | vntok wrote:
           | Go full Cloud and you won't need to hire most of these roles
           | ever, except in niche ultra-high performance cases. These
           | roles will be externalized at AWS or Azure, super-paid to
           | work for you 24/7.
        
       | jupp0r wrote:
       | There's a tradeoff. Engineering time is expensive. Machine time
       | can be expensive too. We need to optimize these costs by making
       | most code that's not performance relevant easy to read and then
       | optimize performance critical code paths while hiding
       | optimization complexity behind abstractions. Either extreme is
       | not helpful as a blanket method.
        
         | scotty79 wrote:
         | User time is expensive too.
        
       | glintik wrote:
       | The most horrible mistake is not on the list: immutability.
        
       | xupybd wrote:
       | >It simply cannot be the case that we're willing to give up a
       | decade or more of hardware performance just to make programmers'
       | lives a little bit easier. Our job is to write programs that run
       | well on the hardware that we are given. If this is how bad these
       | rules cause software to perform, they simply aren't acceptable.
       | 
       | That is not our job! Our job is to solve business problems within
       | the constraints we are given. No one cares how well it runs on
       | the hardware we're given. They care if it solves the business
       | problem. Look at Bitcoin, it burns hardware time as a proof of
       | work. That solves a business problem.
       | 
       | Some programmers work in industries where performance is key but
       | I'd bet not most.
       | 
       | CPU cycles are much cheaper than developer wages.
        
         | dgb23 wrote:
         | Well yes it kind of is? Everyone solves problems. We solve
         | problems with computers.
         | 
         | And we use them because they're autonomous, remember exact
         | details and are very fast and reliable.
         | 
         | There's of course some level of good enough. We don't write ad-
         | hoc scripts in assembly.
         | 
         | But to say dev time is more expensive than computer time only
         | makes sense if programs are actually fast. Fast, reliable
         | feedback loops matter. Consistency matters. And simplicity
         | matters in many dimensions.
         | 
         | Web application servers that are orders of magnitude (N times)
         | slower than they should be (not even _could_ be) cost us N
         | times more hardware resources, N times more architectural
         | complexity that requires specialized workers and tools and so
         | on.
         | 
         | Speed and throughput matter for productivity. Not just ours but
         | our users' as well. Good performance is important for good UX.
         | Wasting fewer cycles opens up opportunities to do meaningful
         | things.
        
           | danuker wrote:
           | > and are very fast and reliable.
           | 
           | Among the most popular languages is Python. It is popular in
           | spite of its bad performance, high memory use, and lack of
           | CPU multithreading.
           | 
           | And it is heavily run on servers.
           | 
           | Why? Because running Python apps is still much cheaper than
           | hiring humans to wait for calls or manage e-mails.
           | 
           | Humans are valuable. They should not be working on easily
           | automatable problems.
           | 
           | The bottleneck is automating AT ALL, rather than automating
           | with a low machine cost. Only at huge scale (i.e. Big Tech
           | with billions of daily events) does it become worthwhile to
           | optimize the code.
           | 
           | Of course, assuming you have a sane computational complexity.
           | If you don't, it doesn't matter which paradigm you use.
        
             | dgb23 wrote:
             | I think you're right in what you said.
             | 
             | But the gist of the video doesn't disagree at all.
             | 
             | In fact the resulting code was very clear, easy to write
             | and understand. He got a 15x improvement by removing
             | indirection and OO cruft. I don't think he's saying "don't
             | use language X" here, but rather "don't make it harder for
             | yourself and the computer".
        
         | jiggawatts wrote:
         | > CPU cycles are much cheaper than developer wages.
         | 
         | Please just stop with this. It's plainly false.
         | 
         | At $dayjob I recommended some simple database query tuning that
         | 1 developer applied in their spare time. This improved
         | performance from 9 seconds per page to 500 milliseconds per
         | page.
         | 
         | That customer wanted to use auto-scale to expand capacity
         | (nearly 20-fold!) to meet the original requirements, which
         | would have cost about $250K annually.
         | 
         | The dev fixed the issue in like... a week.
         | 
         | What developer costs $250K per week!? None. None do. Not even
         | the top tier at a FAANG.
         | 
         | Not to mention the time saved for the _thousands_ of users that
         | use this web application. Their wasted time costs money too.
        
         | itake wrote:
         | I think there are some ethical aspects of this that are
         | missing.
         | 
         | Writing inefficient code has an environmental impact and a
         | human impact.
         | 
         | I still lean heavily towards clean code, because clean is
         | easier to grep and bugs have a much higher impact on my career,
         | but efficiency imho should be addressed in the language or
         | compiler, not in the code.
        
       | tippytippytango wrote:
       | I wish software engineering cared a lot more that we have no way
       | of measuring how clean code is, much less any study that measures
       | the tradeoffs between clean code and other concerns, the way a
       | real engineering discipline would.
        
         | robalni wrote:
         | The funny thing is that the things that are not possible to
         | measure will be undone all the time because people can't agree
         | on how it should be. This means that there will be wasted time.
         | 
         | First we have to write the code this way. Next year we have to
         | write it in the other way. Then it has to be done in the first
         | way again.
         | 
         | It's so hard to prioritize things when you ask someone why
         | something has to be done the way they say and they are not able
         | to give a real answer. I can do my job when option A means
         | faster program and option B means more memory usage but I can't
         | do my job when option A means faster program and option B is
         | just the way it "should" be done.
        
           | tippytippytango wrote:
           | This happens because software engineering doesn't yet know
           | the difference between principles and morals.
        
       | pshirshov wrote:
       | In 98% of the cases there is no difference between O(N) and
       | O(N/4)
        
       | 0xbadcafebee wrote:
       | "Clean" is a poor descriptor for source code.
       | 
       | The word "clean" in reference to software is simply an indicator
       | of "goodness" or "a pleasant aesthetic", as opposed to the
       | opposite word "dirty", which we associate with undesirable
       | features or poor health (that being another misnomer, that dirty
       | things are unhealthy, or that clean things are healthy; neither
       | are strictly true). "Clean" is not being used to describe a
       | specific quality; instead it's merely "a feeling".
       | 
       | Rather than call code "clean" or "dirty", we should use a more
       | specific and measurable descriptor that can actually be met, like
       | "quality", "best practice", "code as documentation", "high
       | abstraction", "low complexity", etc. You can tell when something
       | meets that criteria. But what counts as "clean" to one person may
       | not to another, and it doesn't actually mean the end result will
       | be better.
       | 
       | "Clean" has already been abandoned when talking about other
       | things, like STDs. "Clean" vs "Dirty" in that context implies a
       | moral judgement on people who have STDs or don't, when in fact
       | having an STD is often not a choice at all. By using more
       | specific terms like "positive", or simply describing what
       | specific STDs one has, the abstract moral judgement and unhelpful
       | "feeling" is removed, and replaced with objective facts.
        
       | softfalcon wrote:
       | In my personal opinion, this is less of an argument of "clean
       | code" vs "performant code" and it seems to be more of traditional
       | "object oriented programming" vs "data driven design".
       | 
       | Ultimately though, data driven design can fit under OOP (object
       | oriented programming) as well, since it's pretty much lightweight,
       | memory conforming structs being consumed by service classes
       | instead of polymorphing everything into a massive cascade of
       | inherited classes.
       | 
       | The article makes a good argument against traditional 1980-90's
       | era object oriented programming concepts where everything is a
       | bloated, monolith class with endless inheritances, but that
       | pattern isn't extremely common in most systems I've used
       | recently. Which, to me, makes this feel a lot like a straw man
       | argument, you're arguing against an incredibly out-dated "clean
       | code" paradigm that isn't popular or common with experienced OOP
       | developers.
       | 
       | One only really has to look at Unity's data driven pipelines,
       | Unreal's rendering services, and various other game engine
       | examples that show clean code OOP can and does live alongside
       | performant data-driven services in not only C++ but also C#.
       | 
       | Hell, I'm even doing it in Typescript using consolidated,
       | optimized services to cache expensive web requests across huge
       | relational data. The only classes that exist are for data models
       | and request contexts, the rest is services processing streams of
       | data in and out of db/caches.
       | 
       | If there is one take-away that this article validated for me
       | though, it's that data-driven design trumps most other patterns
       | when performance is key.
        
       | captainmuon wrote:
       | Already in his first example, where he says he doesn't use range-
       | based for in order to help the compiler and get a charitable
       | result, he doesn't get the point, I think. You write code in a
       | certain way _in order_ to be able to use abstractions like range-
       | based for, or functional style. If you are hand-unrolling the
       | loop, or using a switch statement instead of polymorphism, you
       | lose the ability to use that abstraction.
       | 
       | Essentially the whole point of object orientation is to enable
       | polymorphism without having big switch statements at each call
       | site. (That, and encapsulation, and nice method call syntax.)
       | When people dislike object orientation, it's often because they
       | don't get or at least don't like polymorphism.
       | 
       | Most people, most of the time, don't have to think about stuff
       | like cache coherency. It is way more important to think about
       | algorithmic complexity, and correctness. And then, if you find
       | your code is too slow, and after profiling, you can think about
       | inlining stuff or using structs-of-arrays instead of arrays-of-
       | structs and so on.
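       | 
       | For the record, the structs-of-arrays change looks roughly like
       | this (a sketch with made-up particle fields):
       | 
       |   #include <cstddef>
       |   #include <vector>
       | 
       |   // Array-of-structs: the natural way to write it; each
       |   // particle's fields sit next to each other in memory.
       |   struct ParticleAoS { float x, y, z, mass; };
       | 
       |   float TotalMassAoS(const std::vector<ParticleAoS> &ps) {
       |       float m = 0.0f;
       |       for (const auto &p : ps) m += p.mass;  // strides over x, y, z too
       |       return m;
       |   }
       | 
       |   // Struct-of-arrays: each field is contiguous - what you reach
       |   // for once profiling shows the loop is memory-bound or a good
       |   // SIMD candidate.
       |   struct ParticlesSoA { std::vector<float> x, y, z, mass; };
       | 
       |   float TotalMassSoA(const ParticlesSoA &ps) {
       |       float m = 0.0f;
       |       for (std::size_t i = 0; i < ps.mass.size(); ++i)
       |           m += ps.mass[i];  // dense, stride-1 reads
       |       return m;
       |   }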
        
         | Jach wrote:
         | Perhaps some biases can be excused by committing to C++. Using
         | dynamic dispatch in that language seems to be slow when it's
         | pretty much always some vtable lookups under the hood, but it
         | doesn't have to be that way. Implementations of other languages
         | like Smalltalk or Common Lisp automatically apply (or have as
         | options to specify) various strategies to make convenient
         | abstractions a lot more performant. In Java Land the JVM can do
         | many impressive things -- and interestingly, having more,
         | smaller methods helps, as it tends to "give up" if a
         | function is a giant sprawling mess.
         | 
         | A fun story from https://snakeisland.com/aplhiperf.pdf on the
         | utility of people using standard inner product / matrix
         | multiplication operators, instead of hard-coding their own
         | loops or whatever:
         | 
         | > In the late 1970's, I was manager of the APL development
         | department at I.P. Sharp Associates Limited. A number of users
         | of our system were concerned about the performance of the
         | ∨.∧ inner product on large Boolean arrays in graph
         | computations. I realized that a permuted loop order would
         | permit vectorization of the Boolean calculations, even on a
         | non-vector machine. David Allen implemented the algorithm and
         | obtained a thousand-fold speedup factor on the problem. This
         | made all Boolean matrix products immediately practical in APL,
         | and our user (and many others) went away very happy.
         | 
         | > What made things even better was that the work had benefit
         | for all inner products, not just the Boolean ones. The standard
         | +.× now ran 2.5-3 times faster than Fortran. The cost of inner
         | products which required type conversion of the left argument
         | ran considerably faster, because those elements were only
         | fetched once, rather than N times. All array accesses were now
         | stride one, which improved cache hit ratios, and so on. So,
         | rather than merely speeding up one library subroutine, we sped
         | up a whole family of hundreds of such routines (even those that
         | had never been used yet!), with no more effort than would have
         | been required for one.
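         | 
         | The same loop-order trick in C++ terms (a rough sketch; the
         | APL work was of course done inside the inner-product primitive
         | itself):
         | 
         |   #include <cstddef>
         | 
         |   // Naive i-j-k order: the innermost loop walks B down a
         |   // column (stride N), which is cache-hostile for row-major
         |   // storage.
         |   void MatMulIJK(const float *A, const float *B, float *C,
         |                  std::size_t N) {
         |       for (std::size_t i = 0; i < N; ++i)
         |           for (std::size_t j = 0; j < N; ++j) {
         |               float sum = 0.0f;
         |               for (std::size_t k = 0; k < N; ++k)
         |                   sum += A[i*N + k] * B[k*N + j];
         |               C[i*N + j] = sum;
         |           }
         |   }
         | 
         |   // Permuted i-k-j order: every access in the inner loop is
         |   // stride-1 and the loop vectorizes. Same arithmetic, very
         |   // different memory behaviour.
         |   void MatMulIKJ(const float *A, const float *B, float *C,
         |                  std::size_t N) {
         |       for (std::size_t i = 0; i < N; ++i) {
         |           for (std::size_t j = 0; j < N; ++j)
         |               C[i*N + j] = 0.0f;
         |           for (std::size_t k = 0; k < N; ++k) {
         |               const float a = A[i*N + k];
         |               for (std::size_t j = 0; j < N; ++j)
         |                   C[i*N + j] += a * B[k*N + j];
         |           }
         |       }
         |   }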
         | 
         | Or, just look at SQL. I sure appreciate not having to write
         | explicit loops querying the correct indexes every time I want
         | to access some data.
        
       | formvoltron wrote:
       | https://www.youtube.com/watch?v=tD5NrevFtbU
        
       | suyjuris wrote:
       | It is also important to consider that better performance also
       | increases your productivity as a developer. For example, you can
       | use simpler algorithms, skip caching, and have faster iteration
       | times. (If your code takes 1min to hit a bug, there are many
       | debugging strategies you cannot use, compared to when it takes
       | 1s. The same is true when you compare 1s and 10ms.)
       | 
       | In the end, it is all tradeoffs. If you have a rough mental model
       | of how code is going to perform, you can make better decisions.
       | Of course, part of this is determining whether it matters for the
       | specific piece of code under consideration. Often it does not.
        
         | Quarr wrote:
         | This is definitely not something that should be overlooked.
         | Choosing a more mathematically optimal algorithm might be 2-3x
         | faster in theory (at the cost of more complexity). If you're
         | executing that algorithm a lot, to the point where a 3x speedup
         | is significant, well -- if you can restructure the code in a
         | manner similar to that demonstrated in the article (avoiding
         | costly vtable dispatches, indirection, etc) and achieve a 25x
         | with the _original_ simpler algorithm, then that's something
         | worth taking into consideration. A 3x algorithmic improvement
         | is only impressive if there isn't 25x potential speedup low-
         | hanging fruit (from simply not writing your code in a moronic
         | way in the first place).
        
         | robalni wrote:
         | Yes. When I worked as a game engine programmer, two of the
         | first things I did were to improve the speed of compiling and
         | starting the games. When those two things are faster, all
         | development will be faster and more fun.
        
       | eithed wrote:
       | Given that the code in the editor is for human consumption, I
       | wonder why it can't be restructured for the compiler to make it
       | fast (or why the compiler can't make it fast). After all, you
       | could leave annotations - for example on structs, so the
       | compiler knows what their size will be and can optimize for it.
        
       | ysavir wrote:
       | As a general rule, _optimize for your bottlenecks_.
       | 
       | If you have a large amount of I/O and can see the latency tracked
       | and which parts of the code are problematic, optimize those parts
       | for execution speed.
       | 
       | If you have frequent code changes with an evolving product, and
       | I/O that doesn't raise concerns, then optimize for code
       | cleanliness.
       | 
       | Never reach for a solution before you understand the problem.
       | Once you understand the problem, you won't have to search for a
       | solution; the solution will be right in front of you.
       | 
       | Don't put too much stock in articles or arguments that stress
       | solutions to imaginary problems. They aren't meant to help you.
       | Appreciate any decent take-aways you can, make the most of them,
       | but when it comes to your own implementations, start by
       | understanding your own problems, and not any rules, blog titles,
       | or dogmas you've previously come across.
        
         | rolldat777 wrote:
         | I think this is the point. In the example given, if you
         | introduce a sqrt, then the argument becomes much weaker
         | already. The dispatch is comparable to the computation time.
         | I'm reminded of "Latency Numbers Every Programmer Should Know".
         | 
         | I actually just had this debate with myself, specifically about
         | shape classes including circles and bezier curves. However, the
         | operation was instead intersections. There was zero performance
         | difference in that case after profiling so I kept the OOP so
         | that the code wasn't full of case statements.
        
       | hennell wrote:
       | I think he's really underplaying the main selling point of clean
        | code - the objective of writing clear, maintainable, extendable
        | code. His code was faster, but sometimes how the code fares when
        | people new to the code base add features or fix bugs is what you
        | want to optimize for.
       | 
       | Should performance be talked about more? Yes. Does this show
        | valuable performance benefits? Also yes. Is performance where you
       | want to start your focus? In my experience, often no.
       | 
       | I've made things faster by simplifying them down once I've found
       | a solution. I've also made things slower in order to make them
       | more extendable. If you treat clean code like a bible of
       | unbreakable laws you're causing problems, if you treat
       | performance as the be-all-end-all you're also causing problems,
       | just in a different way.
       | 
        | It's given me something to think about, but I wish it was a more
        | even-handed comparison showing the trade-offs of each approach.
        
         | Johanx64 wrote:
         | > Main selling point of clean code - the objective of writing
         | clear, maintainable, extendable code
         | 
         | That's the alleged selling point; whether even that's true is
         | arguable.
         | 
         | While performance can be clearly measured, whether the given
         | "clean code" principles do in fact help with writing
         | "maintainable and extendable code" is something that can be
         | argued back and forth for a long time.
        
         | gjadi wrote:
         | I didn't watch the video but I read the article and in the
         | later part he shows how to extend every version (polymorphic,
         | enum and table-based) to "computing the sum of the corner-
         | weighted areas".
         | 
         | This extension is easy enough in the non-pessimistic case
         | (table-based).
        
         | anonymoushn wrote:
         | If someone is new to the codebase, would you rather they need
         | to open a dozen files to see all of the different virtual
         | functions that could occur at one call site, or open one file?
        
           | KyeRussell wrote:
           | Honestly this itself is a massive oversimplification, and a
           | bit of a strawman, and kinda indicates that you still aren't
           | seeing the nuance.
           | 
           | Yes, if something is understandable in a single file, that's
           | fine. But also appreciate that too many pieces in the same
           | file can also be confusing and disorienting for people. You
           | also have to consider all the cases in which someone would be
           | opening that file.
           | 
           | You're basically posing a situation which by its very
           | description is an exception. If anyone is slavishly following
           | these rules, they're undoubtedly doing it wrong.
           | 
           | Knowing when to apply the patterns is the hard bit. And, in
           | my experience working on codebases worthy of considering
           | these things in the first place, there's much more to
           | consider than what you're posing.
        
           | hennell wrote:
           | As tradeoffs go, I don't see opening a dozen files as a big
           | hurdle. I'd take smaller single purpose files over one big
           | file any day.
           | 
           | Plus how often do you need to see all functions? If there's a
           | problem in RectangleArea function, you go to that file, no
           | need to look in CircleArea. If you're adding 'StarShapeArea'
           | you only care that it matches the caller's expectations, not
           | worrying about the other function logic.
        
           | xnickb wrote:
           | I see your point, but no one uses Windows notepad for coding
           | anymore.
           | 
           | There are far worse crimes than having code structure that
           | spans several files.
        
             | TeMPOraL wrote:
             | > _I see your point, but no one uses Windows notepad for
             | coding anymore._
             | 
             | Did I miss an IDE that inlines all those things for you
             | automatically? Because ones that only give you a "jump to
             | definition", or maybe a one-at-a-time preview in a context
             | popup, are not much better than Notepad++. You still don't
             | get to see all the relevant things at the same time.
        
               | kaba0 wrote:
               | Most real world applications won't have `print "red"` or
               | the like under a specific implementation. Good luck
               | inlining six 1000-line implementations into a single
               | switch statement.
        
               | TeMPOraL wrote:
               | The "project" vs. "external" vs. "system" library
               | conceptual distinction exists for a reason. But of course
               | you want presentation-level inlining to be semi-
               | automated, much like autocomplete and jump to definition.
        
             | Tomis02 wrote:
             | > There are far worse crimes than having code structure
             | that spans several files.
             | 
             | Sure, but what's the advantage of having your code split
             | over several files? Yes, you can jump between them with an
             | IDE, but that's still a disruption, and it makes it harder
             | to see common patterns that can help you simplify the code.
             | 
             | As demonstrated in the video, splitting up the switch into
             | multiple classes only hurts readability.
        
               | kaba0 wrote:
               | You are only seeing a screenful of code at any time, no
               | matter if they are in one file or multiple ones. I much
               | prefer each "section" of code, if you will, having a
               | specific name/location I can associate it with, to
               | scrolling like a madman. Yeah, I know about
               | markers/closable functions/opening multiple sections of
               | the same file/etc, but at that point whose workflow is
               | more complex?
        
       | gwbas1c wrote:
       | Two years ago I was subjected to NDepend: A clean-code checker
       | and enforcer.
       | 
       | Their tool was so dog slow I could see it paint the screen on a
       | modern computer.
       | 
       | I rejoiced when we yanked it out of our toolchain. Most of the
       | advice that it gave was unambiguously wrong.
        
       | devdude1337 wrote:
       | Most C++ projects I've dealt with are neither clean nor
       | performant. I'd rather follow clean code to improve
       | maintainability and get things done than optimize for
       | performance. It's also easier to find bottlenecks in a readable
       | and testable code base than in a prematurely-optimized one.
       | However, it is true that the more abstractions and indirections
       | are used, the slower software gets. Also, these examples are too
       | basic to make a real-world suggestion: never assume something is
       | slow in a large project because of indirection or
       | something... always get a profiler involved and run
       | tests with time requirements to identify and fix slow running
       | parts of a program.
        
       | yarg wrote:
       | He's picking holes in example code. Example code will often tell
       | you how to do a thing, and not why to do that thing.
       | 
       | And he argues that it's not a straw man.
        
         | cma wrote:
         | If he made up his own example it would definitely be called a
         | straw man.
        
       | mulmboy wrote:
        | Really enjoyed watching some guy breathlessly discover data-
        | oriented design:
        | https://en.wikipedia.org/wiki/Data-oriented_design
        | 
        | There's nothing novel in this video, really nothing to do with
        | clean code. This is the same sort of thing you see with pure
        | Python versus numpy.
        
         | drainyard wrote:
         | You could spend 2 minutes looking up who you are talking about.
         | 
         | This video from 2015 where Casey is interviewing Mike Acton:
         | https://www.youtube.com/watch?v=qWJpI2adCcs
         | 
         | Also this course where he specifically talks about Numpy and
         | pure Python: https://www.computerenhance.com/p/python-revisited
        
       | theknarf wrote:
       | This seems more like an argument against the object oriented
       | model of C++ than anything else. Would have been more interesting
       | if the performance was compared to languages like Rust.
        
         | abyesilyurt wrote:
         | My key learning is the importance of balancing performance and
         | code cleanliness instead of blindly adhering to clean code
         | principles.
        
           | Ygg2 wrote:
            | If you optimize for readability, performance will suffer.
            | If you optimize for performance, readability will suffer.
           | 
           | Casey prizes performance over everything else.
        
             | pkolaczk wrote:
             | In this particular case I find the code optimized for speed
             | (the one using switch) to be _also_ more readable and
             | simpler than the code using virtual dispatch.
             | 
              | The problem with virtual calls in a big project is that
              | there is no good way of knowing what the target of the
              | call is without additional tooling like an IDE. But in
              | the case of a switch/if, it is pretty obvious what the
              | cases are.
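              | 
              | To make the contrast concrete, here is a minimal C++
              | sketch (hypothetical names, not the code from the video):
              | the switch lists every dispatch target at the call site,
              | while the virtual version scatters them across whatever
              | subclasses exist in the codebase.
              | 
              |     enum class ShapeType { Square, Rectangle, Circle };
              | 
              |     struct Shape { ShapeType type; float w, h; };
              | 
              |     // Every possible target is visible right here.
              |     // (w doubles as the radius for circles.)
              |     float Area(const Shape& s) {
              |         switch (s.type) {
              |             case ShapeType::Square:    return s.w * s.w;
              |             case ShapeType::Rectangle: return s.w * s.h;
              |             case ShapeType::Circle:    return 3.14159f * s.w * s.w;
              |         }
              |         return 0.0f;
              |     }
              | 
              |     // Versus: the targets of base->Area() live in whatever
              |     // subclasses of ShapeBase happen to exist anywhere.
              |     struct ShapeBase { virtual float Area() const = 0; };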
        
               | karma_fountain wrote:
               | I do agree. I've looked at code bases where the switch
               | was replaced with virtuals and the like. It's kind of
               | hard to navigate around the code when that happens. I
               | think I'd usually rather have a completely separate path
               | through the code rather than switch or virtual, making
               | the decision at the highest level possible, usually the
               | top-level caller.
        
               | Ygg2 wrote:
                | Sure, but what happens once you want to start
                | supporting shapes other than the basic ones? Because
                | clean code assumes code will be changed/maintained.
               | 
               | Then you get people writing their own horrible hacks.
               | 
               | Both clean code and performance oriented design have
               | their extremes.
               | 
                | Clean Code has Spring with its Proxy/Method/Factory
                | monsters... and hyper-performance has its extreme in
                | the Story of Mel (i.e. read-and-weep-only code).
        
               | pkolaczk wrote:
                | > Sure, but what happens once you want to start
                | supporting shapes other than the basic ones?
                | 
                | Sure, but what happens once you want to start
                | supporting more operations on the shapes?
        
               | TeMPOraL wrote:
                | > _Sure, but what happens once you want to start
                | supporting shapes other than the basic ones? Because
                | clean code assumes code will be changed/maintained._
               | 
               | You get to push back and ask, is it worth the developer
               | time and predicted 1.5 - 5x perf drop across the board
               | (depending on shape specifics)? In some cases, it might
               | not be. In others, it might. But you get to ask the
               | question. And more importantly, whatever the outcome,
               | you're still left with software that's an order of
               | magnitude faster than the "clean code" one.
               | 
               | Clean code "assumes code will be changed/maintained" in a
               | maximally generic, unconstrained way. In a sense, it's
               | the embodiment of "YAGNI" violation: it tries to make any
               | possible change equally easy. At a huge cost to
               | performance, and often readability. More performant code,
               | written with the approach like Casey demonstrated, also
               | assumes code will be changed/maintained - but constraints
               | the directions of changes that are easy.
               | 
                | In the example from the video, as long as you can fit
                | your new shape into the same math as the other ones,
                | the change is trivial and free. A more complex shape
                | may force you to tweak the equation, taking a little
                | more time and incurring a performance penalty on all
                | shapes. An even more complex shape may require you to
                | rewrite the module,
               | costing you a lot of time and possibly performance. But
               | you can probably guess how likely the latter is going to
               | be - you're not making an "abstract shape study tool",
               | but rather a poly mesh renderer, or non-parametric CAD,
               | or something else that's _specific_.
        
         | bakugo wrote:
         | I'm surprised I had to scroll down this far to find the
         | obligatory Rust evangelist comment.
        
         | oleg_antonyan wrote:
          | Indeed. If anything this demo shows how badly C++ polymorphism
          | performs. It doesn't necessarily mean that all OOP languages
          | are created equal. Although I have no data to prove anything,
          | and frankly don't care, b/c all these arguments about clean vs
          | dirty code are meaningless in the absence of formally defined
          | rules and metrics universally enforced by some authority that
          | can revoke your sw dev license or something like that.
        
           | TeMPOraL wrote:
            | > _It doesn't necessarily mean that all OOP languages are
            | created equal._
           | 
           | Exactly - unless you're trying very hard, you're unlikely to
           | beat C++ polymorphism with your OOP code in a different
           | language. Which makes Casey's argument that much stronger.
            | C++, with its relatively unsophisticated OOP and minimal
            | overhead on everything, is as fast as you're going to get,
            | so it's good for showing just how slow that still is if you
            | follow the Uncle Bob et al. Clean Code tradition.
        
         | bombolo wrote:
         | I think it could be ok to have a link every once in a while
         | that doesn't talk about rust.
        
           | zaphodias wrote:
           | We're talking about C++ and performance, no way nobody would
           | mention rust :P
        
       | vore wrote:
        | This guy is so dogmatic about it that it hurts. I would argue
        | that clean code is a spectrum of how flexible vs. how rigid you
        | want your abstractions to be. If your abstractions are too
        | flexible for good performance, dial them back when you see the
        | issue. If your abstractions are too rigid for your software to
        | be extendable, then introduce indirection.
       | 
       | We can all write code that glues a very fixed set of things end
       | to end and squeeze every last CPU cycle of performance out of it,
       | but as we all know, software requirements change, and things like
       | polymorphism allow for much better composition of functionality.
        
         | athanagor2 wrote:
          | But is it true that "clean code" makes adapting to changing
          | requirements easier? I've seen a few testimonials saying
          | otherwise.
        
           | vore wrote:
           | I think it really truly depends. I think it's always good to
           | do the minimal viable thing first instead of being an
           | architecture astronaut, but if you've been asked for three
           | (random ballpark number) different implementations for the
           | same requirement it might be time to start adding some
           | indirection.
        
           | mattgreenrocks wrote:
           | The best idea in clean code is to stop coupling domain models
           | to implementation details like databases/the web/etc. Once
           | you grok that, then you're in a better position to work on
           | eliminating unnecessary coupling within the model itself.
           | 
           | There's lots of ways to do this poorly and well. There's no
           | process for it. That's a feature. I feel like a lot of the
           | flak clean code gets boils down to, "I followed it
           | dogmatically and look what it made me do!" It didn't make you
           | do anything; it's trying to teach you aesthetics, not a
           | process. Internalize the aesthetics and you won't need a
           | rigid process.
           | 
           | Obviously when you do this you probably need more code than
           | you'd normally write. That can be viewed as a maintenance
           | burden in some situations, esp. when you don't have product
           | market fit. Again, this shows that treating clean code like
           | some process that always produces better code in every
           | situation is extremely naive.
        
         | overgard wrote:
         | I think part of the problem with supposed "clean" code is that
         | it tends to be a matter of opinion. Is the polymorphic version
         | cleaner than the switch statement version? I would argue the
         | latter is actually easier to read. There's no real reason to
         | think "clean" code is actually clean other than anecdotes and
         | that someone wrote it in a book, but the performance is
         | something that can be objectively measured.
        
         | hinkley wrote:
          | A spectrum, or, I believe, more of a Venn diagram.
         | 
         | There was a moment in grade school where I was sat down and it
         | was explained to me that you don't have to take a test in
         | order. You can skip around if you want to, and I ran so far
         | with that notion that at 25 I probably should have written a
         | book on how to take tests, while I could still remember most of
         | it.
         | 
         | One of the few other "lightning bolt out of the blue"
         | experiences I can recall was realizing that some code
         | constructs are easier for both the human _and_ the compiler to
          | understand. You can be sympathetic to both instead of
         | compromising. They both have a fundamental problem around how
         | many concepts they can juggle at the exact same time. For any
         | given commit message or PR you can adjust your reasoning for
         | whichever stick the reviewer has up their butt about justifying
         | code changes.
        
         | Kukumber wrote:
         | 0 responsibility developers
         | 
         | you have Japan infrastructure, and you have Turkey
         | infrastructure
         | 
         | 6.1 quake in Japan = nothing destroyed
         | 
         | 6.1 quake in Turkey = everything collapses
         | 
         | The engineers in Turkey probably didn't value performance and
         | efficiency
         | 
         | It's the same for developers, you choose your camp wisely,
         | otherwise people will complain at you if they can no longer
         | bear your choice
         | 
          | You act innocent, but your code choices translate to a cost
          | (higher server bills for your company, higher energy bills for
          | your customers/users, time wasted for everyone, rare materials
          | depleted at a faster rate, growing tech junk)
         | 
         | Selfishness is high in the software industry
         | 
         | We are lucky it's not the same for the HW industry, but it's
         | getting hard for them to hide your incompetence, as more things
         | now run on a battery, and the battery tech is kinda struggling
         | 
         | Good thing is they get to sell more HW since the CPU is
         | "becoming slower" lol
         | 
          | So now we've got smartwatches that one needs to recharge every
          | damn day
        
           | ilyt wrote:
           | > The engineers in Turkey probably didn't value performance
           | and efficiency
           | 
            | Uh, no. It was corruption. There were standards that worked,
            | but people didn't follow them, plain and simple.
        
         | TheFattestNinja wrote:
         | Casey is a bit of a hardcore crusader on the topic, but I'd
         | hardly call dogmatic someone who can provide you evidence and
         | measurements backing their thesis.
         | 
          | The tests he put together here are hardly something I'd call a
          | straw-man argument; they seem like reasonable simplifications
          | of real cases.
        
           | vore wrote:
            | Performance measurements are only one dimension of code
            | quality. Having a laser focus on them disregards why you would
           | want to sacrifice performance for a different dimension of
           | code quality, such as extensibility for different
           | requirements.
           | 
           | You should check if your code is in the hot path before
           | optimizing, because the more you couple things together the
           | harder it is to change it around. For instance, in Casey's
           | example, if you wanted to add a polygon shape but you've
           | optimized calculating area into multiplying height x width by
           | a coefficient, that requires a significant refactor. If you
           | are sure you don't need polygons, that's a perfectly fine
           | optimization. But if you do, you need to start deoptimizing.
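            | 
            | Roughly the kind of table-driven version being described - a
            | sketch with made-up coefficients, not the exact code from the
            | video:
            | 
            |     // Square, rectangle, triangle and circle all reduce to
            |     // coefficient * width * height (the circle stores its
            |     // diameter), so one table covers them...
            |     static const float kAreaCoeff[] = {
            |         1.0f, 1.0f, 0.5f, 0.7853982f  // pi / 4
            |     };
            | 
            |     struct Shape { unsigned type; float width, height; };
            | 
            |     float Area(const Shape& s) {
            |         return kAreaCoeff[s.type] * s.width * s.height;
            |     }
            | 
            |     // ...but an n-sided polygon has no fixed width*height
            |     // coefficient, so adding one means reworking the whole
            |     // scheme - the "significant refactor" mentioned above.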
        
           | ajmurmann wrote:
           | Unfortunately, everything in our profession is a tradeoff.
           | Faster and maintainable are two of the many quality metrics
           | you can optimize for that will be at odds at times. What the
           | right balance is for a given piece of code depends on so much
           | context. It's a hard balance to get right.
        
           | st3fan wrote:
           | Evidence in a micro benchmark of a single page of code.
           | 
           | The focus on performance here ignores the fact that most
           | programs are large systems of many things that interact with
           | each other. That is where good design and abstractions and
           | "clean code" can really help.
           | 
           | Like all things it is about finding a balance and applying
           | the right techniques to the right parts of a larger system.
        
             | gjadi wrote:
             | I'd say that it really depends on what the code needs to
             | do.
             | 
             | For example, if the code is to be distributed and extended
             | as a third-party library, then the class hierarchy is
             | probably a better fit to allow extensibility.
             | 
              | But if the purpose of the API is to compute the area of
              | given shapes (as in the example), then it makes sense to
              | make it efficient, and there is no need to provide
              | extensibility to the outside world.
             | 
             | The advocates of "clean code" that Casey mentions will go
             | for extensibility no matter the use.
             | 
              | I find it very interesting to have numbers to weigh what
              | you are giving up when you go the extensibility route
              | instead of the non-pessimized route.
        
               | st3fan wrote:
               | > The advocates of "clean code" that Casey mentions will
               | go for extensibility no matter the use.
               | 
               | Unfortunately this is just some catchy stereotyping that
               | probably doesn't match reality.
        
               | gjadi wrote:
               | I don't know. Before reading Casey, I'd say most, if not
               | all, of my APIs would look like the "clean code" style.
               | 
               | The process was "think about the model, design a class
               | hierarchy that fits with the model, add the operations".
                | Even if I wasn't thinking about extensibility, that's
                | how I thought about my code.
                | 
                | Why? Because when I thought about performance, I used to
                | think about algorithms and I/O optimizations (for
                | example, batching).
               | 
               | Now I'll look into this data-oriented programming-thing
               | and see how that applies to my platform (embedded Java)
               | :D
        
               | jstimpfle wrote:
               | The code examples with the abstract / virtual methods for
               | the most straightforward concepts are a clown show. Not
               | cleanliness but madness.
        
             | TheFattestNinja wrote:
             | True, the example is simplistic, but that's just to make it
             | fit within reasonable exposition. The author has in the
             | past shown (elsewhere) how his techniques can actually make
             | dramatic differences in more concrete examples (he rose to
             | some Internet fame for building a performant shell that
             | could actually handle larger outputs orders of magnitude
             | better than most available alternatives).
             | 
             | To me the interesting point is the reminder that there is
             | an innate tension between going fast and being "clean"
              | (i.e. maintainable/understandable). And once you are aware
             | of it you can make your decisions in an informed way. Too
             | often this tension is forgotten/ignored/dogmatically put to
             | the back ("performance doesn't matter over cleanliness" and
             | the likes).
             | 
             | Mind you, I'm also of the camp that performance is very
             | secondary to cleanliness in modern enterprises, but I
             | appreciate a reminder of just how much we are sacrificing
             | on this altar.
        
               | cma wrote:
                | It was Windows Terminal he wrote something faster than.
                | Windows Terminal was doing a whole GPU draw call per
                | character (at 2x standard terminal height, 160x25, that's
                | as many draw calls as an AAA game).
        
               | gongle wrote:
                | No matter what design choices you argue for, there are
                | always ways to be an idiot about the implementation.
        
           | Barrin92 wrote:
           | > they seem like reasonable simplification of real-cases.
           | 
           | Paraphrasing Russ Ackoff, doing the right thing and doing a
           | thing right is the difference between wisdom or effectiveness
           | and efficiency. What Casey is doing here may be efficient,
           | but calculating a billion rectangles doesn't present a
           | realistic or general use case.
           | 
           | "Clean Code" or any paradigm of the sort aims to make
           | _qualitative_ , not quantitative improvements to code. What
           | you gain isn't performance but clarity when you build large
           | systems, reduction in errors, reduce complexity, and so on.
           | Nobody denies that you can make a program faster by manually
           | squeezing performance out if it. But that isn't the only
           | thing that matters, even if it's something you can easily
           | benchmark.
           | 
           | Looking at a tiny code example tells you very little about
           | the consequences of programming like this. If we program with
           | disregard for the system in favour of performance of one of
           | its parts, what does that mean three years down the line, in
            | a codebase with millions of lines of code? That's just one
           | question.
        
         | anonymoushn wrote:
          | Is there any flexibility tradeoff at all here?
        
           | vore wrote:
           | The shapes example is pretty contrived so I don't really have
           | an opinion on it either way. But imagine you have something
           | like a File interface and you have implementations of it e.g.
           | DiskFile, NetworkFile, etc., and you anticipate other
           | implementors. Why would you do anything other than have a
           | polymorphic interface?
        
             | anonymoushn wrote:
              | I don't know; I've never written anything where the code
              | needed to dispatch at runtime on the kind of file it has
              | without already knowing what kind of file that is.
        
               | vore wrote:
               | Have you never used C++ iostreams? Or the Python file
               | abstraction? Or Rust std::io::Read/std::io::Write? Or
               | Node.js streams? Or DOM Web Streams? Or Ruby files? Or
               | Haskell conduits? Or hell, fopencookie/funopen in C?
               | 
               | The abstraction is super common and allows you to connect
               | streams to each other without worrying about the
               | underlying mechanism, which 99% of the time I don't
               | really think you want to worry about unless you're sure
               | it's a performance overhead. And that's great, because I
               | surely don't want to write specializations by hand for
               | all the different combinations of streams I need to use
               | if I don't have to.
        
               | anonymoushn wrote:
               | I've used open(2) and the io_uring openat :)
               | 
               | I use generic readers and writers every day, but they
               | don't have any runtime dispatch. The question was "Why
               | would you do anything except vtables and runtime
               | dispatch?" One answer is that my code that uses generic
               | readers and writers and gets monomorphized gets to also
               | be generic over async-ness.
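                | 
                | In C++ terms the statically dispatched flavour looks
                | roughly like this (just a sketch; the names are made
                | up): the template is stamped out per concrete writer
                | type, so the call is direct and inlinable, with no
                | vtable involved.
                | 
                |     #include <cstdio>
                | 
                |     struct StdoutWriter {
                |         void write(const char* p, unsigned n) {
                |             std::fwrite(p, 1, n, stdout);
                |         }
                |     };
                | 
                |     // Instantiated per Writer type: the call below is
                |     // resolved at compile time, no runtime dispatch.
                |     template <typename Writer>
                |     void CopyGreeting(Writer& w) {
                |         w.write("hello", 5);
                |     }
                | 
                |     int main() {
                |         StdoutWriter w;
                |         CopyGreeting(w);  // monomorphised for StdoutWriter
                |     }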
        
               | saagarjha wrote:
               | Except in the kernel, of course.
        
               | vore wrote:
               | There you go, actually: you can have your clean code cake
               | and eat it too if you have the right language
               | abstractions (like parametric polymorphism).
        
             | mgaunard wrote:
             | Anything that yields data in chunks will not really be
             | affected by runtime polymorphism much.
             | 
             | It's only a problem when it's done per datum.
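              | 
              | A rough illustration of the difference (a hypothetical
              | interface, not any particular library):
              | 
              |     #include <cstddef>
              | 
              |     struct Source {
              |         virtual ~Source() = default;
              | 
              |         // One virtual call moves a whole chunk, so the
              |         // dispatch cost is amortised over thousands of
              |         // bytes.
              |         virtual std::size_t Read(char* buf, std::size_t len) = 0;
              | 
              |         // Per-datum dispatch: one indirect call per byte,
              |         // where the call overhead can dwarf the actual work.
              |         virtual int ReadByte() = 0;
              |     };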
        
             | saidinesh5 wrote:
              | I think the shapes example is more of a dig at various game
              | engines where you end up with long inheritance trees
              | (physicsbody -> usercontrollable physics body -> renderable
              | user controllable physics body etc..) as opposed to the
              | recent trend of using something like an Entity Component
              | System.
             | 
             | Also I think he isn't specifically against "clean code" but
             | how the first tool used by various "clean code advocates"
             | seems to be polymorphism via inheritance. I have seen this
             | enough in lots of Java codebases and "Enterprise C++
             | codebases". "We need X." "Oh first I will create an
             | Abstract Base class for X, then create X, so we can reuse X
             | nicely elsewhere when we need it". It is still on the
             | developers to understand that they may not even need it but
             | for them it is "clean code".
        
             | zelos wrote:
              | That does depend on how the abstraction is defined, of
              | course. I once worked on optimising a 2D Canvas C++ class
              | which had a nice top level virtual interface so you could
              | replace it with a different implementation. It was also
              | crazily slow because it defined:
              | 
              |     virtual void setPixel(int x, int y, int color) = 0;
              | 
              | and then implemented flood fill etc in terms of that.
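              | 
              | A coarser-grained primitive - a hypothetical sketch, not
              | the actual class - avoids paying the virtual call per
              | pixel inside the fill loop:
              | 
              |     // One virtual call per horizontal span instead of
              |     // one per pixel.
              |     virtual void fillSpan(int x0, int x1, int y, int color) = 0;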
        
         | jasmer wrote:
          | I don't think rich abstractions necessarily contribute to
          | 'clean code'; in fact, often the opposite.
          | 
          | Clean code means easy to read and maintain, not full of
          | arbitrary things 'because performance'.
        
       | KyeRussell wrote:
        | David Farley's new book is good. It advocates for the tenets of
        | "clean code" (at least in all lowercase), but given his
       | background I trust that he knows how to balance performance and
       | code hygiene.
       | 
       | There are people that are wrong on both extremes, obviously. I've
       | worked with one too many people that quite clearly have a
       | deficient understanding of software patterns and try to pass it
       | off as being contrarian speed freaks. Just as I've worked with
       | architecture astronauts.
       | 
       | I'm particularly skeptical of YouTubers that fall so strongly on
       | this side of the argument because there's a glut of "educators"
       | out there that haven't done anything more than self-guided toy
       | projects or work on startups whose codebase doesn't need to last
       | more than a few years. Not to say that this guy falls into those
       | two buckets. I honestly don't think I know him at all, and I'm
       | bad with names. So I'm totally prepared for someone to come in
       | and throw his credentials in my face. I can only have so much
       | professional respect for someone that is this...dramatic about
       | something though.
        
         | runevault wrote:
         | He's a gamedev by trade who also used to work at RAD tools back
         | in the day. You can argue he doesn't know how to work in large
         | teams because I don't think he has, but when it comes to
         | understanding CPUs (or GPUs) as well as a dev can he's about as
         | capable as anyone on that front.
         | 
          | He is aggressive to a degree I am not a fan of, but when it
          | comes to calling out bad practices, I find him more right than
          | wrong. He is, however, terrible at delivering his message in a
          | way that doesn't guarantee that anyone who didn't already
          | agree with him will get pissed off.
        
       | pron wrote:
       | Much of this is very compiler dependent. For example, Java's
       | compiler is generally able to perform more aggressive
       | optimisations, and even virtual calls are often, and even
       | usually, inlined (so if at a particular call-site only one shape
       | is encountered, there won't even be a branch, just straight
       | inline code, and if there are only two or three shapes, the call
        | would compile to a branch; only if there are more, i.e. at a
        | "megamorphic" call site, will a vtable indirection actually take
        | place). There is no general way of concluding that a virtual call
       | is more or less costly than a branch, but the best approximation
       | is "about the same."
       | 
       | Having said that, even Java now encourages programmers to use
       | algebraic data types when "programming in the small", and
       | OOP/encapsulation at module boundaries:
       | https://www.infoq.com/articles/data-oriented-programming-jav...
        | though not for performance reasons. My point is that "best
        | practice" recommendations for mainstream languages do change.
        
         | pkolaczk wrote:
         | > and even virtual calls are often, and even usually, inlined.
         | 
          | Last time I checked, it could not inline megamorphic call
          | sites, even if the implementations were trivial (returning
          | constants). At the same time, I've seen C++ compilers replace
          | an analogous switch that dispatched to constants with a simple
          | array lookup, with no branching at all.
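          | 
          | Something along these lines (a sketch; whether it becomes a
          | table lookup depends on the compiler and flags, so it's worth
          | checking the generated assembly):
          | 
          |     enum class Kind { A, B, C, D };
          | 
          |     int Tag(Kind k) {
          |         switch (k) {
          |             case Kind::A: return 10;
          |             case Kind::B: return 20;
          |             case Kind::C: return 30;
          |             case Kind::D: return 40;
          |         }
          |         return 0;
          |     }
          |     // GCC and Clang at -O2 will often lower this to an
          |     // indexed load from a small constant table, no branches.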
        
           | pron wrote:
           | If I'm not mistaken, C2 inlines up to three targets, but of
           | course, as a JIT, it inlines more aggressively than an AOT
           | compiler, as it does not require a soundness proof.
        
       | pkolaczk wrote:
       | Most of those Clean code rules are BS.
       | 
        | 1. Prefer polymorphism to "if/else" and "switch" - if anything,
        | that makes code _less_ readable, as it hides the dispatch
        | targets. Switch/if is much more direct and explicit. And
        | traditional OOP polymorphism like in C++ or Java makes the code
        | extensible in one particular dimension (types) at the expense of
        | making it non-extensible in another dimension (operations), so
        | there is no net win or loss in that area either. It is just a
        | different tool, not better or worse.
       | 
       | 2. Code should not know about the internals of objects it's
       | working with - again, that depends. Hiding the internals behind
       | an interface is good if the complexity of the interface is way
       | lower than the complexity of the internals. In that case the
       | abstraction reduces the cognitive load, because you don't have to
        | learn the internal implementation. However, the _total
        | complexity_ of the system modelled like that is _larger_, and if
       | you introduce too many indirection levels in too many places, or
       | if the complexity of the interfaces/abstractions is not much
       | smaller than the complexity they hide, then the project soon
       | becomes an overengineered mess like FizzBuzz Enterprise.
       | 
       | 3. Functions should be small - that's quite subjective, and also
       | depends on the complexity of the functions. A flat (not nested)
        | function can be large without causing issues. Going to the other
        | extreme is not good either - thousands of one-liners can also be
        | extremely hard to read.
       | 
       | 4. Functions should do one thing - "one thing" is not well
        | defined; and functions have a fractal nature - they appear to do more
        | things the more closely you inspect them. This rule can be used
       | to justify splitting _any_ function.
       | 
       | 5. "DRY" - Don't Repeat Yourself - this one is pretty good, as
       | long as one doesn't do DRY by just matching accidentally similar
       | code (e.g. in tests).
        
         | smcl wrote:
          | I think if someone takes _any_ of these typical guidelines -
          | clean, SOLID, REST, etc. - and mindlessly applies them, they're
          | likely to end up with a few parts of their application which
          | look weird, perform poorly, or end up being worse somehow.
         | This is because there are inevitably going to be situations
         | where the guidelines don't fit well - they're not necessarily
         | hard and fast rules after all.
         | 
         | Any time you have these common rules of thumb in _any_ part of
         | your life you need to evaluate whether or not they are
          | appropriate. But just because they're not infallibly universal
          | doesn't mean they're wrong; it just means life throws complex
          | situations at us sometimes, and that we need to be pragmatic
          | and flexible.
        
         | sebstefan wrote:
          | In his small example he already added unforeseen couplings that
          | could get out of hand if it were a big codebase. If you follow
         | the principle "switch statements over [X]", try to add a new
         | shape down the line and see how quickly you run into problems.
         | 
         | In the clean code version, your compiler will remind you to
         | implement calculateArea, calculateNumberOfVertices,
         | calculateWhatever, and so on and so forth.
         | 
         | With his version you have to add a new `case` to every switch
         | statement and hope you didn't miss one with a default case,
         | because the compiler won't catch it.
         | 
          | You don't need virtual functions and polymorphism to remove his
          | switch statements. Just use composition over inheritance.
        
           | hbrn wrote:
            | > he already added unforeseen couplings that could get out
            | of hand if it were a big codebase
           | 
           | And one should be able to fix things that get out of hand
           | _when_ they start getting out of hand. Architecture should be
           | based on current facts, not on our fantasies about the
           | future.
           | 
           | For some reason we're always making this weird assumption
           | that future engineers working on a problem are going to be
           | less capable than us. So we decouple things _in advance_ for
            | them. Those future idiots won't know how to architect for
           | scale, but we, today, with our limited domain expertise, know
           | better.
           | 
           | The reality is that in most cases we are those future
           | engineers. And, unsurprisingly, "future we" tend to know
           | more, not less.
        
           | pkolaczk wrote:
           | > With his version you have to add a new `case` to every
           | switch statement and hope you didn't miss one with a default
           | case, because the compiler won't catch it.
           | 
            | The compiler not catching it is a limitation of the language
            | he uses, not a limitation of the general concept of
            | switch/pattern matching. Scala, Haskell, and Rust do catch
            | those.
           | 
           | > If you follow the principle "switch statements over [X]",
           | try to add a new shape down the line and see how quickly you
           | run into problems.
           | 
           | And who said you'd ever need to add a new shape? Maybe you
           | will need to add a new operation? Try to add a new operation
            | `calculateWhatever` and see how many places in the code you
            | need to change instead of just adding one new function with a
           | switch.
           | 
           | Often you really don't know in which direction the code will
           | evolve. Most often you can't guess the coming change, so
           | don't make the code more complex _now_ in order to make it
           | simpler in the future (which may never come).
        
             | sebstefan wrote:
             | >The compiler not catching it is a limitation of the
             | language he uses, not the limitation of the general concept
             | of switch / pattern matching. Scala, Haskell, Rust do catch
             | those.
             | 
             | You're thinking of defaultless switch statements. Rust and
             | Typescript catch those issues as long as you don't add a
             | default. I'd already thought of it when I typed it
             | 
             | >And who said you'd ever need to add a new shape?
             | 
             | o_o
             | 
             | >Maybe you will need to add a new operation? Try to add a
             | new operation `calculateWhatever` and see how many places
             | of the code you need to chnage instead of just adding one
             | new function with a switch.
             | 
              | You do realize that your "one function with a switch" does
              | have to cover every case exactly like the clean code
              | version; it's not magic, you're just fitting it all into
              | one switch/case, and you'll probably end up extracting
              | those case: blocks into separate functions anyway.
             | 
              | And on top of that you're not safe from a colleague adding
              | a useless "default" case at the end out of paranoia and
              | making your compiler miss a future problem when a new
              | shape gets added.
        
               | pkolaczk wrote:
               | > You're thinking of defaultless switch statements. Rust
               | and Typescript catch those issues as long as you don't
               | add a default. I'd already thought of it when I typed it
               | 
                | But the same problem exists with interfaces and virtual
                | dispatch! If you provide a default implementation at the
                | interface / abstract class level, then the compiler
                | _won't_ tell you that you forgot to implement that
                | method, because it sees that a default one exists.
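                | 
                | For instance (a contrived sketch):
                | 
                |     struct Shape {
                |         virtual ~Shape() = default;
                |         // The default implementation is the "safety
                |         // net" that also hides mistakes.
                |         virtual float Perimeter() const { return 0.0f; }
                |     };
                | 
                |     struct Triangle : Shape {
                |         // Forgot to override Perimeter(); this still
                |         // compiles and silently returns 0 - the same
                |         // failure mode as a default: branch in a switch.
                |     };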
        
               | Tenticles wrote:
                | You are not safe from a colleague adding a dozen classes
                | to add two numbers if you are enabling the "clean" crowd
                | either. When I was at uni I was obsessed with these clean
                | ideas and how they would allow big teams to work. After
                | years of working I have seen these ideas fail to actually
                | make teams work in a harmonious manner. The worst hit
                | against these ideologies, for me, was finally working in
                | a place without them and seeing how a lot of the alleged
                | "advantages" of "clean code" could be achieved by other,
                | simpler means, and how much these ideologies get in the
                | way of actually being productive.
               | 
               | Productivity is achieved in spite of clean, not thanks to
               | it.
        
               | sebstefan wrote:
               | >You are not safe from a colleague adding a dozen classes
               | to add two numbers
               | 
               | The difference is, adding "default" cases to complete
               | switch statements happens all the time, you've probably
               | witnessed it, I know I have, whereas colleagues adding a
               | dozen classes to add two numbers doesn't.
               | 
                | You're also never safe from meteorites crashing into your
                | building; let's talk about real antipatterns.
        
         | kps wrote:
         | > Prefer polymorphism to "if/else" and "switch" - if anything,
         | that makes code less readable, as it hides the dispatch
         | targets.
         | 
          | Worse, I think, is that it hides the _condition_, which can be
         | arbitrarily far away in space and time.
         | 
         | Certainly dynamic dispatch can be very useful, and the
         | abstraction can be clearer than the alternatives. A rule of
         | thumb is to consider whether you'd do it if you were writing in
         | C, using explicit tables of function pointers. If that would be
         | clearer than conditional statements, do it.
         | 
         | (In case it isn't obvious, I'm talking about languages in the
         | C++/Java vein here.)
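          | 
          | In the spirit of that rule of thumb, a sketch (C-flavoured but
          | valid C++; the names are made up): the dispatch table is a
          | plain array, so both the condition (the index) and the
          | possible targets are in one place.
          | 
          |     #include <cstdio>
          | 
          |     typedef void (*Handler)();
          | 
          |     static void OnOpen()  { std::puts("open");  }
          |     static void OnClose() { std::puts("close"); }
          | 
          |     // Explicit table of function pointers: the whole
          |     // dispatch surface is visible right here.
          |     static const Handler kHandlers[] = { OnOpen, OnClose };
          | 
          |     enum Event { EV_OPEN = 0, EV_CLOSE = 1 };
          | 
          |     void Dispatch(Event e) { kHandlers[e](); }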
        
       | garganzol wrote:
        | The article gives advice from the past. Nowadays it is all
        | about zero-cost abstractions and automatic optimization. These
        | trends will only solidify in the future, defining the new norm.
        | And until that future fully arrives, optimize for your
        | bottlenecks.
        
       | MikeCampo wrote:
       | That was enjoyable and I'm happy to see it triggering the
       | basement dwellers.
        
       | vborovikov wrote:
       | What is the author suggesting? To write software using infinite
       | loops changing global state? Makes sense for video games but not
       | for the custom enterprise software where clean code practices are
       | usually applied.
       | 
       | The enterprise code must be easy to change because it deals with
       | the external data sources and devices, integration into human
       | processes, and constantly changing end-user needs. Clean code
       | practices allow that, it's not about CPU performance and memory
       | optimizations at all.
        
         | Johanx64 wrote:
         | >The enterprise code must be easy to change because it deals
         | with the external data sources and devices, integration into
         | human processes, and constantly changing end-user needs. Clean
         | code practices allow that, it's not about CPU performance and
         | memory optimizations at all.
         | 
          | There are no good metrics that measure how "clean code" (at
          | least the given rules) makes the code easier or harder to
          | change and maintain.
         | 
          | All the Java-style "enterprise-type code" I've encountered is
          | bloated, full of boilerplate getters and setters and all sorts
          | of abstractions that often make things harder, not easier, to
          | understand/maintain, etc.
         | 
         | However CPU performance is easy to measure, and sticking to
         | "clean code" rules as given in the video demonstrably sets you
         | back a decade in hardware progress/makes the code run 10x
         | slower.
         | 
         | > Clean code practices allow that
         | 
         | This is what you believe, not something you can actually
         | measure as far as I know
        
       ___________________________________________________________________
       (page generated 2023-02-28 23:00 UTC)