[HN Gopher] Test smarter, not harder (2020)
       ___________________________________________________________________
        
       Test smarter, not harder (2020)
        
       Author : bubblehack3r
       Score  : 60 points
       Date   : 2022-10-17 09:49 UTC (13 hours ago)
        
 (HTM) web link (lukeplant.me.uk)
 (TXT) w3m dump (lukeplant.me.uk)
        
       | ActionHank wrote:
       | I've been on both types of projects.
       | 
       | A big side effect of writing tests extensively is that the code
       | is often more thoughtfully written and in smaller chunks. No one
       | wants code that is hard to test or large enough that it requires
       | extensive mocking and faking.
       | 
       | Gut feel is that the code quality could be better on more deeply
       | tested code.
        
         | Veuxdo wrote:
         | > No one wants code that is hard to test
         | 
         | There are cases when testability shouldn't be a concern. If
         | you're writing code iteratively or in an agile way, for
         | instance. Nothing is worse than CI/CD rejecting your
         | experimental PR for lack of unit test coverage.
        
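The effect ActionHank describes can be made concrete with a small, entirely hypothetical sketch: pulling the pure logic out of an I/O-heavy function leaves chunks that can be tested without mocks or temp files (all names below are invented for illustration).

```python
# Hypothetical sketch (not from the thread): the parsing logic is pulled
# out of the I/O path, so the interesting part is testable with plain
# strings and no mocking.

def parse_threshold(line: str) -> float:
    """Pure logic: trivially testable."""
    key, _, value = line.partition("=")
    if key.strip() != "threshold":
        raise ValueError(f"unexpected key: {key.strip()!r}")
    return float(value)

def load_threshold(path: str) -> float:
    """Thin I/O wrapper around the pure function."""
    with open(path) as fh:
        return parse_threshold(fh.readline())

assert parse_threshold("threshold = 0.75") == 0.75
```

The wrapper barely needs its own tests; a single integration check (or none) suffices, while the pure function absorbs the edge cases.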
       | itsmemattchung wrote:
       | While I think writing tests for the "happy" path builds good
       | momentum, especially for a code base with no existing tests, the
       | real value lies in writing tests that exercise the edge cases.
       | Guy Steele put it best:
       | 
       | "I spent a lot of time worrying about edge cases. That's
       | something I learned from Trenchard More and his array theory for
        | APL. His contention was that if you took care of the edge cases
       | then the stuff in the middle usually took care of itself."
        
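A hedged illustration of the edge-case point (the function and its tests are hypothetical): the "middle" is one typical call, while the edges, such as empty input, an oversized chunk, and an exact multiple, are where the bugs usually hide.

```python
# Hypothetical function and edge-case tests.

def chunks(xs, size):
    """Split xs into consecutive pieces of at most `size` elements."""
    if size <= 0:
        raise ValueError("size must be positive")
    return [xs[i:i + size] for i in range(0, len(xs), size)]

# The middle: one typical call.
assert chunks([1, 2, 3, 4, 5], 2) == [[1, 2], [3, 4], [5]]

# The edges, which tend to be where bugs hide:
assert chunks([], 3) == []                           # empty input
assert chunks([1, 2], 5) == [[1, 2]]                 # size larger than input
assert chunks([1, 2, 3, 4], 2) == [[1, 2], [3, 4]]   # exact multiple
```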
       | a_c wrote:
        | I see testing as a partial solution to the more general question
        | - does this thing work? In software product development, someone
        | comes up with a problem whose solution is to build something,
        | someone then builds it, someone tests that it functionally
        | works, and then someone tests whether it addresses the right
        | problem. "Someone" can be the same person or split across
        | different teams. But I see duplicated effort in answering the
        | question of "does this thing work?"
        
       | ChrisMarshallNY wrote:
       | I enjoyed it.
       | 
       | I tend to use test harnesses. I write about my approach, here:
       | https://littlegreenviper.com/miscellany/testing-harness-vs-u...
       | 
       | That said, my testing code generally dwarfs my implementation
       | code; whether harness or unit. Part of the reason is that I spent
       | 27 years at a company, where Quality was a religion, and it was
       | very important, not to faceplant in public.
       | 
       | I learned to write very effective tests, very quickly. Most of my
        | test harnesses are "full-fat" applications that many people
        | would want to ship. I'll often throw them together in a day or
       | so. It helps me to practice for The Rilly Big Shoo.
       | 
       | I use almost every line of code I write, in shipping software, so
       | it is a very practical approach.
        
       | thisiswrongggg wrote:
       | "Don't listen to Uncle Bob! "
       | 
       | +1
       | 
       | In general, I've learned to listen to people who actually do
       | something rather than people writing books and giving shows about
       | that something.
        
         | holri wrote:
         | "Seek out the ones who create and are willing to share
         | knowledge Beware of the 'authorities' who don't themselves
         | create"
         | 
          | Excerpt from "Music Poetry" written by Chick Corea
        
       | cube00 wrote:
        | _Don't write tests for things that can be more effectively
       | tested in other ways, and lean on other correctness methodologies
       | as much as possible. These include: code review_
       | 
       | I'll take tests any day over the luck of getting an individual to
       | pick up flaws in the code review.
        
         | eyelidlessness wrote:
         | I agree, and I work on a team with impressively thorough review
         | habits. The obvious reason is that tests allow reviewers to
         | focus on more substantive questions than what the code behavior
          | _is_ (e.g. "is this the desired behavior?"). A less obvious
         | reason, but more compelling to me, is that code which hasn't
         | changed is very unlikely to be reviewed in the context of
         | changes under review, even if the unchanged code should have
         | changed.
        
         | kortex wrote:
         | They are complementary. I've had code reviews pick up gaps in
         | unit tests (e.g. the code is covered, but the state space
         | isn't), and I've had code reviews miss some absolute doozies.
          | Also, code review is as much about sharing knowledge and
          | reducing bus factor (if not more so) as it is about acting
          | like some great filter.
        
       | Veuxdo wrote:
       | But I have it on good authority I need 90% unit test coverage for
       | branches, functions, and lines, and this is what makes code
       | "quality".
        
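The sarcasm above has a concrete basis: coverage measures execution, not correctness. A deliberately contrived sketch (hypothetical function, not from the article) can hit 100% line and branch coverage while silently blessing a bug:

```python
# Hypothetical: both branches of `safe_div` are executed by the tests
# below, so line and branch coverage report 100%, yet the questionable
# behavior on division by zero is never actually challenged.

def safe_div(a, b):
    if b == 0:
        return 0        # debatable: callers may have wanted an error
    return a / b

assert safe_div(6, 3) == 2
assert safe_div(1, 0) == 0  # "passes", silently blessing the behavior
```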
       | tetha wrote:
        | Yeah, we made the mistake of testing too much in our last
        | configuration management. A test suite becomes strangling when
        | it starts to restrict every single change to the code base,
        | forcing you to touch 3-4 tests per configuration file deployed
        | to a server.
       | 
       | At this point, most of our tests are much higher level by default
       | - setup 3 nodes, run the config management against it, check if
       | the cluster formed. If the cluster formed, the setup from the
       | config management can't be that wrong.
       | 
       | From there, we're adding tests in an outage/bug-driven way. If it
       | doesn't deploy and load a config correctly into the application,
       | that's a test to write. Or, we recently almost ran into an outage
       | because some failure earlier on caused the config management to
       | start tearing the cluster apart because of missing metadata. So
       | we added a test that it cancels the rollout in such a case.
       | 
        | Those tests are valuable, because we can rely on those issues
        | not cropping up again. A test that breaks on YAML formatting
        | isn't as valuable.
        
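The "did the cluster form?" check described above splits naturally into gathering per-node status and a pure decision function. The decision half can be sketched like this (all names and status fields are hypothetical, not from any real config-management tool):

```python
# Hypothetical sketch: deciding "did the cluster form?" from per-node
# status reports, as a high-level test might after running the config
# management against fresh nodes.

def cluster_formed(statuses, expected_size):
    """statuses: list of dicts like {"cluster_id": ..., "members": int}."""
    if len(statuses) != expected_size:
        return False
    cluster_ids = {s["cluster_id"] for s in statuses}
    # Every node must report the same cluster and see all members.
    return (len(cluster_ids) == 1
            and all(s["members"] == expected_size for s in statuses))

# A real test would gather live statuses from the nodes; canned data here:
ok = [{"cluster_id": "c1", "members": 3} for _ in range(3)]
split = [{"cluster_id": "c1", "members": 1},
         {"cluster_id": "c2", "members": 2},
         {"cluster_id": "c2", "members": 2}]
assert cluster_formed(ok, 3)
assert not cluster_formed(split, 3)
```

If this predicate holds after a fresh run, the setup done by the config management "can't be that wrong", which is exactly the bar the comment sets.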
         | Veuxdo wrote:
         | Are your tests for "the configuration is correct" or "the
         | configuration file was found and read"...?
        
           | tetha wrote:
           | In our old config management, we would indeed go ahead and
           | parse configuration files and check values within the
           | configuration files - even values entirely unrelated to the
           | test at hand. Then you'd go ahead and change some default
           | value in the config management because now you understand the
           | system more and a whole bunch of unrelated tests would start
           | failing.
           | 
           | By now, we mostly do a quick syntax check of the config and
           | the rest is usually handled by smoke-testing the application.
           | The config files are usually only parsed if they contain some
           | non-trivial degree of dynamic computation.
        
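A minimal version of that "quick syntax check" idea: parse only, assert nothing about the values. JSON via the stdlib is used here purely for illustration; the configs in question may well be YAML or another format.

```python
import json

def syntax_ok(text: str) -> bool:
    """Parse only: no assertions about values, per the approach above."""
    try:
        json.loads(text)
        return True
    except json.JSONDecodeError:
        return False

assert syntax_ok('{"port": 8080}')
assert not syntax_ok('{"port": 8080')   # unbalanced brace
```

Because the check never inspects values, changing a default in the config management cannot make a pile of unrelated tests fail, which is the failure mode the comment describes.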
       | lytefm wrote:
       | > Only write necessary tests -- specifically, tests whose
       | estimated value is greater than their estimated cost.
       | 
        | Definitely. Ensuring that basic functionality works and that no
        | awful regressions are introduced, using high-level tests (e2e,
        | integration, based on realistic test data) is great. Such tests
        | allow refactoring with sufficient confidence.
       | 
        | Having hundreds of tests for trivial functions that might not
        | even be relevant in day-to-day use for most customers, and that
        | need to be rewritten or thrown away when refactoring - not so
        | much.
        
       ___________________________________________________________________
       (page generated 2022-10-17 23:02 UTC)