[HN Gopher] Cross-Branch Testing
       ___________________________________________________________________
        
       Cross-Branch Testing
        
       Author : azhenley
       Score  : 15 points
       Date   : 2021-07-03 19:03 UTC (1 day ago)
        
 (HTM) web link (www.hillelwayne.com)
 (TXT) w3m dump (www.hillelwayne.com)
        
       | Traster wrote:
       | >Unit testing. Problem is that if the error is uniformly
       | distributed then each unit test only has a 1%-ish chance of
       | finding the bug. We need to test a lot more inputs.
       | 
       | Not if you're writing your unit tests correctly. Thought should
       | go into what you're testing. It might be true that you're going
       | to find an error in 1% of cases, but you shouldn't be shooting
       | random shit into your test. Or in other words, you should
       | "constrain" your "random test". I find it difficult to think of
       | a situation where you actually don't know any helpful
       | mathematical properties of your code - since you've obviously
       | written the code with the intention of performing some function.
       | There may be good reasons for doing these stupid rotations and
       | powers etc. But at the end of the day there was some non-code
       | motivation for writing this code, and _that_ is what you should
       | be thinking about when writing your unit tests.
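       | 
       | To make that concrete, here is a minimal sketch of a
       | constrained random test in Python using the hypothesis
       | package (the library choice and the rotation example are
       | mine, picked to echo the article's rotations):
       | 
       |   from hypothesis import given, strategies as st
       | 
       |   def rotate_left(x: int, n: int, width: int = 32) -> int:
       |       # Rotate the low `width` bits of x left by n positions.
       |       n %= width
       |       mask = (1 << width) - 1
       |       return ((x << n) | (x >> (width - n))) & mask
       | 
       |   # Property: rotating left by n and then by width - n is the
       |   # identity. This is the kind of non-code motivation you can
       |   # encode, instead of firing unconstrained inputs at the code.
       |   @given(st.integers(min_value=0, max_value=2**32 - 1),
       |          st.integers(min_value=0, max_value=31))
       |   def test_rotation_round_trips(x, n):
       |       assert rotate_left(rotate_left(x, n), 32 - n) == x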
       | 
       | What cross-branch testing gives you is an assurance that your
       | code is exactly as broken as when you started - no more, no
       | less. Now, think about what is happening when you test across
       | branches: you have your golden reference model (which might or
       | might not be right), and you have some python packages that
       | you _hope_ are doing a better job than you at constrained
       | random testing. In fact, their odds of hitting that 1% are no
       | better than your unit tests'.
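       | 
       | In code, the cross-branch comparison amounts to roughly this
       | sketch (old_impl and new_impl are placeholder names for the
       | two branches):
       | 
       |   import random
       | 
       |   def old_impl(x: int) -> int:
       |       return x * x        # the trusted "golden" branch
       | 
       |   def new_impl(x: int) -> int:
       |       return x ** 2       # the refactored branch under test
       | 
       |   # Feed both branches the same random inputs and flag any
       |   # divergence; a fixed seed keeps failures reproducible.
       |   random.seed(0)
       |   for _ in range(10_000):
       |       x = random.randint(-10**6, 10**6)
       |       assert new_impl(x) == old_impl(x), f"diverge at x={x}"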
       | 
       | I actually think that, to some extent, testing against your
       | previous implementation has value beyond unit tests, because
       | in my experience the first person to write the code probably
       | tested it thoroughly by hand and didn't convert all those
       | manual tests into unit tests, so your confidence in the
       | original implementation is higher than in revisions. But the
       | lesson to learn from that is to make sure that when you first
       | implement something, you push as much of your verification as
       | possible into unit tests.
        
       | chris_engel wrote:
       | So... how do I know the other branch was correct without tests?
        
         | pydry wrote:
         | You don't. Not without further investigation. You need to
         | figure out _why_ they are different.
         | 
         | Sometimes this technique catches bugs in prod that were
         | unknowingly fixed by the refactoring.
        
         | ElViajero wrote:
         | The test is done so you can verify that your new code has no
         | side effects that change the previous behavior.
         | 
         | How do you know that your tests are correct? The problem is
         | equivalent.
        
       | pydry wrote:
       | Isn't this just golden master testing?
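       | 
       | (For comparison, a golden master test typically records the
       | current output once and diffs later runs against it; a
       | minimal sketch, with an illustrative file name:)
       | 
       |   import json, os
       | 
       |   def golden_master_check(func, inputs,
       |                           path="golden_master.json"):
       |       # First run: record the master. Later runs: compare.
       |       outputs = [func(x) for x in inputs]
       |       if not os.path.exists(path):
       |           with open(path, "w") as f:
       |               json.dump(outputs, f)
       |           return
       |       with open(path) as f:
       |           expected = json.load(f)
       |       assert outputs == expected, "differs from golden master"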
        
       ___________________________________________________________________
       (page generated 2021-07-04 23:01 UTC)