[HN Gopher] What Is Property Based Testing?
       ___________________________________________________________________
        
       What Is Property Based Testing?
        
       Author : rahimnathwani
       Score  : 50 points
       Date   : 2021-09-19 19:00 UTC (3 hours ago)
        
 (HTM) web link (hypothesis.works)
 (TXT) w3m dump (hypothesis.works)
        
       | tmoertel wrote:
       | I like to contrast traditional case-based testing with property-
       | based testing like this:
       | 
       | With both approaches, you want to show that certain claims hold
       | about the software you are testing. With case-based testing, the
       | claims are implicit (implied by your cases), and the cases are
       | explicit. With property-based testing, the claims are explicit
       | (your property specifications) and the cases are implicit (being
       | some fuzzed subset of the claim's domain).
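        | 
        | A tiny sketch of that contrast in Python, using Hypothesis for
        | the property-based half (the test bodies are just illustrative):
        | 
        |     from hypothesis import given, strategies as st
        | 
        |     # Case-based: explicit cases, implicit claim
        |     # ("sorting orders these particular lists").
        |     def test_sort_examples():
        |         assert sorted([3, 1, 2]) == [1, 2, 3]
        |         assert sorted([]) == []
        | 
        |     # Property-based: explicit claim, cases fuzzed by the library.
        |     @given(st.lists(st.integers()))
        |     def test_sort_output_is_ordered(xs):
        |         out = sorted(xs)
        |         assert all(a <= b for a, b in zip(out, out[1:]))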
        
       | iso8859-1 wrote:
        | The article separates types from property testing. I can
        | sympathize: it's true, and it makes sense.
       | 
        | But testing is a tool for reliability, to be used in conjunction
        | with many other tools, including, you guessed it, typing.
       | 
       | In 2016 when this article was written, typing wasn't yet popular
       | in Python. But now it is, and indeed more popular than property
       | testing will ever be.
       | 
        | But Hypothesis doesn't support typing very well, though it does
        | try [0]. Just take a look at that list of caveats; it is a hack
        | job by comparison.
       | 
       | Even if you don't generate your Strategies from types, Hypothesis
       | requires you to know whether the type you construct is internally
        | a sum type or a product type [1]. The testing framework
        | shouldn't have to know these details; this is unnecessary
        | complexity.
       | 
        | If you think the QuickCheck approach of lawless, unnamed
        | instances is bad, there is now a Haskell library [2] where Gens
        | are plain values, as they are in Hypothesis (where they are called
       | Strategies). It probably became popular only after 2016, which
       | could be why the article doesn't mention it. Hypothesis also
       | supports "registering" strategies so that they are auto-
       | discovered on-demand. It's done at runtime of course, unlike with
       | QuickCheck, where it is type class based, as previously
       | mentioned.
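        | 
        | For what it's worth, the runtime registration looks roughly like
        | this (a sketch; the UserId type here is made up):
        | 
        |     from dataclasses import dataclass
        |     from hypothesis import given, strategies as st
        | 
        |     @dataclass
        |     class UserId:
        |         value: int
        | 
        |     # Registered once; from_type() then discovers it on demand,
        |     # at runtime rather than via type classes.
        |     st.register_type_strategy(
        |         UserId, st.builds(UserId, value=st.integers(min_value=1))
        |     )
        | 
        |     @given(st.from_type(UserId))
        |     def test_user_ids_are_positive(uid):
        |         assert uid.value > 0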
       | 
       | I applaud the article for cementing that property checking can
       | work without typing, but I worry that readers will conclude that
       | Haskell does it wrong, which is a false implication. You don't
       | _have_ to use ad-hoc polymorphism for discovering strategies.
       | 
       | [0]:
       | https://hypothesis.readthedocs.io/en/latest/data.html#hypoth...
       | 
       | [1]:
       | https://github.com/HypothesisWorks/hypothesis/issues/2693#is...
       | 
       | [2]: https://hackage.haskell.org/package/hedgehog#readme
        
       | laerus wrote:
        | Also check out schemathesis [1], which is built on top of
        | Hypothesis, for API testing.
       | 
       | [1] https://github.com/schemathesis/schemathesis
        
       | Normal_gaussian wrote:
        | I generally conflate fuzz testing and property-based testing,
        | even though I nearly always mean property-based testing. This is
        | lazy, and lazily justified with "fuzzing is PBT with very
        | generic inputs / PBT is fuzzing with more intelligent inputs".
       | 
        | Regardless, the hard part with PBT is generating the inputs so
        | they cover real cases and don't systematically miss some (often
        | addressed by choosing different testing boundaries). The hard
        | part with fuzzing is knowing you've run enough to have hit many
        | real data flows.
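        | 
        | With Hypothesis, one common trick is to bias a generic strategy
        | toward known boundary values so they aren't systematically missed.
        | A sketch, with made-up boundary values and a toy property:
        | 
        |     from hypothesis import given, strategies as st
        | 
        |     # Mix hand-picked boundaries into an otherwise generic input.
        |     sizes = st.one_of(
        |         st.sampled_from([0, 1, 2**31 - 1, 2**31]),
        |         st.integers(min_value=0),
        |     )
        | 
        |     @given(sizes)
        |     def test_size_formatting_round_trips(n):
        |         assert int(str(n)) == n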
        
         | [deleted]
        
         | carlmr wrote:
         | There's a really good way to explain it
        
       | amw-zero wrote:
       | I think this is actually the better definition of property based
       | testing, written by the same author:
       | https://increment.com/testing/in-praise-of-property-based-te...
       | 
       | My take: don't shy away from the logic perspective of what
       | properties are, because it's the simplest explanation. A property
        | is a logical proposition, e.g. "p implies q". Property-based
        | testing randomizes the inputs the proposition ranges over as a
        | proxy for all possible values, since exhaustive testing is
        | practically impossible in almost all cases. The hypothesis is
        | that running these tests for long enough will eventually expose
        | a bug, since more and more of the input space gets covered over
        | time.
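        | 
        | In Hypothesis, that "p implies q" shape often looks something
        | like this (a toy property, just to show the form):
        | 
        |     from hypothesis import assume, given, strategies as st
        | 
        |     # Claim: for all integers x, x > 0 implies x * x >= x.
        |     @given(st.integers())
        |     def test_square_dominates_positive_inputs(x):
        |         assume(x > 0)      # p: restrict to the antecedent
        |         assert x * x >= x  # q: check the consequent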
       | 
        | PBT is an addictive tool. I used it on a personal project
        | involving a lot of date filtering, and it found so many edge
        | cases related to date parsing and timezone handling. In my
        | experience, you also end up writing far less test code, since
        | the library generates the test cases for you.
       | 
        | The two major downsides are the difficulty of expressing useful
        | properties, and test runtime. Expressing properties is hard, but
        | I suspect it gets easier with practice.
       | Hillel Wayne wrote a bit about properties here:
       | https://www.hillelwayne.com/contract-examples/
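        | 
        | Round-trip properties are one of the easier shapes to start
        | with, e.g. (a sketch that deliberately skips floats so NaN
        | doesn't break the equality check):
        | 
        |     import json
        |     from hypothesis import given, strategies as st
        | 
        |     # decode(encode(x)) == x over JSON-representable values.
        |     json_values = st.recursive(
        |         st.none() | st.booleans() | st.integers() | st.text(),
        |         lambda kids: st.lists(kids)
        |         | st.dictionaries(st.text(), kids),
        |     )
        | 
        |     @given(json_values)
        |     def test_json_round_trip(value):
        |         assert json.loads(json.dumps(value)) == value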
       | 
        | Regarding the test runtime: it's just something to be wary of.
        | You can easily end up running tens of thousands of test cases,
        | depending on how you configure the library. That makes things
        | like testing against a real database impractical, unless you
        | tightly control which inputs get generated, which defeats the
        | purpose of random data generation.
       | 
       | I think parallelization and non-blocking test suites are the
       | answer here. Locally and in CI, they can be run with a reasonable
       | N so that obvious errors hopefully get caught, but large N's get
       | saved for an async test suite that can benefit from high
       | parallelization.
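        | 
        | Hypothesis already has a hook for that split via settings
        | profiles (a sketch; the profile names and counts are arbitrary):
        | 
        |     import os
        |     from hypothesis import settings
        | 
        |     # Small N locally and in CI, large N for an async/nightly run.
        |     settings.register_profile("dev", max_examples=100)
        |     settings.register_profile("ci", max_examples=1_000)
        |     settings.register_profile("nightly", max_examples=50_000)
        | 
        |     settings.load_profile(
        |         os.environ.get("HYPOTHESIS_PROFILE", "dev")
        |     )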
       | 
        | My last point is that, even with all these benefits,
        | example-based tests are still easier to write and still serve a
        | purpose. They also let you test specific edge cases explicitly.
       | 
       | Like everything else, PBT is just another tool in the shed.
        
       ___________________________________________________________________
       (page generated 2021-09-19 23:00 UTC)