[HN Gopher] Making Software Reliable: The Importance of Testability
___________________________________________________________________
Making Software Reliable: The Importance of Testability
Author : misoukrane
Score : 28 points
Date : 2023-12-20 16:34 UTC (6 hours ago)
(HTM) web link (www.codereliant.io)
(TXT) w3m dump (www.codereliant.io)
| gjsman-1000 wrote:
| I'm actually rather negative on automated testing.
|
| Every project that has gotten rid of the QA team to rely on
| automated tests hasn't exactly gone well. Windows being the
| obvious example, but there are others. I'm not seeing the quality
| improvement that should be there. And even when there are tests,
| elementary mistakes (like the time Windows 10 would delete your
| Documents folder) slip through the pipeline and screw everything
| up. How is it that some of the most reliable software in the
| world (like the moon landing suite, the IRS tax system, or stock
| trading mainframes, or the Windows NT kernel) was written before
| automated tests; yet some of the buggiest software in the world
| is the most well-tested (like Windows 10, or Google Drive, or
| just about every SPA)?
|
| In a team setting, they certainly seem to catch bugs. But is that
| because the tests have caught so many bugs that would've slipped
| into production; or is it that the team can afford to be careless
| now and writes sloppier code to begin with? I'm increasingly
| suspecting it's the latter, and while the tests catch most of the
| sloppiness, sometimes the slop passes...
| natbennett wrote:
| I've worked on a large project that got more reliable after
| they removed the QA team. Removing the QA team was incidental
| to the quality improvement though -- the QA team got removed
| because basically it was just producing a nightly report of
| problems that didn't matter.
|
| What actually improved quality on that project was, basically,
| architecture improvements. A major subsystem was rewritten in a
| way that made large classes of important bugs impossible. That
| rewrite _was_ substantially supported by automated tests,
| though -- a rather advanced simulation system IIRC.
|
| I do agree that "automated tests" aren't especially good at
| finding bugs. For that there's really no replacement for human
| beings who care looking carefully at the system's behavior,
| whatever title those people have. They're mostly useful when
| they make it safer to make changes to the code's design.
| diekhans wrote:
| My view is that automated testing is not a substitute for QA,
| but an additional tool. It lets QA focus on harder-to-automate
| tasks.
|
| For unit tests, a developer is going to try something out
| anyway, so capturing it in a unit test for the future should be
| just a little extra work. Also, writing a unit test means the
| developer has minimally used what they are writing.
|
| Higher-level (system) testing, especially with GUIs, can be
| more work than the value added. It is a cost trade-off, but
| ultimately, a human adds value no matter how much is automated.
|
| AI will help too, but these are also tools. Drop QA and you are
| trading off costs for quality.
|
| IMHO, not backed up by research.
| gjsman-1000 wrote:
| There is one programming language that fascinated me (maybe
| it was Ada) where it tried to have some basic tests inline
| with the code, by defining basic guidelines for legitimate
| results of the function.
|
| For example, you could make a function called
| `addLaunchThrusterAndBlastRadius` (I know it makes no sense,
| but bear with me), and then right alongside declaring it was
| an integer, you could put a limit saying that all results
| that this function can return must be greater than 5, less
| than 100, and not between 25 and 45. You could also do it
| when declaring variables - say, `blastRadius` may never be
| greater than 100 or less than 10, ever, without an exception.
|
| I wish we could go further in that direction. That's pretty
| cool. Sure, you can get that manually by throwing exceptions,
| but it was just so _elegant_ that there was just no reason
| not to do it for every function possible.
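| The idea can be sketched in Python with a decorator that checks a
| declared result range on every call. This is only a runtime
| approximation, and the helper name and bounds below are made up
| for illustration:

```python
import functools

def returns_in_range(low, high, excluded=None):
    """Assert every return value is strictly between low and high,
    and outside the optional `excluded` (lo, hi) window."""
    def decorate(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            result = fn(*args, **kwargs)
            assert low < result < high, (
                f"{fn.__name__} returned {result}, outside ({low}, {high})")
            if excluded is not None:
                lo, hi = excluded
                assert not (lo < result < hi), (
                    f"{fn.__name__} returned {result}, in forbidden ({lo}, {hi})")
            return result
        return wrapper
    return decorate

# "greater than 5, less than 100, and not between 25 and 45"
@returns_in_range(5, 100, excluded=(25, 45))
def add_launch_thruster_and_blast_radius(thruster, radius):
    return thruster + radius

print(add_launch_thruster_and_blast_radius(10, 13))  # prints 23
```

| Languages like Eiffel (and Ada 2012, via its Pre/Post aspects)
| attach such contracts to the declaration itself; the decorator is
| only a pale imitation of that.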
| ChrisMarshallNY wrote:
| You can do that in Swift (and, I suspect, lots of
| languages).
|
| Swift has a fairly decent assertion/precondition facility,
| as well as reflection[0]. They also have a decent test
| framework[1] (and I know that some folks have extended it,
| to do some cool stuff).
|
| Some of these add significant overhead, so they aren't
| practical for shipping runtime, but they can be quite
| useful for debug-mode validation.
|
| Assertions are a very old technique. I think I first
| encountered them in _Writing Solid Code_, in the 1990s.
| Back then, we had to sort of "roll our own," but they have
| since become integrated into languages.
|
| Of course, all the tools in the world, are worthless, if we
| don't use them.
|
| [0]
| https://developer.apple.com/documentation/swift/debugging-
| an...
|
| [1] https://developer.apple.com/documentation/xctest/
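| The debug-vs-ship trade-off mentioned above has a direct analogue
| in Python: `assert` statements are stripped when the interpreter
| runs with -O, so heavyweight validation can be confined to
| development builds (function name and bounds are illustrative):

```python
def blast_radius_ok(x):
    # Debug-mode validation: these asserts run by default, but are
    # compiled away entirely under `python -O`, so they cost nothing
    # in an optimized "shipping" run.
    assert isinstance(x, int), "blast radius must be an integer"
    assert 10 <= x <= 100, "blast radius out of bounds"
    return x

print(blast_radius_ok(42))  # prints 42; checks are active by default
```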
| gjsman-1000 wrote:
| > Of course, all the tools in the world, are worthless,
| if we don't use them.
|
| True...
|
| I wonder if there would be any way, to simplify the
| syntax, and require basic assertions to be on every
| function. There might be a super easy cop-out like `any`,
| but at least by being forced to type it, you become aware
| of what it means and that it exists.
|
| Almost like:
|
| `public any int addXandY (any int x, any int y) {`
|
| I also wonder, if there could be such a thing as an
| `assertion exception` (or whatever it would be called).
| Maybe it would just make things a mess, but I'm just
| thinking out loud. Basically, you could have a function
| that behaves a specific way 90% of the time, but for that
| 10% of the time where it doesn't work, you could pass
| that assertion exception to override. Maybe that would
| just be awful... or it would keep functions much cleaner?
|
| Maybe you wouldn't even call it an exception. You'd just
| have multiple sets of assertions that could be applied to
| each function call.
| gjsman-1000 wrote:
| I just had another thought. What if you could have a bank
| of assertions? Like this pseudocode:
|
| ```
|
| assertion acceptableBlastNumber (int x) { x < 25; x > 5; }
|
| assertion acceptableBlastRadius (int x) { x > 500; x < 1000; !(x > 750 && x < 800) }
|
| assertion acceptableBlastAddedNumber (int x) { x < 1025; x > 505; }
|
| public acceptableBlastAddedNumber int addBlastNumbers (acceptableBlastNumber int x, acceptableBlastRadius int y) { return x + y; }
|
| addBlastNumbers (10, 720) => 730
|
| addBlastNumbers (26, 750) => Exception
|
| ```
|
| Though I suppose that this is getting really close to
| just... classes. It would just be a little more...
| inline? Less complicated because it would never hold
| state? Though I suppose, this would also mean your class
| can just focus on being an object, and not on having all
| the definitions for the things inside it, because you can
| have an <assertion> <object> rather than just <object>.
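| That pseudocode is close to runnable as-is; here is a hedged
| Python translation where each "assertion" is a named, stateless
| bank of predicates (names taken from the pseudocode above):

```python
class Assertion:
    """A reusable, stateless bank of predicates -- close to a class,
    but it never holds state, only checks values."""
    def __init__(self, name, *predicates):
        self.name = name
        self.predicates = predicates

    def check(self, x):
        if not all(p(x) for p in self.predicates):
            raise AssertionError(f"{x!r} violates {self.name}")
        return x

acceptable_blast_number = Assertion(
    "acceptableBlastNumber", lambda x: x < 25, lambda x: x > 5)
acceptable_blast_radius = Assertion(
    "acceptableBlastRadius", lambda x: x > 500, lambda x: x < 1000,
    lambda x: not (750 < x < 800))
acceptable_blast_added_number = Assertion(
    "acceptableBlastAddedNumber", lambda x: x < 1025, lambda x: x > 505)

def add_blast_numbers(x, y):
    # Check both parameters and the return value against their banks.
    acceptable_blast_number.check(x)
    acceptable_blast_radius.check(y)
    return acceptable_blast_added_number.check(x + y)

print(add_blast_numbers(10, 720))  # prints 730
# add_blast_numbers(26, 750) raises AssertionError, as in the pseudocode
```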
| jandrewrogers wrote:
| Modern C++ supports this pretty extensively via the type
| system. You can define/construct integer types with almost
| arbitrary constraints and properties that otherwise look
| like normal integers, for example. The template / generics
| / metaprogramming / type inference facilities in C++ make
| it trivial. Some categories of unsafe type interactions can
| be detected at compile-time with minimal effort, it isn't
| just runtime asserts.
|
| This is common in C++ for reliable systems. You
| infrequently see a naked 'int' or similar (usually at OS
| interfaces), almost all of the primitive types are
| constrained to the context. It is a very useful type of
| safety. You can go pretty far with a surprisingly small
| library of type templates if the constraint specification
| parameters are flexible.
|
| (This is also a good exercise to learn elementary C++
| template metaprogramming. A decent constrained integer
| implementation doesn't require understanding deep arcana,
| unlike some other template metaprogramming wizardry.)
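| A runtime-only sketch of that idea in Python (the C++ version
| described above can additionally reject some misuse at compile
| time, which Python cannot; the names here are illustrative):

```python
class BoundedInt:
    """A constrained integer: validated on construction, and
    re-validated after arithmetic, so out-of-range results are
    caught immediately rather than propagating."""
    LOW, HIGH = 0, 100  # subclasses narrow these to their context

    def __init__(self, value):
        if not (self.LOW <= value <= self.HIGH):
            raise ValueError(f"{value} outside [{self.LOW}, {self.HIGH}]")
        self.value = value

    def __add__(self, other):
        # The sum is run back through the constructor's check.
        return type(self)(self.value + int(other))

    def __int__(self):
        return self.value

class Percent(BoundedInt):
    LOW, HIGH = 0, 100

print(int(Percent(90) + 5))  # prints 95
# Percent(90) + 20 raises ValueError: 110 is outside [0, 100]
```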
| nescioquid wrote:
| Preconditions and postconditions around procedures. I
| thought that it was an innovation from Eiffel, though
| wikipedia lists Ada as an influence, so maybe it did
| originate there!
| sodapopcan wrote:
| Absolutely, one is not a replacement for the other.
|
| I worked at a place that did pretty strict TDD and had a
| dedicated QA person embedded on each team. Our high-level
| systems tests served more as smoke tests and only ever
| tested the happy paths. Our integration and unit tests of
| course covered a lot more, but QA was essential in covering
| corner cases we never thought about as developers.
| fatnoah wrote:
| > My view is that automated testing is not a substitute for
| QA, but an additional tool. It lets QA focus on
| harder-to-automate tasks.
|
| This is my take as well. Automation is great for a lot of the
| repetitive work, but humans are better at creatively breaking
| stuff, handling UX testing, and improving and enhancing the
| automation itself.
| ChrisMarshallNY wrote:
| I tend to prefer test harnesses, over unit tests[0], but each
| definitely has its place.
|
| Testing is good. Integration testing is _very_ good. Someone
| posted a story, a few days back, that linked to a GIF of this
| video[1] (the source is Facebook, unfortunately),
| with the caption: "When the unit tests pass, but the
| integration test does not."
|
| [0] https://littlegreenviper.com/miscellany/testing-harness-
| vs-u...
|
| [1]
| https://www.facebook.com/100001967368624/posts/2559941464081...
| mgbmgb wrote:
| I wouldn't dismiss the value of automated testing altogether;
| I think it really depends on the domain. I haven't seen a
| dedicated QA team in any of my recent companies and all the
| projects do just fine.
|
| With that said, if I were to develop something high-profile I'd
| use a combination of both (and usually funding is not an issue
| for these types of projects).
| sethammons wrote:
| can't access the article, but I'm big on testability. We used to
| have to watch each change like a hawk via monitoring and manual
| checks. We automated with SLO monitoring for prod and heavy use
| of gated builds protected behind integration and acceptance
| tests.
|
| My favorite system designed and used so far:
|
| PR -> gated integration build with docker-compose -> merge master
| -> async master/main re-run same test to prevent merge regression
| -> promote to staging -> semi-optional user test suite -> rolling
| deploy that will revert if error thresholds trip. This could
| deploy out to thousands of nodes in like 10 minutes from merge,
| though sometimes it would be closer to 20.
|
| The whole team had huge confidence that anything deployed would
| not immediately explode. Unlocked a lot of velocity. Each team
| could do multiple deploys per day.
| mgbmgb wrote:
| This is the way to go. With the recent push towards "more
| nimble teams" and high output / high velocity, there is no place
| for QA teams to take days to review the releases, so you'd have
| to bake testability in.
|
| A recent example: how a fairly small team at Meta built the Threads
| app https://engineering.fb.com/2023/09/07/culture/threads-
| inside...
| jjice wrote:
| Using interfaces (whether literal interfaces in your language or
| the general concept) to isolate operations that rely on
| dependencies is so critical IMO. It makes testing so much easier,
| takes little additional upfront effort, and generally makes code
| nicer to read (subjectively).
|
| For example, if I have my `MailSender` interface with an
| `SMTPMailSender` implementation but then I want to switch over to
| using the AWS SDK for interacting with SES for auth purposes, I
| can just implement my interface as always and create my new
| `SESMailSender` and plop it in place and all my code just works.
| This isn't just for testability, but also general modularity.
|
| It takes so little additional upfront effort as well. Even just
| declaring an interface helps you isolate what functionality
| should actually be publicly exposed. I really don't think this
| creates any serious development drag. On top, it makes your tests
| so much easier to write, and tests are usually a good bang for
| your buck for the major cases.
|
| If you truly only need a single implementation and not even the
| ability to have one for testing, then that's fine! Keep in
| mind that adapting this to an interface in the future should be
| simple, and do that when it comes up.
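| A minimal sketch of that setup in Python, using a Protocol as the
| interface; the SMTP and SES bodies are stubs here (a real version
| would call smtplib or the boto3 SES client), and the fake shows
| the testability payoff:

```python
from typing import Protocol

class MailSender(Protocol):
    def send(self, to: str, subject: str, body: str) -> None: ...

class SMTPMailSender:
    def send(self, to: str, subject: str, body: str) -> None:
        ...  # would talk SMTP here, e.g. via smtplib

class SESMailSender:
    def send(self, to: str, subject: str, body: str) -> None:
        ...  # would call the AWS SES API here, e.g. via boto3

class FakeMailSender:
    """Test double: records messages instead of sending them."""
    def __init__(self):
        self.sent = []
    def send(self, to, subject, body):
        self.sent.append((to, subject, body))

def notify_signup(sender: MailSender, user_email: str) -> None:
    # Depends only on the interface; SMTP, SES, or a fake plops in.
    sender.send(user_email, "Welcome!", "Thanks for signing up.")

fake = FakeMailSender()
notify_signup(fake, "a@example.com")
assert fake.sent == [("a@example.com", "Welcome!", "Thanks for signing up.")]
```

| Swapping `SMTPMailSender` for `SESMailSender` changes nothing in
| `notify_signup` -- which is the modularity point above.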
| gbacon wrote:
| Combine that with the abstract test pattern, and teaching your
| harness how to instantiate an instance of a new concrete
| implementation is often all that's necessary.
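| A sketch of that pattern with Python's unittest: the abstract
| class holds the contract tests, and each new concrete
| implementation only overrides the factory method (the names here
| are illustrative):

```python
import unittest

class AbstractMailSenderTest:
    """Shared contract tests. Deliberately not a TestCase itself,
    so the runner doesn't try to execute it directly."""
    def make_sender(self):
        raise NotImplementedError  # each concrete suite provides this

    def test_send_accepts_basic_message(self):
        sender = self.make_sender()
        sender.send("a@example.com", "hi", "body")  # must not raise

class InMemorySender:
    def __init__(self):
        self.sent = []
    def send(self, to, subject, body):
        self.sent.append((to, subject, body))

class InMemorySenderTest(AbstractMailSenderTest, unittest.TestCase):
    # Teaching the harness to instantiate the new implementation is
    # all that's needed; the contract tests come along for free.
    def make_sender(self):
        return InMemorySender()
```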
| RcouF1uZ4gsC wrote:
| One concern I have with designing for testability (which often
| means designing for ease of unit testing) is that everything
| becomes an abstract interface.
|
| Whereas before you might have
|
|     void Foo() { Bar(); }
|     void Bar() { Baz(); }
|     void Baz() { }
|
| where Bar and Baz only really have a single implementation.
|
| This is fairly easy to grok and debug.
|
| You end up with
|
|     void Foo(BarInterface* bar, BazInterface* baz) {
|         bar->Bar(baz);
|     }
|
| Now you have a lot more moving parts. If you have a problem with
| Foo, now you need to try to find out who all implements the
| interfaces and which implementation is being passed in.
|
| Of course, after a while people get tired of passing in these
| interfaces and come up with some kind of "automated" dependency
| injection framework, which then triples the complexity.
| mrkeen wrote:
| > If you have a problem with Foo
|
| If you have a problem with Foo, you instantiate a Foo inside
| FooTest, and then feed it problematic input.
|
| But you can't instantiate a Foo inside FooTest unless you can
| also instantiate a Bar and a Baz -- and Bar could be a clock
| whose time-of-day you can't control, while Baz could be a
| database that you can't instantiate at all.
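| Concretely, once the dependencies are interface-shaped, FooTest
| can hand Foo a fixed clock and an in-memory database (all names
| here are illustrative stand-ins for the thread's Foo/Bar/Baz):

```python
class FixedClock:
    """Controllable stand-in for a real clock dependency."""
    def __init__(self, now):
        self._now = now
    def now(self):
        return self._now

class InMemoryDB:
    """Stand-in for a database you can't instantiate in tests."""
    def __init__(self):
        self.rows = {}
    def put(self, key, value):
        self.rows[key] = value

def foo(clock, db):
    # The code under test: records when it last ran.
    db.put("last_run", clock.now())

clock = FixedClock("2023-12-20T16:34")
db = InMemoryDB()
foo(clock, db)
assert db.rows["last_run"] == "2023-12-20T16:34"
```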
| sigmonsays wrote:
| This post is for subscribers only
___________________________________________________________________
(page generated 2023-12-20 23:01 UTC)