[HN Gopher] Asking developers to do QA is broken - why anyone sh...
___________________________________________________________________
Asking developers to do QA is broken - why anyone should own QA
Author : ukd1
Score : 49 points
Date : 2021-06-30 21:45 UTC (1 hour ago)
(HTM) web link (www.rainforestqa.com)
(TXT) w3m dump (www.rainforestqa.com)
| kazinator wrote:
| The right person for doing QA for code written by developer A is
| developer B, who is motivated to show that there is breakage in
| the code written by A. This is just good old peer review, in the
| sphere of development.
|
| The industry uses dedicated QA people based on the assumption
| that you can get two of them for the price of one developer.
|
| In fact, if you have two developers, one of whom is more clever
| than the other, you want to give the code cranking to the less
| clever one, and use the clever one to verify the code cranking
| and make improvement suggestions.
|
| If you have clever people designing the system, and clever people
| finding problems in the commits, you can just have average coders
| cranking out the bulk of it. Most code doesn't need cleverness,
| just persistence and consistency.
| regularfry wrote:
| Devs doing QA worked fine for us. You need to have a team that
| actually cares about the product, and you need figurehead devs -
| not necessarily seniors or team leads, but charismatic people
| others will fall into line with - who model the behaviours you
| want.
|
| The problem with almost anything else is that it increases the
| number of hand-offs between groups of differently-aligned people
| on the route to production. If you're aiming at multiple
| deployments per day, with sub-hour commit-to-prod times, the
| latency in those hand-offs adds up way too fast.
| failwhaleshark wrote:
| Yep. Fire the aholes and the egos, and keep the messianic
| figures who go-give and get shit done.
| ping_pong wrote:
| I'm a developer who has worked over 25 years in software, and
| over 15 years in enterprise software, and 10 years at globally
| scalable SaaS companies. I vehemently disagree with the premise
| of the article. Of course, the authors feel this way because
| their product is a type of QA product. I get it. But as a
| developer, I think the best quality software I've seen is when
| the developer is responsible for development AND QA.
|
| In enterprise software during the late 1990s and 2000s,
| developers and QA were completely siloed, and it sucked.
| Developers would write code, do a modicum of testing and then
| throw it to QA. QA was usually understaffed and wouldn't get to
| QA'ing your features until late in the cycle, which means that
| just before the cutoff, you would get dozens of bug reports.
| Everyone was scrambling.
|
| The dance between developers and QA at the beginning of a project
| cycle was that PMs would come up with features, developers would
| come up with time estimates, and then QA would put time estimates
| on top of those. By the end of the cycle, about 75% of the
| features would get cut. QA was ALWAYS the long pole, so the
| cycle ran long, many features still got cut, and the product
| still shipped buggy.
|
| The simple fact is that QA is nothing but additional friction.
| To be perfectly frank, I think QA is a dying profession.
| Splitting coding from testing just doesn't make sense anymore,
| especially as developers get paid more and more.
|
| Once I moved away from enterprise software, and moved to SaaS
| companies, it really made a difference once we eliminated QA from
| the equation. I think the model of developers owning the entire
| development + automated testing is the only way to go. Developers
| have a vested interest in ensuring that their code is good. It's
| vital for the engineering management to make sure that automated
| testing is a critical piece of the development process, not an
| afterthought like the article above suggests.
|
| Yes, historically when QA and development are split, developers
| thought that they should only do the bare minimum and the
| "distasteful" part of testing should be on the QA. But not
| anymore. If you're a developer with any pride, you would not want
| to check in poor quality code and poor quality tests, and testing
| should be enforced in every check-in, i.e. no code check-ins
| without tests for that code in the same check-in. And code
| reviews need to go through the test code as well with a fine-
| toothed comb. If the managers don't get the culture right, you
| will definitely have low-quality tests written by developers.
|
| And the idea that product managers need to own QA is a dismal
| prospect. It doesn't make sense for them to own QA. By separating
| testing from development you're just going to get more of the
| "throw the code over" mentality, and you won't get the proper
| testing at the time of check-in the way you will when developers
| do it all at once. And PMs will just wait until the end of the
| cycle to test, file a dozen bug reports all at once, and then
| developers will rush to fix them, etc. We've already been
| through that with enterprise software. Don't repeat the same
| mistake from 20 years ago.
|
| Keep the onus of thorough testing on the developers, and foster
| a strong, healthy culture of thorough testing.
| MattGaiser wrote:
| Why can't the QA people just live on the dev team? Why do they
| need to be siloed away or not exist?
|
| I had this in my past job. Have 1 QA per two developers, the QA
| sits with those two developers, and you are constantly telling
| them when a build is done and on staging. They write tests, do
| some manual checks, and then tell you how it is going relatively
| immediately. They also handle the problem of reproducing bugs and
| narrowing down exactly where they occur, which is not trivial.
|
| For all the faults in that organization (including letting our
| test runner be broken for months), we didn't put out a lot of
| bugs and we found all manner of strange edge cases very quickly.
| innagadadavida wrote:
| > Why can't the QA people just live on the dev team?
|
| Because there is little incentive for devs to police themselves,
| and there could be multiple dev teams spanning client/server
| that need to be integrated and tested.
|
| A slightly better org to own QA would be the product team.
| GaveUp wrote:
| The key part in that is communication. That makes all the
| difference, whether it's with BAs, QAs, or the end user. It
| speeds up the development cycle and greatly reduces the number
| of bugs.
|
| The best experience I had was on a team that had essentially 2
| BAs and 7 devs. There was constant communication to clarify
| actual requirements: devs would build automated tests off them,
| BAs would test against the requirements, and then a business
| user would do a final look over. All in all, features could
| usually be released within a day, and there were days we'd get
| out 3 or 4 releases. Only in one case did a major bug get
| released to production, and the root cause of that was poorly
| worded regulations which had a clarifying errata come out at
| the 11th hour.
|
| For as many faults as that company had that caused me to move
| on I've yet to run across a team that functioned as well as
| that one did.
| failwhaleshark wrote:
| Communication is great until someone becomes unreasonable and
| doesn't want to do something. Trust, but the chain of command
| must verify. You shouldn't need it, but it should be there as
| insurance.
| failwhaleshark wrote:
| Terrible idea.
|
| The dev team manager is focused on growing new features in the
| next release.
|
| QA is focused on the excellence of the current release.
|
| Subordinating quality to new releases is the result.
|
| In a similar vein, devops supposedly addressed the analogous
| tension between dev and ops.
| mobilene wrote:
| I've yet to see the automated test suite that replaces a skilled,
| sapient, human, functional tester. The automation takes away the
| drudgery of repeating tests, but it takes a skilled human to
| figure out what risks are in the code and figure out coverage to
| determine whether those risks are realized. If you have
| developers write good unit and integration tests, and deploy their
| work to their local environment to make sure it basically works,
| you avoid most of the "throw it over the wall" mentality. You
| also deliver way fewer needles in the haystack, if you will.
| Testers are free to look for more impactful bugs manually, and
| then to automate coverage as appropriate.
| franciscassel wrote:
| Totally agree that automation will never fully replace the
| value of human-powered testing. (Though it is great for the
| rote regression-testing stuff. The "drudgery", as you put it.)
|
| Isn't the problem with relying too much on unit and integration
| tests that they don't consider the end-to-end app experience?
| (Which, in my mind, is what ultimately matters when customers
| think of "quality".)
| ukd1 wrote:
| IMHO, yep - it's a balance, but the great thing is they can
| be quick to run, and easy to run locally, which is great for
| developers to get fast feedback. Unit testing is unlikely to
| catch all end-user level issues though; the same goes for
| traditional automation, which is why human testing is still
| valuable today.
| ukd1 wrote:
| I think you're pretty on point here; today human nuance is
| needed to decide WHAT to test as well as HOW MUCH to test. It's
| also useful for executing some types of tests (which is why
| we support both automation and human testing). IMHO unit and
| integration tests should be a given. Human testers should be
| used where they have the highest leverage. Being a
| functional tester today, though, is partly limited by what
| tooling is accessible to you - which we want to fix.
| arduinomancer wrote:
| * Adopt this automation platform
|
| * It eventually becomes too much work for the PM as the product
| grows or the PM just gets tired of the tediousness of creating
| tests for all the edge cases
|
| * PM hires someone to help with creating the automation tests
|
| * That guy now becomes "the QA guy"
|
| * Repeat till you've re-invented "the QA team"
| politelemon wrote:
| We've suffered from the same problems as highlighted here. What
| helped us was a low code solution, https://smashtest.io, which is
| basically an English wrapper over Selenium. The developers spend
| less time on tests, and the QAs aren't an afterthought.
| eweise wrote:
| We have a team that helps with testing infrastructure but the
| tests are created by developers on the functional team.
| Everything is automated. Seems to work well.
| marielvargas wrote:
| With more folks building with low-code and no-code tools, many
| different types of users will need to know how to do QA.
| ukd1 wrote:
| This; especially now more apps are being done with no-code
| tooling (Bubble, anyone?) - not a lot of existing tooling works
| with it, and code-based testing isn't viable even if it did.
| protomyth wrote:
| Devs sometimes aren't the best QA because they think like
| developers and not like end users. The mindset prevents you from
| doing things on the corners that end users do. It's like your
| instincts kick in and keep you safe without the railing, where
| an end user might plow ahead thinking their path is OK and fall
| off the edge.
|
| Devs should do automated unit and functional tests, but after
| that, get some good QA that do not have the same boss (at least
| at the first level) as the developers.
| g051051 wrote:
| The last place I worked would pull devs randomly to do QA tasks
| when QA got behind. Of course, since we weren't trained in the QA
| processes, and didn't do it all the time, it often wound up
| taking longer to do the testing, plus the devs would get behind
| in their tasks.
| ukd1 wrote:
| It's also common practice; usually caused by testing getting
| under-resourced, bad tooling, or longer shipping cycles (more
| stuff to test, more pressure to get it out).
| lowbloodsugar wrote:
| The best outcomes are when people test what they are responsible
| for.
| dilatedmind wrote:
| I think this article misses two important points on siloed qa
| teams
|
| 1. QA doesn't know what can be covered by unit or integration
| tests
|
| 2. Since they treat our code like a black box, they may create
| permutations of tests which cover the same functionality
|
| Maybe this is part of the draw of having a QA team. Feature
| coverage rather than code coverage. The downside is this can
| create a huge number of expensive to run manual tests which may
| be hitting the same code paths in functionally identical ways.
|
| The tooling for automating manual tests of web apps is almost
| there: puppeteer, recording user inputs and network calls,
| replaying everything and diffing screenshots.
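The record/replay/diff loop described above can be sketched in miniature. A toy in-memory "app" stands in for a real browser driven by a tool like Puppeteer; all names here are illustrative, not any tool's API.

```python
def render(state):
    """Toy stand-in for a page screenshot: a tuple of 'pixels'."""
    return tuple(sorted(state.items()))

def apply_event(state, event):
    """Toy stand-in for dispatching one recorded user input."""
    kind, key, value = event
    if kind == "type":
        state[key] = value
    elif kind == "clear":
        state.pop(key, None)
    return state

def replay(events, state=None):
    """Replay a recorded event log and return the final 'screenshot'."""
    state = {} if state is None else state
    for event in events:
        state = apply_event(state, event)
    return render(state)

def diff(baseline, candidate):
    """Return the set of 'pixels' that differ between two screenshots."""
    return set(baseline) ^ set(candidate)

# Record once against a known-good build, replay against a new build,
# and flag a regression only when the screenshots diverge.
recording = [("type", "name", "ada"), ("type", "email", "a@b.c")]
baseline = replay(recording)
assert diff(baseline, replay(recording)) == set()
```

In the real version, `render` is a headless-browser screenshot and `apply_event` dispatches recorded clicks, keystrokes, and stubbed network responses.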
| bilalq wrote:
| In a model where developers adhere to a devops philosophy and are
| product owners, there's no need for a split. Developers should
| not be measured only on quantity of releases, but also on
| metrics related to availability, customer impact, latency,
| operational costs, etc.
|
| I'm not opposed to a model where non-technical roles are
| empowered to define test cases and contribute to approval
| workflows of a release pipeline, but that doesn't absolve
| developers of the primary responsibility of being end-to-end
| owners.
|
| I know "devops" is an overloaded term that means different things
| to different people. To me, it's when developers are involved in
| product research, requirements gathering, user studies, design,
| architecture, implementation, testing, release management, and
| ongoing operations. They're supported by a dedicated product
| manager who leads the research/requirements gathering/user
| studies initiatives.
| ppeetteerr wrote:
| Love the idea, but have you met product owners?
| franciscassel wrote:
| Yeah, the good ones really care about the product experience
| and customer outcomes, so this makes a lot of sense to me.
|
| But what do you mean?
| MontagFTB wrote:
| Asking developers to own QA is broken because developers are
| naturally biased towards the happy path. If you want to build a
| bar, you need someone to order -1 beers[1].
|
| Handing off QA to an external team is broken because those people
| don't have the necessary experience with the product, nor can
| they quickly and easily engage with development to get to the
| heart of a problem (and a fix.)
|
| Having QA rely exclusively on automation brings quality down as
| the application's complexity goes up. Writing tests to cover
| every possible edge case before the product ships isn't feasible.
|
| The best solution I've seen in the decades I've been in software
| development has been to have QA and Dev as part of a team.
| Automation covers as much of the product as possible, and
| continually grows as issues are identified that can be covered by
| CI from here on out. QA become essential at the boundary of new
| features, and require a deep understanding of the product in
| order to know what knobs to turn that might break the application
| in ways the developer never thought about. Once those sharp edges
| are also automated, QA pushes the boundary forward.
|
| [1]: https://twitter.com/brenankeller/status/1068615953989087232
| desc wrote:
| Silver bullets. They don't exist.
|
| _Code review._ Read the results of someone thinking through a
| process. Spot more than they will, simply by throwing more eyes
| at it. Actually fairly effective: getting a senior dev to cast
| even a lazy eye over everything gives more opportunities to
| discuss Why It's Done This Way and Why We Don't Do That and Why
| This Framework Sucks And How To Deal With It with specific
| concrete examples which the other dev is currently thinking
| about. But it's still easier to write the code yourself than
| review it, and things still get missed no matter how careful you
| try to be, so it's still just another layer.
|
| _Unit tests._ They cover the stuff we think to check and
| actually encountered in the past (ie. regressions). Great for
| testing abstractions, not so great for testing features, since
| the latter typically rely on vast amounts of apparently-unrelated
| code.
|
| _Integration tests._ Better for testing features than specific
| abstractions, and often the simplest ones will dredge up things
| when you update a library five years later and some subtle
| behaviour changed. Slow sanity checks fit here.
|
| _UI-first automation (inc. Selenium, etc)._ Code or no-code,
| it's glitchy as hell for any codebase not originally designed to
| support it; tends to get thrown out because tests which cry wolf
| every other day are worse than useless. Careful application to a
| _few basics_ can smoke-test situations which otherwise pass
| internal application sanity checks, and systems built from the
| start to use it can benefit a lot.
|
| _Manual testing._ Boring, mechanical, but the test plans require
| less _active_ fiddling/maintenance because a link changed to a
| button or something. Best for exploratory find-new-edge-cases,
| but throwing a bunch of students at a huge test plan can
| sometimes deliver massive value for money/coffee/ramen. Humans
| can tell us when the instructions are 'slightly off' and carry on
| regardless, distinguishing the actual important breakage from a
| trivial 2px layout adjustment or a CSS classname change.
|
| So that's the linear view. Let's go meta, and _combine techniques
| for mutual reinforcement._
|
| _Code review_ benefits from _local relevance_ and is hampered by
| _action at a distance._ Write static analysers which enforce
| relevant semantics sharing a lexical scope, ie. if two things are
| supposed to happen together ensure that they happen in the same
| function (at the same level of abstraction). _Encourage relevant
| details to share not just a file diff, but a chunk._ Kill dynamic
| scoping with fire.
|
| _Unit and Integration tests_ can be generated. Given a set of
| functions or types, ensure that they all fit some specific
| pattern. _This is more powerful than leveraging the type system
| to enforce that pattern,_ because when one example needs to
| diverge you can just add a (commented) exception to the
| generative test instead of rearchitecting lots of code, ie. you
| can easily separate sharing behaviour from sharing code. _Write
| tests which cover code not yet written, and force exceptions to a
| rule to be explicitly listed._
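One way to read the "generated tests with explicit exceptions" idea above is a loop over a module's functions that asserts a shared pattern, with divergences listed by name. Everything below — the `dry_run` rule and the function names — is an invented illustration, not the commenter's actual code.

```python
import inspect

# Toy module surface: two functions that follow the house rule
# (every mutating function takes a dry_run keyword) and one that
# legitimately diverges.
def save_user(record, *, dry_run=False):
    return ("saved-user", record, dry_run)

def save_order(record, *, dry_run=False):
    return ("saved-order", record, dry_run)

def purge_cache():  # destructive-by-design; a dry run is meaningless
    return "purged"

# Exceptions to the rule must be listed (and justified) explicitly.
EXCEPTIONS = {"purge_cache"}

def check_dry_run_rule(namespace):
    """Assert every function in `namespace` accepts dry_run, unless excepted."""
    for name, fn in list(namespace.items()):
        if not inspect.isfunction(fn) or name in EXCEPTIONS:
            continue
        if name.startswith("check_"):  # skip the rule-checker itself
            continue
        params = inspect.signature(fn).parameters
        assert "dry_run" in params, f"{name} must accept dry_run=..."
```

The payoff is exactly what the comment describes: a new function that skips `dry_run` fails this check until it is either fixed or added to `EXCEPTIONS` with a comment.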
|
| _UI testing_ is rather hard to amplify because you need to
| reliably control that UI in abstractable ways, and make it easy
| to combine those operations. I honestly have no idea how to do
| this in any sane way for any codebase not constructed to enable
| it. If you're working on greenfield stuff, congratulations; some
| of us are working on stuff that's been ported forwards decade by
| decade... _Actual practical solutions welcome!_
|
| That's my best shot at a 2D (triangular?) view: automated tests
| can enforce rules which simplify code review, etc. The goal is
| always to force errors up the page: find them as early as
| possible as cheaply as possible and as reliably as possible.
|
| The machine can't check complex things without either missing
| stuff or crying wolf, but it can rigidly enforce simple rules
| which let humans spot the outliers more easily.
|
| And it is _amazing_ how reliable a system can become just by
| killing, mashing and burning all that low-hanging error-fruit.
| bob1029 wrote:
| The complexity of our application necessitates that our business
| owners do a lot of the final integration testing.
|
| It also requires that our product owners handle a large part of
| the implementation & QA efforts.
|
| For our application, a single piece of code might be reused 40
| different ways across 100 different customers based on
| configuration-time items.
|
| To ask a developer to figure out why something is broken would be
| a fool's errand for us. Only with the aid of trace files &
| configuration references are we able to perform RCA and clean up
| an issue from live usage. For us, 99% of the time it is a
| configuration thing (either on our end or some 3rd party system).
| If the code works for 99 clients and 39 other configurations, it
| might be your client+config that is wrong, not the code.
| ukd1 wrote:
| What do you use to plan it or otherwise manage the process? Is
| it all manual?
| bob1029 wrote:
| Essentially yes. We are trying to move to a hybrid model
| where we can send Excel workbooks to the customer for
| completion, and then directly import these into the system.
| This would get us ~80% of the way there.
|
| One huge upside with configuration is that it can be copied
| really easily. If you have a model customer that is very
| similar to others, you can use it as a starting point and
| save 99% of the work effort.
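The copy-a-model-customer approach described above amounts to a deep copy plus customer-specific overrides. This is an illustrative sketch with invented config keys, not the commenter's actual system.

```python
import copy

# Hypothetical "model customer" configuration to clone from.
MODEL_CUSTOMER = {
    "workflows": ["intake", "review", "approval"],
    "locale": "en-US",
    "integrations": {"crm": "vendor-a", "billing": "vendor-b"},
}

def clone_config(model, **overrides):
    """Deep-copy a model configuration, then apply per-customer overrides.

    deepcopy matters: a shallow copy would share nested dicts, so
    editing the new customer would silently change the model too."""
    config = copy.deepcopy(model)
    config.update(overrides)
    return config

new_customer = clone_config(MODEL_CUSTOMER, locale="de-DE")
```

Starting from a similar model customer, only the overrides need to be specified — the "save 99% of the work effort" the comment describes.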
| wefarrell wrote:
| This feels a lot like a reincarnation of cucumber/gherkin, except
| they've replaced business-facing text DSL with a no-code visual
| UI. The intention is the same - to have the customer own the
| tests.
|
| This looks like it has a shallower learning curve to get started,
| but I would imagine that after a certain point it winds up being
| less productive to use the UI than to write code and this is
| ultimately why visual coding hasn't taken off. At some point
| someone will need to become a power user and be their
| organization's resident expert on Rainforest, but at that point
| they're spending all of their time in Rainforest and they're no
| longer a product owner embedded in the business.
|
| At the end of the day the business owner should be writing
| specifications, the engineers should be using them to create
| tests and they each should be collaborating closely on both.
| thrower123 wrote:
| I've had a couple goes at teams trying to roll out cucumber
| tests, and I still don't understand quite what the point is.
|
| Nobody but developers could actually manage to write any tests,
| and it was harder than just using the normal tools, plus
| maintaining all the glue besides.
| ukd1 wrote:
| If you want something this is closer to, it's Sikuli Script:
| visually looking at the page (or whatever, tbh), then
| manipulating it using the keyboard and mouse. It's basically
| done using a KVM, so much closer to what a user would be able
| to see and do than something like Cucumber or Gherkin.
|
| However, we also allow you to test using a crowd of humans,
| should folks need more nuanced feedback about things, or have
| much more complex things to ask.
|
| Disagree on who should be writing tests; I think that's only the
| case today because tooling doesn't support anyone but engineers
| (QA or not) automating things, or running manual tests.
| tpmx wrote:
| I think product owners should do (manual) QA. On their own, and
| when necessary for scalability reasons also via a small team
| under their direct control.
| ukd1 wrote:
| +1 - I think they should be able to, for sure - with current
| tooling, this is basically impossible without doing it manually;
| which I've seen a lot of product folks do!
| nullspace wrote:
| Yeah, this is the only thing that practically worked, in a
| complex product I used to work on.
|
| - If you ask the devs to do QA, you'll get no bugs other than
| the ones they already caught during testing and deployment.
|
| - If you have a mostly independent QA team, they will find
| somewhat silly/trivial bugs like the login page not working in
| an extreme edge case scenario.
|
| - However, when you ask your Product team to own QA, you get
| the real good stuff - "Why does this feature not actually work
| well with this other feature when you combine the
| configuration" etc. It's great!
| travisjungroth wrote:
| On independent QA teams finding trivial bugs, I think this is
| a social problem. Specifically an alignment one. If a bug
| goes out, whose fault is it? Eng for making it or QA for not
| finding it? The answers are different at different orgs, and
| they have more to do with power dynamics than anything else.
|
| QA is pretty easy to fake (for a while). The last thing you
| want to do is quality check your QA team. So the further the
| incentives of the QA team are from the success of the
| product, the worse things get. It's a spectrum from the other
| cofounder doing QA to someone in another country charging by
| the hour.
|
| There are opposing forces to this. It's inherently difficult
| for people to check themselves. Also, QA is its own skill.
| It's one more thing to ask devs to be great at. Maybe it's
| worth having some specific experts around. It's kind of a
| specific mindset that not all devs have.
|
| This mindset issue is where I'm not entirely sold on what
| looks like a new strategy from Rainforest QA (where I used to
| work) of strongly targeting product teams. I haven't seen
| _any_ QA tool so good that it removes the complete need for
| skill, like indoor heating means you don't need to know
| anything about how to tend a fire. The best ones are more
| like how a modern gas range helps a chef. So I question how
| great the results will be if you have a PM or CSM doing it.
| Still, I wish them the best of luck.
| tpmx wrote:
| We had a very large QA silo in a previous company. ~150
| people all working under a former QA (non-developer) person
| turned into a "QA VP". It was a disaster in so many ways.
| Empire-building tendencies, large numbers of bad hires,
| etc.
|
| The solution was to break up this silo and get these people
| into the product teams, where more technical people could
| handle management and hiring.
| ukd1 wrote:
| Thanks for the luck Travis! Def something we've been edging
| around for a while. From what we've been seeing, a lot of
| PMs and no/low code folks already do this kind of thing
| manually, and don't have tooling for it. RF automation is
| now way better / easier to use than when you were with us
| (imho, of course).
| franciscassel wrote:
| When I was a PM, I was lucky to have a big, talented QA team,
| but I _still_ knew I'd have to do a "smoke test" myself
| after every major feature release. I cared the most, and I
| knew the most about the intricacies of the product.
| tpmx wrote:
| I really recognize that feeling.
|
| Also: Bliss is when you as a product owner get to such a
| place that you trust the QA team you've worked with so
| closely so much that you only have to do some basic tests
| a few hours _after_ the release.
| mschuster91 wrote:
| > However, when you ask your Product team to own QA, you get
| the real good stuff - "Why does this feature not actually
| work well with this other feature when you combine the
| configuration" etc. It's great!
|
| That would require a _skilled_ product management that is on
| equal footing with sales and has a veto right _before_ stuff
| gets sold to customers. All too often however, the only thing
| that matters is customer wishes for new features, with no one
| taking care that the tugs the customers are pulling on don't
| rip the product apart.
| tpmx wrote:
| Having previously spent a decade in that role for a $1B
| company: I didn't get a _veto_ before stuff got sold to
| customers. I really, really wanted that in the beginning!
|
| In the end what I ended up doing was spending a lot of time
| packaging the product as well as educating the sales force.
| If you don't have well-packaged products, the sales force
| will sell anything, even if it doesn't exist.
| void_mint wrote:
| I think QA-specific people that are specifically not SDETs create
| bad habits on teams. I think asking developers and product
| owners/managers to "own" QA fixes the "just throw it over the
| wall and hope for the best" problem.
| splitstud wrote:
| But then you're settling for low quality testing. There's an
| excellent methodology a good test team uses on devs that throw
| it over the wall. They throw it back with the first bug report.
|
| That said, it's sometimes the right thing for the team when
| something gets thrown over raw. A good test team should be fine
| with that and play their part.
| void_mint wrote:
| > But then you're settling for low quality testing.
|
| Not necessarily though. I would also argue that manual
| testing is just an insurmountable timesink. You'll never have
| enough time because manual testing balloons to the time
| allotted.
|
| > There's an excellent methodology a good test team uses on
| devs that throw it over the wall. They throw it back with the
| first bug report.
|
| Sure, but continuing a cycle of "not my problem" helps no one
| and wastes time. By removing QA as a team/role/specialization
| and instead making it a step in the process a dev goes through
| to ship software, you fix the broken feedback cycle QA teams
| create.
| franciscassel wrote:
| I've been on teams where the "siloed" QA model seemed to work
| pretty well -- we seemed to find a decent balance between test
| coverage and frequency of releases.
|
| But this was at a cash-rich startup that had lots of money to
| spend on making sure we had plenty of QA headcount. That seems to
| be the exception, rather than the rule. Lots of startups I talk
| to are quite constrained in terms of dedicated QA, so the
| argument for empowering product managers to own quality does make
| some sense to me.
___________________________________________________________________
(page generated 2021-06-30 23:01 UTC)