[HN Gopher] Maybe getting rid of your QA team was bad
       ___________________________________________________________________
        
       Maybe getting rid of your QA team was bad
        
       Author : nlavezzo
       Score  : 263 points
       Date   : 2023-12-14 19:15 UTC (3 hours ago)
        
 (HTM) web link (davidkcaudill.medium.com)
 (TXT) w3m dump (davidkcaudill.medium.com)
        
       | fatnoah wrote:
       | In making the case for building up a QA org at my current
       | startup, I repeat the mantra that QA is both a skillset and a
       | mindset. Automated tests can tell us a lot, but skilled QA
        | testers are amazing at finding edge cases that break things
        | and at providing human feedback about what looks and feels
        | good for users.
        
       | robotnikman wrote:
       | Microsoft is probably the one case where this sticks out the
       | most, at least for me anyways. Noticeably more bugs in updates
       | since they dropped their QA team, in Windows as well as cloud
       | products.
        
         | EvanAnderson wrote:
         | The revenue keeps rolling in so, clearly, they made the right
         | business decision... >sigh<
         | 
         | I use a lot of MSFT software and services in the "day job". I
          | wish there were some kind of consequence for their declining
          | quality.
        
           | danesparza wrote:
           | Almost like you should be able to file a 'bug report' or
           | something. Maybe they should build a team to make sure their
           | code quality is up to snuff...
        
       | alanjay wrote:
        | But Microsoft fired their entire QA department, and their
       | software is rock solid. Right! Right?
        
       | mschuster91 wrote:
       | Another part is that there is barely any training for QA people.
        | Even your average CS course only grazes the surface of the
        | topic, usually some prof droning on about Java unit tests on
        | some really old version of Java and a testing framework just
        | as old.
       | 
        | There are no "Software Quality Assurance" academic degrees,
        | there's barely any research into testing methodologies, and
        | there's barely any commercial engagement in the space aside
        | from test-run environments (aka selling shovels to gold
        | diggers) and, let's face it, tooling. And everything _but_
        | software QA is in an even worse state, with "training" usually
        | consisting of a few weeks of "learning on the job".
       | 
       | Basic stuff like "how does one even write a test plan", "how does
       | one keep track of bugs", "how to determine what needs to be
        | tested in what way (unit, integration, e2e)" is at best
        | cargo-culted within the organization; at worst, everyone is
        | left to reinvent the wheel themselves, and you end up with 13
        | different "testing" jobs in a manually clicked-together
        | Jenkins server for a single project.
       | 
       | > Defect Investigation: Reproduction, or "repro", is a critical
       | part of managing bugs. In order to expedite fixes, somebody has
       | to do the legwork to translate "I tried to buy a movie ticket and
       | it didn't work" into "character encoding issues broke the
       | purchase flow for a customer with a non-English character in
       | their name".
       | 
        | And this would normally not be the job of a QA person; that's
        | first-level support's job. But outsourcing to Indian body
        | shops or outright AI chatbots is cheaper than hiring competent
        | support staff.
       | 
        | That also ties into another aspect I found lacking in the
        | article: _users are expected to be your testers for free_,
        | aka you sell bananaware. No matter whether it's AAA games,
        | computer OSes, phones, even _cars_...
        
         | wmichelin wrote:
         | To play devil's advocate here, I did not _really_ get training
         | for my software engineering role. I got a little bit from 1-2
         | college courses, but the vast majority of my role I had to pick
         | up on the job or on my own.
         | 
         | I can tell you, I definitely didn't get training for QA tasks,
         | but here I am doing them anyways. It's just work that needs to
         | be done.
        
           | mschuster91 wrote:
           | > To play devil's advocate here, I did not _really_ get
           | training for my software engineering role
           | 
            | Yeah, and that is my point. It would be way better for the
            | entire field of QA if there were at least a commonly agreed
            | base framework of concepts and of ways to do, and
            | especially to name, things, if only because the lack of
            | standardization wrecks one's ability to even find testers
            | and makes onboarding them to your "in-house standards" a
            | very costly endeavour.
        
         | Swizec wrote:
         | > There are no "Software Quality Assurance" academic degrees,
         | there's barely any research into testing methodologies,
         | 
         | There's a lot of this actually. Entire communities of people
         | working on software quality assurance. Practitioners in this
         | space call their field "resilience engineering".
         | 
          | The field likes to talk a lot about system design,
          | especially at the intersection of humans and machines. Stuff
          | like _"How do you set up a system (the org) such that
          | production bugs are less likely to make it all the way to
          | users?"_
        
       | wmichelin wrote:
       | This might be my personal experience, but I've never encountered
       | a QA team that actually writes the tests for engineering.
       | 
       | I have only had QA teams that wrote "test plans" and executed
       | them manually, and in rarer cases, via automated browser / device
       | tests. I consider these types of tests to be valuable, but less
       | so than "unit tests" or "integration tests".
       | 
       | With this model, I have found that the engineering team ends up
       | being the QA team in practice, and then the actual QA team often
       | only finds bugs that aren't really bugs, just creating noise and
       | taking away more value than they provide.
       | 
       | I would love to learn about QA team models that work. Manual
       | tests are great, but they only go so far in my experience.
       | 
       | I'm not trying to knock on QA folks, I'm just sharing my
       | experience.
        
         | convolvatron wrote:
         | in the classic model, most QA orgs were a useless appendage.
         | partially by construction, but largely because QA gets squeezed
         | out when dev is late (when does that happen?). they aren't
         | folded in early, so they twiddle their thumbs doing 'test
         | infrastructure' and 'test plans', until they finally get a code
         | drop and a 48 hr schedule to sign off, which they are under
         | extreme pressure to do.
         | 
          | but every once in a while you ran across a QA organization
         | that actually had a deep understanding of the problem domain,
         | and actually helped drive development. right there alongside
         | dev the entire way. not only did they improve quality, but they
         | actually saved everyone time.
        
           | lambic wrote:
           | Not sure why this was downvoted, that second paragraph is
           | right on the money.
        
           | cratermoon wrote:
           | Saying "useless appendage" sounds to me like it's the QA team
           | that's the problem, when what you're really saying is that
           | it's the organization and process that pushed QA teams into
           | irrelevance. I agree with your assessment overall, and those
           | issues were one of the driving forces behind companies
           | dispensing with QA and putting it all on the developers.
        
         | righthand wrote:
         | This is because there is no formal way to run a QA org, people
          | get hired and are told to "figure it out". Then, as other
          | posters said, the other orgs ignore the QA org because they
          | have no understanding of the need. What you're describing is a
         | leadership problem, not a QA usefulness problem.
        
         | philk10 wrote:
          | The engineering team is usually great at writing tests that
          | test their code; a good QA can test alongside them to find
          | cases they've missed and issues that automated code tests
          | can't find. The QA person doesn't have to spend time
          | checking that the app basically works; they can be confident
          | in that and spend their time testing for other 'qualities'.
          | But yes, I've known QA teams that will only find bugs that
          | no one cares about or that are never likely to happen -
          | often because they are not trained on the product well
          | enough to dig deep.
        
           | e28eta wrote:
           | It seems so obvious to me that your typical engineer, who
           | spent hours / days / whatever working on a feature, is never
           | going to test the edge cases that they didn't conceive of
           | during implementation. And if they didn't think of it, I bet
           | they're not handling it correctly.
           | 
           | Sometimes that'll get caught in code review, if your reviewer
           | is thinking about the implementation.
           | 
           | I've worked in payroll and finance software. I don't like it
           | when users are the ones finding the bugs for us.
        
             | philk10 wrote:
             | I started off as a dev, wanted to change to being a
             | tester/QA but was told by the CEO that "the customers are
             | better at finding bugs than we are so just give the app a
             | quick look over and ship it out" - I left soon after that.
        
         | yabones wrote:
         | From what I've seen, the value in QA is product familiarity.
         | Good QA'ers know more about how the product _actually_ works
          | than anybody else. More than PMs, more than sales, and more
         | than most dev teams. They have a holistic knowledge of the
         | entire user-facing system and can tell you exactly what to
         | expect when any button gets pushed. Bad QA'ers are indeed a
         | source of noise. But so are bad devs, sysadmins, T1/2 support,
         | etc.
        
           | wmichelin wrote:
           | Agreed! I did have some good experiences at my last job with
           | the QA team, but it was definitely a unique model. They were
           | really a "Customer Success" team, it was a mix of QA, sales,
           | and customer support.
           | 
           | These "Customer Support" reps, when functioning as QA, knew
           | the product better than product or eng, exactly how you're
           | describing. I did enjoy that model, but they also did not
           | write tests for us. They primarily executed manual test
           | plans, after deploys, in production. They did provide more
            | value than noise, but the engineering team was still the
            | QA team, at least from an automated-test standpoint.
        
             | mbb70 wrote:
             | We had no dedicated QA, but would consistently poach
             | "Customer Success" team members for critical QA work for
              | the exact reasons you listed. Worked quite well for us.
             | 
             | Especially for complex products that are based on users
             | chaining many building blocks together to create something
             | useful, devs generally have no visibility into how users
             | work and how to test.
        
           | hasoleju wrote:
           | I completely agree with that. It really comes down to having
           | the right skills as a QA person. If you don't know how the
           | product is used and only click on some buttons, you will
            | never reach the states in the software that real users
            | reach, and therefore you will also not be able to
            | reproduce them.
        
           | cableshaft wrote:
           | > Good QA'ers know more about how the product actually works
           | than anybody else. More than PM's, more than sales, and more
           | than most dev teams.
           | 
           | Not disagreeing with this, but there's one thing they won't
           | always be aware of. They won't always know what code a dev
           | touched underneath the hood and what they might need to
           | recheck (short of a full regression test every single time)
           | to verify everything is still working.
           | 
           | I know that the component I adjusted for this feature might
           | have also affected the component over in spots X, Y, and Z,
           | because I looked at that code, and probably did a code search
           | or a 'find references' check at some point to see where else
           | it's getting called, and also I usually retest those other
           | places as well (not every dev does, though. I've met some
           | devs that think it's a waste of time and money for them to
           | test anything and that's entirely QA's job).
           | 
           | A good QA person might also intuit other places that might be
           | affected if it's a visible component that looks the same (but
           | either I haven't worked with too many good QA people or that
           | intuition is pretty rare, I'm guessing it's the latter
           | because I believe I have worked with people who were good at
           | QA). Because of that, I do my best to be proactive and go "oh
           | by the way this code might have affected these other places,
           | please include those in your tests".
        
             | hysan wrote:
             | > They won't always know what code a dev touched underneath
             | the hood and what they might need to recheck (short of a
             | full regression test every single time) to verify
             | everything is still working.
             | 
             | This is a good point, but there are some QA that do review
             | code (source: me - started career in QA and transitioned to
             | dev). When making a test plan, an important factor is risk
             | assessment. If QA has a hunch, or better when the dev lead
             | flags complex changes, the test plan should be created and
             | then the code diffs should be reviewed to assess whether or
             | not the plan needed revising. For example, maybe the QA env
             | doesn't have a full replica of prod but a query is
             | introduced that could be impacted if one of the joining
             | tables is huge (like in prod). So maybe we'd adjust the
             | plan to run some benchmarks on a similar scale environment.
             | 
             | I'm definitely biased since I started in QA and loved it.
             | To me, good QA is a cross section of many of the things
             | people have mentioned - technical, product, ops, security -
             | with a healthy dash of liking to break things. However,
             | reality is that the trend has been to split that
             | responsibility among people in each of those roles and get
              | rid of QA. That works great if people in each of those
              | job functions have the bandwidth to take on that QA work
              | (they'll all have a much deeper knowledge of their
              | respective domains), but you'll lose coverage if any one
              | of them doesn't have time to dedicate to proper QA.
             | 
             | (I'll also completely acknowledge that it's rare to have a
             | few, let alone a full team, of QA people who can do that.)
        
             | giantrobot wrote:
             | > Not disagreeing with this, but there's one thing they
             | won't always be aware of. They won't always know what code
             | a dev touched underneath the hood and what they might need
             | to recheck (short of a full regression test every single
             | time) to verify everything is still working.
             | 
             | It doesn't necessarily matter what code was changed, a
             | change in code in Module A can cause a bug in Module B that
             | hasn't been changed in a year. A QA test plan should cover
              | the surface area of the product as used by consumers,
             | whoever they might be. While knowing some module had fixes
             | can inform the test plan or focus areas when the test
             | schedule is constrained, only testing changes is the road
             | to tears.
        
               | cableshaft wrote:
               | Test plans never account for everything, at least in my
               | experience, especially edge cases. And it's rare that
               | I've seen any QA team do a full regression test of the
               | entire site. There's only been a few times where I've
               | seen it authorized, and that's usually after a major
               | refactoring or rewrite.
               | 
               | I'm not in QA, I write code, so I defer to whatever they
               | decide for these things usually, these are just
               | observations from what I've seen.
               | 
               | I just try to make sure I test my code enough that there
               | isn't anything terribly broken when I check it in and
               | fixes I need to make tend to be relatively minor (with a
               | few exceptions in my past).
               | 
                | Also, I'm not necessarily talking about basic
                | functionality here. I'm currently working for a client
                | that's very picky about look and feel, so if an
                | adjustment in one place noticeably shifts a few pixels
                | of padding, or nudges a font color or size, somewhere
                | else, there could be complaints. And a test plan is
                | not likely to catch that, at least not on any project
                | I've worked on.
        
             | asadotzler wrote:
             | >They won't always know what code a dev touched underneath
             | the hood and what they might need to recheck (short of a
             | full regression test every single time) to verify
             | everything is still working.
             | 
             | Not really. As QA I always reviewed the checkins since
             | yesterday before opening up the daily build. Between the
             | bug comments and the patch comments, even if the patch
             | itself is a bit Greek to me, I can tell what was going on
             | enough to be a better tester of that area.
        
           | bbarn wrote:
            | This is a great model, until those people so familiar with
            | the business needs end up... doing business things instead.
           | It's really hard to keep people like that in a QA role once
           | the business recognizes their value. Kind of the same problem
           | with QA automation people - once they become really good at
           | test automation, they are effectively software developers,
           | and want to go there.
        
             | importantbrian wrote:
             | I think that's a compensation problem more than anything
             | else. I've known some QA folks who enjoyed QA and would
             | have stayed in that role if they could have justified the
             | massive differential in comp between QA and SWE or product
             | development. If we valued QA and compensated it at the same
             | level we do those other roles then there would be a lot
             | less difficulty retaining good QA folks.
        
             | taurath wrote:
              | I have never once heard of QA folks moving into project
              | or product management too often being a problem; almost
              | always the problem is not being able to escape the QA
              | org despite many years. Most companies are extremely
              | resistant to people moving tracks, especially from a
              | "lower status" org like QA or CS. It's the exception,
              | not the rule.
        
           | taylodl wrote:
           | To your point, the QA team is the customer's advocate. As you
            | say, they know the product, _from the customer's
            | perspective_, better than anyone else in the development
           | organization.
           | 
           | Where I've seen QA teams most effective is providing more
           | function than "just" QA. I've seen them used for 2nd tier
           | support. I've seen them used to support sales engineers. I've
           | also seen QA teams that take their manual test plans and
           | automate their execution (think Selenium or UiPath) and have
           | seen those automations included in dev pipelines.
           | 
           | Finally, the QA team are the masters and caretakers of your
           | test environment(s), all the different types of accounts you
           | need for testing, they should have the knowledge of all the
           | different browsers and OSes your customers are using, and so
           | forth.
           | 
           | That's _a lot_ for the dev team to take on.
        
             | kgermino wrote:
             | That also means they test from a different perspective than
              | the dev does. If I get a requirement, my build is based
              | on my understanding of that requirement, and so is my
              | testing.
             | 
             | A separate QA person coming at it from the customer's
             | perspective will do a test that's much more likely to
             | reflect reality.
        
         | BiteCode_dev wrote:
          | Frameworks like Playwright can record user actions as code,
          | and you can replay them in a test.
          | 
          | So you can have your QA team create plenty of tests if you
          | give them the right tools.
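A rough sketch of that record-and-replay idea, assuming Playwright's Python API (the URL, selectors, and flow name below are hypothetical; a recorder such as `playwright codegen` emits scripts of roughly this shape):

```python
# Each user action the QA person performs in a browser becomes one
# replayable line of script. URL and selectors here are made up.

def buy_ticket_flow(page):
    """Replay a recorded ticket-purchase journey against a page object."""
    page.goto("https://example.com/movies")
    page.click("text=Buy tickets")
    page.fill("#purchaser-name", "Zoe Muller")  # hypothetical form field
    page.click("button[type=submit]")
```

With real Playwright, `page` would come from `sync_playwright()`; because the recorded steps are plain code, QA can capture them and engineering can keep them running in CI.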
        
           | chopin wrote:
           | In my experience such tests are brittle as hell.
        
             | Osmose wrote:
              | You're not wrong, but a good, well-resourced QA org can
              | both help develop more flexible tests and help fix
              | brittle tests when they do break. The idea of
              | often-breaking brittle tests being a blocker is
              | predicated on practices, like running every type of test
              | on every commit, that exist to compensate for a lack of
              | QA effort in the first place.
             | 
             | Maybe recorded integration tests are run on every release
             | instead of every commit? Maybe the QA team uses them less
             | to pass/fail work and more to easily note which parts of
             | the product have changed and need attention for the next
             | release? There's lots of possibilities.
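One way to sketch that cadence, as hypothetical CI configuration (GitHub Actions syntax assumed; the workflow names and make targets are invented): cheap unit tests gate every commit, while the recorded end-to-end suite only runs when a release is cut.

```yaml
# .github/workflows/unit.yml -- cheap checks run on every commit
name: unit-tests
on: [push]
jobs:
  unit:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: make test-unit

# .github/workflows/e2e.yml -- recorded suite runs once per release
name: recorded-e2e
on:
  release:
    types: [published]
jobs:
  e2e:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: make test-e2e
```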
        
         | shados wrote:
         | > but I've never encountered a QA team that actually writes the
         | tests for engineering.
         | 
         | I have a few times. But the only common thing in the QA
          | industry is that every company does it differently and
          | thinks it's doing it the "normal way".
        
         | closeparen wrote:
         | In my company the engineering team mostly writes unit tests.
         | Then there was a weekly manual QA exercise where the oncall
         | engineer followed a checklist with our actual mobile app on an
         | actual phone before it went to the store. When this started to
         | take almost the entire day, we hired a contract workforce for
         | it. The contract workforce is in the process of automating
         | those tests, but the most important ones still get human eyes
         | on.
        
         | refulgentis wrote:
         | I was surprised by the opposite of this (after entering my
         | first real job at Google, after startup founder => seller.)
         | 
          | People wrote off QA completely unless it meant they didn't
          | have to write tests, but it didn't track from my (rather
          | naive) perspective, since tests are _always_ part of coding.
         | 
         | From that perspective, it seemed QA should A) manage go/nogo
         | and manual testing of releases B) keep the CI green and tasks
         | assigned for red (bonus points if they had capacity to try
         | fixing red) C) longer term infra investments, ex. what can we
         | do to migrate manual testing to integration testing, what can
         | we do to make integration testing not-finicky in the age of
         | mobile
         | 
          | I really enjoyed this article because it also captures the
          | slippery slope I saw there: we had a product that had a _60%
          | success rate_ on setup. And the product cost $200+ to
         | buy. In retrospect, the TL was into status games, not technical
         | stuff, and when I made several breakthroughs that allowed us to
         | automate testing of setup, they pulled me aside to warn me that
         | I should avoid getting into it because people don't care.
         | 
         | It didn't compute to me back then, because leadership
         | _incessantly_ talked about this being a #1 or #2 problem in
         | quarterly team meetings.
         | 
         | But they were right. All that happened was my TL got mad
         | because I kept going with it, my skip manager smiled and got a
         | bottle of wine to celebrate with, I got shuffled off to work
         | with QA for next 18 months, and no one ever really mentioned it
         | again.
        
         | spookie wrote:
          | At least from my knowledge of the gaming world, there are QA
          | devs who do find the issues and fix them if they have the
          | ability to do so, point out code that should be looked at,
          | and all of that. I find it extremely valuable to have
          | another set of eyes on the code with a more focused
          | perspective, sometimes different from the dev's.
        
         | yungporko wrote:
         | yeah my experience is basically the same, usually if a place
          | has qa at all, it's one person in a team who doesn't have an
         | adequate environment or data set to test with and they
         | effectively end up just watching the developer qa their own
         | work and i end up screaming into a pillow every time i see
         | "Tester: hi" pop up on my screen.
         | 
         | the one exception to this was when i was qa (never again) and i
         | made sure we only ever did automated tests. unfortunately
         | management was nonexistent, devs made zero effort to work with
         | us, and naturally we were soon replaced by a cheap offshore
         | indian team who couldn't tell you the difference between a
         | computer and a fridge anyway.
         | 
         | i think a lot of it just stems from companies not caring about
         | qa, not knowing who to hire, and not knowing what they want the
         | people they hire to achieve. "qa" is just like "agile", where
         | nobody can be bothered to actually learn anything about it, so
         | they make something up and then pat themselves on the back for
         | having it.
        
         | marcelr wrote:
          | Weird, I have had the opposite experience: most shit slips
          | through the cracks of automated testing, and manual testing
          | by an experienced QA is 10x more effective.
        
         | tootie wrote:
         | I think the thing missing from a lot of these conversations is
         | what problem domain you're working in. The more "technical"
         | your problem domain is the more valuable automated testing will
         | be over manual. For almost anything based on user experience
         | and especially mass-market customer-facing products, human QA
         | is far more necessary.
         | 
         | In either case, the optimal operating model is that QA is
         | embedded in your product team. They participate in writing
         | tickets, in setting test criteria and understanding the value
         | of the work being done. "Finding bugs" is a low value task that
         | anyone can do. Checking for product correctness requires a lot
         | more insight and intuition. Automated test writing can really
         | go either direction, but typically I'd expect engineers to
          | write unit tests and QA to write e2e tests, and only as much
          | or as little as actually saves time and can satisfactorily
          | indicate success or failure of a user journey.
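The engineer-owned half of that split might look like the following sketch (the function and pricing rules are invented for illustration); the QA-owned e2e half would instead drive the real purchase journey end to end:

```python
# Hypothetical domain logic an engineer covers with fast unit tests.
def ticket_price(age: int, base: float = 12.0) -> float:
    """Ticket price with child and senior discounts applied."""
    if age < 12:
        return base * 0.5   # child discount
    if age >= 65:
        return base * 0.75  # senior discount
    return base

# Engineer-owned unit tests: cheap enough to run on every commit.
assert ticket_price(8) == 6.0    # child
assert ticket_price(70) == 9.0   # senior
assert ticket_price(30) == 12.0  # full price
```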
        
         | sghiassy wrote:
          | I share the same experience: the QA team writes test plans,
          | not "code level" tests.
         | 
         | That said, those test plans are gold. They form the definition
         | of the product's behavior better than any Google Doc,
         | Integration Test, or rotating PM ever could.
        
         | IKantRead wrote:
         | A good QA person is to a software developer as a good editor is
         | to a writer. Both take a look at your hard work and critique it
         | ruthlessly. Annoying as hell when it's happening, but in my
         | experience well worth it because the end result is much higher
         | quality.
         | 
         | I might just be too old, but I remember when QA people didn't
         | typically write tests, they manually tested your code and did
         | all those weird things you were really hoping users wouldn't
         | do. They found issues and bugs that would be hard to
         | universally catch with tests.
         | 
          | Now we foist QA on the user.
         | 
         | Working with younger devs I find that the very concept of QA is
         | something that is increasingly foreign to them. It's astounding
          | how often I've seen bugs get to prod, asked "did it work
          | when you played around with it locally?", and gotten only
          | strange looks: it passed the type checker, why not ship it?
         | 
         | Programmer efficiency these days is measured in PRs/minute, so
         | introducing bugs is not only not a problem, but great because
         | it means you have another PR you can push in a few days once
         | someone else notices it in prod! QA would have ruined this.
        
           | bumby wrote:
            | > _Now we foist QA on the user._
           | 
           | This drives me crazy. It's a cheap way of saying we're ok
           | shipping crap. In the past, I've been part of some QA audits
           | where the developers claimed their customer support log
           | sufficed as their test plan. This wasn't safety-critical
           | software, but it did involve what I would consider medium
           | risk (e.g., regulatory compliance). The fact that they openly
           | admit they are okay shipping bad products in that environment
           | just doesn't make sense to me.
        
         | JohnFen wrote:
         | The type of testing QA should be doing is different from the
         | type of testing that devs should be doing. One doesn't
         | substitute for the other.
        
           | trealira wrote:
           | I remember Steve Maguire saying this in _Writing Solid
           | Code_ (that they're both necessary, and the two types of
           | testing complement each other). He criticized Microsoft
           | employees who
           | relied on QA to find their bugs. He compared QA testing to a
           | psychologist sitting down with a person and judging whether
           | the person is insane after a conversation. The programmer can
           | test from the inside out, whereas QA has to treat the program
           | like a black box, with outputs and effects resulting from
           | certain inputs.
        
         | wombat-man wrote:
         | Microsoft used to test this way, at least in the team I worked
         | with. SDEs still wrote unit tests. But SDETs wrote a lot of
         | automated tests and whatever random test tools that ended up
         | being needed. The idea was to free up more SDE time to focus on
         | the actual product.
         | 
         | I think that era is over after the great SDET layoffs of
         | 2014/2015? Now I guess some SDE teams are tasked with this kind
         | of dev work.
        
         | senderista wrote:
         | Anecdotally, from my time on Windows Vista I remember an old-
         | school tester who didn't write any code, just clicked on stuff.
         | From what I could tell, in terms of finding serious bugs he was
         | probably more valuable than any of the SDETs who did write
         | code. His ability to find UI bugs was just amazing (partly due
         | to familiarizing himself with the feature specs, I think, and
         | partly due to some mysterious natural ability).
        
         | macksd wrote:
         | I've seen good QA teams who own and develop common
         | infrastructure, and can pursue testing initiatives that just
         | don't fit with engineering teams. When developing a new
         | feature, the team developing it will write new tests to cover
         | the functionality of the feature, and will own any failures in
         | those tests moving forward. But while they're doing that, the
         | QA team is operating a database of test failures, performance
         | metrics, etc. that can provide insight into trends, hot spots
         | needing more attention, etc. They're improving the test
         | harnesses and test frameworks so it's easier for the
         | engineering teams to develop new tests quickly and robustly.
         | While the engineering team probably owns all of the unit tests
         | and some integration tests - a dedicated QA team focuses on
         | end-to-end tests, and tests that more accurately recreate real
         | world scenarios. Some features are hard to test well because
         | of non-deterministic behavior, lots of externalities, etc.,
         | and I think QA should be seen as an engineering specialty -
         | sometimes they should collaborate with the feature teams to
         | help them do that part of their job better and teach them
         | appropriate testing techniques that perhaps aren't obvious
         | or common.
         | 
         | I would also second another comment that pointed out that good
         | QA folks often know the real surface area of the product better
         | than anyone. And good QA folks also need to be seen as good QA
         | folks. If you have a corporate culture that treats QA folks
         | like secondary or lesser engineers, that will quickly be a
         | self-fulfilling prophecy. The good ones will leave all the ones
         | who fit your stereotype behind by transitioning into dev roles
         | or finding a new team.
        
         | g051051 wrote:
         | The last two organizations I worked for had full QA teams with
         | people who wrote the tests, not just test plans. The devs
         | sometimes provided features to facilitate it, but the QA teams
         | were the ones that constructed the tests, ran them, and decided
         | if the software was ready to be released. Some things had
         | manual tests, but a large percentage was fully automated.
        
         | solardev wrote:
         | I've only ever had an official QA team in one job, at a Fortune
         | 1000. When I started we didn't have anyone yet, but eventually
         | they hired a mid-manager from India and brought him over (as
         | in relocated his whole family). He then brought on a QA person
         | he had worked with previously.
         | 
         | I did not work well with the mid-manager, who was both my new
         | boss and the QA person's (not too relevant here). However, I do
         | give him credit for the person he hired.
         | 
         | That QA person, a young Indian woman with some experience, was
         | actually _phenomenal_ at her job, catching many mistakes of
         | ours both in the frontend and in the APIs.
         | 
         | She not only did a bunch of manual testing (and thus discovered
         | many user-facing edge cases the devs missed), she wrote all the
         | test cases (exhaustively documented them in Excel, etc. for the
         | higher-ups), AND the unit tests in Jest, AND all the end-to-end
         | tests with Playwright. It drastically improved our coverage and
         | added way more polish to our frontend than we otherwise
         | would've had.
         | 
         | Did she know everything? No, there was some stuff she wasn't
         | yet familiar with (namely DOM/CSS selectors and XPath), and it
         | took some back-and-forth to figure out a system of test IDs
         | that worked well enough for everyone. She also wasn't super
         | fluent with the many nuances of Javascript (but really, who
         | is). There was also a bit of a language barrier (not bad, but
         | noticeable). Overall, though, I thought she was incredible at
         | her job, very bright, and ridiculously hard-working. I would
         | often stay a little late, but she would usually be there for
         | hours after the end of the day. She had to juggle the
         | technical dev/test tasks, the cultural barriers, and
         | managing both up and across (as in producing useless test
         | case reports in Excel for the higher-ups, even though she
         | was also writing the actual tests in code), dealing with
         | complex inter-team dynamics, etc.
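The "system of test IDs" mentioned above usually means a `data-testid` attribute convention, so end-to-end tests don't depend on brittle CSS classes or XPath positions. A minimal sketch of the idea (the helper and the ID names are hypothetical); Playwright's Python API supports the same convention natively via `page.get_by_test_id(...)`:

```python
# Stable-selector convention: developers tag elements with data-testid
# attributes, and QA scripts build selectors from those IDs instead of
# brittle CSS classes or XPath positions.

def by_test_id(test_id: str) -> str:
    """Build a CSS selector targeting a data-testid attribute."""
    return f'[data-testid="{test_id}"]'

# A QA script would then write, e.g. with Playwright's sync API:
#   page.click(by_test_id("submit-order"))
# or use the built-in helper, which defaults to data-testid:
#   page.get_by_test_id("submit-order").click()
print(by_test_id("submit-order"))  # [data-testid="submit-order"]
```

The payoff of the convention is that markup refactors (renamed classes, moved elements) don't silently break the QA suite, which is exactly the back-and-forth the comment describes settling.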
         | 
         | I would work with her again any day, and if I were in
         | management, I'd have promoted the heck out of her, trained her
         | in whatever systems/languages she was interested in learning,
         | or at least given her a raise if she wanted to stay in QA. To
         | my knowledge the company didn't have a defined promotion system
         | though, so for as long as I was there, she remained QA :( I
         | think it was still better than the opportunities she would've
         | had in India, but man, she deserved so much more... if she had
         | the opportunities I did as an American man, she'd probably be a
         | CTO by now.
        
         | me_smith wrote:
         | Hello. I am a QA engineer who writes tests for engineering.
         | Technically, my title is Software Development Engineer in
         | Test (SDET). Not
         | only do I write "test plans", I work on the test framework,
         | infrastructure and the automation of those test plans.
         | 
         | Every company is different in how it implements the QA
         | function, whether it be left to the customer, developers,
         | customer support, manual-only QA, or SDETs. It really comes
         | down to how much leadership values quality, or how
         | leadership perceives QA.
         | 
         | If a company has a QA team, I think the most success comes when
         | QA gets involved early in the process. If it is a good QA team,
         | they should be finding bugs before any code is written. The
         | later they are involved, the later you find bugs (whether the
         | bugs are just "noise" or not) and then the tighter they get
         | squeezed between "code complete" and release. I think that the
         | QA team should have automation skills so more time is spent on
         | new test cases instead of re-executing manual test cases.
         | 
         | Anyways, from my vantage point, the article really hits hard.
         | QA are sometimes treated as second class citizens and left out
         | of many discussions that can give them the context to actually
         | do their job well. And it gets worse as the good ones leave for
         | development or product management. So the downward spiral is
         | real.
        
         | rwmj wrote:
         | That's only your personal experience because our QE team at Red
         | Hat spend a very large amount of their time coding new tests or
         | automating existing ones. They use this framework:
         | https://avocado-vt.readthedocs.io/en/latest/Introduction.htm...
        
         | dylan604 wrote:
         | Unit tests are great when you provide data that the methods
         | expect and are sane. It's not until users get in front of the
         | UI and submit data that you never even thought about testing
         | with your unit tests.
         | 
         | To me, unit tests are great to ensure the code doesn't have
         | silly syntax errors and returns results as expected on the
         | happy path of coding. I would never consider that QA no matter
         | how much you randomize the unit test's input.
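A toy illustration of that gap (the function and inputs are invented for the example): the happy-path unit test passes, while the inputs real users type sail right past it:

```python
def format_display_name(first: str, last: str) -> str:
    """Join a user's first and last name for display."""
    return f"{first.strip()} {last.strip()}"

# Happy-path unit test: exercises exactly the data the author imagined.
assert format_display_name("Ada", "Lovelace") == "Ada Lovelace"

# What users actually submit: empty fields, stray whitespace, pasted
# junk. All of these "pass" the type checker and the happy-path suite,
# but the first two render as a lone space or a leading-space name in
# the UI.
for first, last in [("", ""), ("   ", "Smith"), ("Ada\n", "\tLovelace")]:
    print(repr(format_display_name(first, last)))
```

Nothing here is wrong by the unit test's definition of "works"; it takes a human poking at the form to notice the blank display name, which is the point being made above.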
         | 
         | Humans pushing buttons, selecting items, hovering their
         | mouse over an element, doing all sorts of things for no real
         | reason, will almost always wreck your perfect little unit
         | tests. Why do you think we have session playback now?
         | Because no matter what a dev does to recreate an issue, it's
         | never the exact same thing the user did. And there's always
         | that one little "WTF, does that matter?" type of thing the
         | user did without even knowing they were doing anything.
         | 
         | A good QA team is worth its weight in $someHighValueMineral.
         | I worked with one person that was just _special_ in his ability
         | to find bugs. He was savant like. He could catch things that
         | ultimately made me look better as the final released thing was
         | rock solid. Even after other QA team members gave a thumbs up,
         | he could still find something. There were days when I hated it,
         | but it was always a better product because of his efforts.
        
         | joe_the_user wrote:
         | The QA teams I've seen initially worked the way you
         | describe, except they were valuable.
         | 
         | They weren't there _for_ engineering, they were there for
         | product quality. Their expertise was that they knew what the
         | product was supposed to do and made sure it did it. Things
         | like "unit tests" help development, but they don't make sure
         | the product satisfies client requirements.
         | 
         | If engineering is really on top of it, they learn from QA
         | and QA _seems_ to have nothing to do. But don't let that
         | situation fool you into thinking they are "_just creating
         | noise and taking away more value than they provide_".
        
         | bumby wrote:
         | FWIW, I have seen that same model have some success, provided
         | management is willing to stand up for QA. When QA isn't
         | actively writing tests, they can still provide some balance
         | against human biases that tend toward following the easiest
         | path. In these cases, QA provides an objective viewpoint and
         | backstop to cost and schedule pressures that might lead to bad
         | decisions. This might be most valuable on safety-critical code,
         | but I suppose it can still apply at various levels of risk.
         | 
         | I've seen where this has gone poorly as QA was slowly eroded.
         | It became easier and easier to justify shoddy testing
         | practices. Low-probability events don't come around often by
         | their very nature and it can create complacency. I've seen some
         | aerospace applications have some close calls related to
         | shortcomings in QA integration; in those cases, luck saved the
         | day, not good development practices.
        
         | earth_walker wrote:
         | I work with the regulated drug development industry, and
         | believe there is a useful and important distinction between
         | Quality Control (QC) and Quality Assurance (QA). I wonder if
         | perhaps this distinction would be useful to software quality
         | too.
         | 
         | QC are the processes that ensure a quality product: things like
         | tests, monitoring, metrology, audit trails, etc. No one person
         | or team is responsible for these, rather they are processes
         | that exist throughout.
         | 
         | QA is a role that ensures these and other quality-related
         | processes are in place and operating correctly. An independent,
         | top-level view if possible. They may do this through
         | testing, record reviews, regular inspections and audits,
         | document and procedure reviews, and metrics analysis.
         | 
         | Yes, they will probably test here and there to make sure
         | everything is in order, but this should be higher level -
         | testing against specifications, acceptability, and
         | regulatory requirements, perhaps some exploratory testing,
         | etc.
         | 
         | Critically, _they should not be the QC process itself_:
         | rather, they should be making sure the QC process is doing
         | its job.
         | QA's value is not in catching that one rare bug (though they
         | might), but in long term quality, stability, and consistency.
        
         | acdha wrote:
         | I think that complicates conversations like this. I've seen
         | QA people ranging from the utterly incompetent, to people
         | who knew the product and users better than anyone else, to
         | people writing code and tackling gnarly performance or
         | correctness issues.
         | 
         | If your company hires the low end of that scale, any approach
         | is going to have problems because your company has management
         | problems. It's very easy to take away a lesson like "QA is
         | an outdated concept", because that's often easier than
         | acknowledging the broken social system.
        
       | cloths wrote:
       | I couldn't agree more.
       | 
       | > Focus: There is real value in having people at your company
       | whose focus is on the quality of your end product. Quality might
       | be "everybody's job"...but it should also be "somebody's job".
       | Yes indeed; naturally every person has just one focus, so
       | having a dedicated person focus on QA is important.
       | 
       | Another practice, or buzzword (or what used to be a buzzword
       | :) ), is Exploratory Testing, which can pretty much be
       | conducted only by dedicated QA.
        
         | philk10 wrote:
         | That's pretty much my role - I don't write test cases, I'll
         | explore the system and try to find issues that the devs have
         | missed. Then they learn from what they missed so I have to
         | explore more to find other types of issues.
        
       | gyudin wrote:
       | Getting rid of QA teams is frequently a slow-ticking bomb
       | imho, because some issues might not even be breaking
       | functionality. You can mess up some tracking/analytics and
       | managers will make wrong decisions based on incorrect data.
       | But personally I feel like within a few years everything might
       | change a lot. Machines will 100% be better at coding,
       | maintenance, and testing things.
        
       | harshalizee wrote:
       | This is a symptom of a larger issue in tech where C-suites are
       | trying really hard to turn engineers into some sort of
       | fungible cogs that can be swapped in and out of different
       | parts of the system and still have everything work perfectly.
        
         | righthand wrote:
         | Yep, I see this a lot: people want swappable engineers, and
         | no one understands that if you have an engineer working in
         | the frontend most of the time, they will not be acquainted
         | with backend work. Nor is there a need or a logical way to
         | keep everyone working across the stack just to keep them in
         | this swappable state. Each time you change a person's job
         | they need retraining and orientation. Code is code, but a
         | drop-down menu is not a database insert.
        
       | righthand wrote:
       | QA Engineers are some of the best debuggers too. They have their
       | hands in the pipeline, src and test directories, and often work
       | with all aspects of developing and deploying the application.
       | 
       | When I was a QA lead I often ran into software engineers that
       | couldn't be bothered to read a pipeline error message (and would
       | complain daily in Slack) and when it came to optimizing the
       | pipeline they would ignore base problems and pretend the
       | issues that stemmed from them were magical and not understood,
       | wasting days guessing at a solution.
       | 
       | The disrespect a QA engineer sees is not exaggerated in this
       | article. Since most companies with QA orgs do not have a rigorous
       | interviewing process like the Engineering orgs, the QA engineers
       | are seen as lesser. The only SWE that have respect for them that
       | I've met are the people who worked in QA themselves. The
       | disrespect is so rampant that I myself have switched back to
       | the Engineering org (I tried using seniority as a principal
       | engineer and even shifted into management to make changes, but
       | this failed because Engineering could not see past its own
       | hubris and leadership will not help you). My previous company,
       | before I was laid off, hired a new CTO who claimed we could
       | just automate away QA needs, but she had no examples of what
       | she was talking about.
       | This is the level of respect poured down from the top about
       | building good software.
        
       | ptmcc wrote:
       | I stepping-stoned through QA on my way into development, now a
       | decade or so ago, and this part stands out as especially true
       | in my experience:
       | 
       | > This created a self-reinforcing spiral, in which anyone "good
       | enough at coding" or fed up with being treated poorly would leave
       | QA. Similarly, others would assume anyone in QA wasn't "good
       | enough" to exit the discipline. No one recommends the field to
       | new grads. Eventually, the whole thing seemed like it wasn't
       | worth it any more. Our divorce with QA was a cold one --
       | companies just said "we're no longer going to have that function,
       | figure it out."
       | 
       | I've worked with a handful of talented career software QA people
       | in the past. The sanity they can bring to a complex system is
       | amazing, but it seems like a shrinking niche. The writing was on
       | the wall for me that I needed and wanted to get into fulltime dev
       | as QA got increasingly squeezed, mistreated, and misunderstood.
       | At so many companies QA roles went into a death spiral and
       | haven't recovered.
       | 
       | Now, as the author points out, a lot of younger engineers have
       | never worked with a QA team in their career. Or maybe have worked
       | with a crappy QA team because it's been devalued so much. So
       | many people have never seen good QA that no one advocates for
       | it.
        
         | reactordev wrote:
         | >anyone in QA wasn't "good enough"
         | 
         | This is why. Engineers have some of the most inflated egos,
         | and they set an extremely high bar for being "part of the
         | club". Sometimes that's corporate policy (hire better than
         | you) and sometimes it's just toxicity (I am better than
         | you), without realizing that the most valuable skills they
         | could learn are soft skills. I'm open to finding anyone
         | willing to code.
         | Whether it's from QA, sales, Gerry's nephew, recent CS grad,
         | Designer turned coder, or that business analyst that taught
         | themselves Python/Pandas.
         | 
         | A good QA team is sorely missed. A bad QA team turns the whole
         | notion of QA teams sour. Just the same for development teams :D
         | 
         | I think devs are the first line of defense. Unit tests etc. QA is
         | second line (should we release?), feature testing, regression,
         | UX continuity, etc. There's value in it if you can afford it.
        
       | annoyingnoob wrote:
       | I remember a time before agile and devops. Seems like QA has
       | always been looked down on, and always considered a bottleneck.
        
       | oaththrowaway wrote:
       | I had a boss at Yahoo who gave our QA to another team because
       | "Facebook doesn't use QA, we shouldn't either". I can't remember
       | if it was Facebook or MS, but he was willing to buy all of us a
       | book talking about how amazing it was.
       | 
       | Long story short, it wasn't. It was like taking away a crutch. Of
       | course we could have been more diligent about testing before
       | having QA validate it, but it slowed development down so much
       | trying to learn all the things we never thought to test that QA
       | did automatically.
        
         | robocat wrote:
         | An article about Facebook's reason for no QA with some of the
         | mitigations:
         | 
         | https://blog.southparkcommons.com/move-fast-or-die/
         | 
         | A bit recent to have affected Yahoo - but it sells a good
         | story.
         | 
         |     We would celebrate the first time someone broke
         |     something.
         | 
         |     Let anyone touch any part of the codebase and get in
         |     there to fix a bug or build a feature. Yes, this can
         |     cause bugs. Yes, that is an acceptable tradeoff. We had
         |     a shared mythology about the times that the site went
         |     down and we all rallied together to quickly bring it
         |     back up.
         | 
         | Sounds like hell: running as close to the edge of the cliff as
         | you can. Presumably totally ignoring thousands of papercuts of
         | slightly broken functionality. Optimising to produce an
         | infinite number of shallow bugs.
        
       | amlozano wrote:
       | This point is brought up in the article but I think it is at the
       | real heart of the issue.
       | 
       | QA is almost always seen as a 'cost center' by the business and
       | upper management. I have a hypothesis that you never ought to
       | work in a department that is seen as a 'cost center'. The
       | bonuses, the recognition, and the respect always goes to the
       | money makers. The cost center is the first place to get more
       | work with fewer hands, get blamed for failures, and ultimately
       | fired when the business needs to slim down. I think the same
       | thing
       | applies to IT.
       | 
       | This spiral is why QA will always be a harder career than just
       | taking similar skills and being a developer. It
       | self-reinforces: the best people get fed up and switch out as
       | soon as they can.
        
         | robofanatic wrote:
         | > QA is almost always seen as a 'cost center' by the business
         | and upper management
         | 
         | Well, everything involved in making a product is seen as a
         | cost; that includes the entire development team - QA,
         | Developers, Devops, PM ....
        
           | rubidium wrote:
           | No. That's not actually how most orgs break it down. R&D,
           | marketing, and sales are "bringing new business", so they
           | are profit centers. This means their budgets grow with
           | revenue. Manufacturing, QA, IT, and service are cost
           | centers, so they get squeezed year-over-year even if
           | revenue is flat.
        
         | serial_dev wrote:
         | Even as a developer (mobile app developer) I feel like one has
         | to be careful not to work on "cost center" things.
         | 
         | Accessibility, observability, good logging, testing
         | infrastructure improvements, CI/CD tweaks, stability, better
         | linting and analyzer issues are all important, but you will be
         | rewarded if you ship features fast.
         | 
         | This year I spent too much time on the former because I felt
         | like that's what the team and app needed and nobody on the
         | team prioritized these issues, and now I'll be sweating the
         | end-of-year performance reviews.
         | 
         | Now knowing this, I understand why the others didn't want to
         | work on these items, so next year, I'll be wiser, and I'll
         | focus on shipping features that get me the most visibility.
         | 
         | Sorry for the bugs in the app, but I need a job to pay my
         | mortgage.
        
           | tstrimple wrote:
           | The purpose behind all those things you were pursuing (apart
           | from accessibility) should have been to increase the rate at
           | which the team is able to ship features. If your work on
           | these items over the course of a year hasn't demonstrably
           | improved delivery speed, then what value did it actually
           | bring? If it has improved delivery speed and you can show
           | evidence for that, why would you be nervous going into a
           | review?
        
             | serial_dev wrote:
             | I'm not sure showing evidence that the changes I made
             | improved the delivery speed is as easy as you make it out
             | to be.
             | 
             | Over the year, there are many things that influence the
             | team's productivity, and it is hard to measure to begin
             | with,
             | and in the end, even if I managed to do it, the case I'll
             | be making is that "I made Tom and Gina slightly faster, I
             | promise it was me behind their success".
             | 
             | Not an easy sell, and honestly not even a good look;
             | it's much easier and safer to just focus on shipping
             | features myself.
        
         | bumby wrote:
         | I agree with the 'cost center' sentiment, but I'll try to add
         | some nuance from my experience.
         | 
         | 1) Some organizations have come to really value what QA/QC
         | brings to the table. From my experience, this seems to be more
         | visible in manufacturing than software. I speculate this is
         | because software is more abstract by its very nature and waste
         | is harder to track.
         | 
         | 2) The really good QAs are those who really believe in its
         | mission, rather than those who are looking for the path of
         | least resistance.
         | 
         | Both of those underscore that the value lies in
         | organizations and individuals who really buy in to the QA
         | ethos. There are lots of examples of both that are simply
         | going through the motions.
        
         | perlgeek wrote:
         | > I have a hypothesis that you never ought to work in a
         | department that is seen as a 'cost center'
         | 
         | That's why I don't work in an IT department of a traditional
         | business.
        
       | hasoleju wrote:
       | I completely agree with the sentiment of this article: It is a
       | big problem that being a "software tester" is not at all as
       | prestigious as being a software engineer. Having someone who
       | really understands how the users interact with the software and
       | systematically covers all behavior in test cases is very
       | valuable.
       | 
       | I experienced both worlds: I worked in an organization where 4 QA
       | engineers tested each release that was built by 6 software
       | engineers. Now I'm in a situation where 0 QA engineers test the
       | work of 8 software engineers. In the second case the software
       | engineers actually do all the testing, but not that
       | systematically because it's not their job to optimize the testing
       | process.
       | 
       | Having someone with the capabilities of a software engineer
       | whose daily work is uncovering defects and verifying
       | functionality is important. Paying someone who owns the
       | testing process is more than commercially justified. The
       | problem is: you don't find those people, for various reasons.
       | Therefore you are stuck with making the software engineers do
       | the testing.
       | 
       | But there is hope. A new standard has been released for my
       | industry that requires organizations to have a QA department
       | independent of the software engineering department. If they
       | don't have that, they are not allowed to roll out their
       | software product as compliant with the standard. Maybe this
       | will help reintroduce the QA Engineer as an important and
       | prestigious role.
        
         | zabzonk wrote:
         | what industry is that?
        
           | hasoleju wrote:
           | Mechanical engineering and shopfloor software for factory
           | automation in Europe. There is a new IT security standard
           | released that also includes requirements for the software.
        
       | godelski wrote:
       | The main problem with QA teams is the same problem with IT teams
       | or even management. If they are doing their jobs well they appear
       | to be doing nothing.
       | 
       | This often creates a situation where people need to "justify"
       | their jobs. Usually this happens due to an overreliance upon
       | metrics (see Goodhart's Law) rather than understanding what
       | the metrics are proxying and what the actual purpose of the
       | job is. A bad QA team is one that is overly nitpicky, looking
       | to ensure it has something to say. A good QA team
       | simultaneously checks for quality and trains employees to
       | produce higher quality.
       | 
       | I do feel like there is a lack of training going on in the
       | workforce. We had the "95%-ile isn't that good"[0] post on the
       | front page not long ago and literally it is saying "It's easy to
       | get to the top 5% of performers in any field because most
       | performers don't actively train or have active feedback." It's
       | like the difference between being on an amateur sports team vs a
       | professional. Shouldn't businesses be operating like the latter?
       | Constantly training? It should also make hiring be viewed
       | differently, as in "can we turn this person into a top
       | performer?" rather than "are they one already?", because the
       | latter isn't as meaningful as it appears when your environment
       | is vastly different from the one where success was
       | demonstrated.
       | 
       | [0] https://news.ycombinator.com/item?id=38560345
        
         | philk10 wrote:
         | When I work with new devs I can often trip them up with basic
         | tests of double-clicking, leading spaces, using the back button
         | on Android. They then learn these and from then on these issues
         | don't appear (well, OK, it might take a couple of times of
         | a ticket being rejected, but they do quickly learn all my
         | tricks). I don't get measured on bugs found, so
         | there's no pressure on me to find stupid bugs just to boost my
         | figures.
        
           | hotpotamus wrote:
           | I had a buddy who liked to do that kind of thing. I think his
           | favorite trick was just to enter nothing in a form and hit
           | enter to see what happens. It's probably his favorite because
           | it deleted the database on some little thing I wrote at one
           | point and we got a good laugh over it and I got a good lesson
           | out of it.
        
             | philk10 wrote:
             | Yep, I start off by entering nothing, then just spaces,
             | then special characters and then finally get to entering
             | some typical data
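The progression philk10 describes (nothing, then spaces, then special characters, and only then typical data) can be sketched as a small automated check. Everything below is invented for illustration; the `normalize_username` function and its rules are not from any codebase mentioned in the thread:

```python
# A minimal sketch of the "hostile inputs first" QA progression.
# All names here are hypothetical.

def normalize_username(raw: str) -> str:
    """Trim whitespace and reject empty or non-printable input."""
    cleaned = raw.strip()
    if not cleaned:
        raise ValueError("username must not be empty or whitespace-only")
    if any(not ch.isprintable() for ch in cleaned):
        raise ValueError("username contains non-printable characters")
    return cleaned

# Hostile inputs first, happy path last, mirroring the progression above.
HOSTILE = ["", "   ", "\t\n", "alice\x00"]
TYPICAL = ["alice", "  bob  "]

def run_checks() -> list:
    """Return the hostile inputs that were wrongly accepted."""
    wrongly_accepted = []
    for raw in HOSTILE:
        try:
            normalize_username(raw)
            wrongly_accepted.append(raw)   # should have raised
        except ValueError:
            pass                           # rejected, as it should be
    for raw in TYPICAL:
        assert normalize_username(raw) == raw.strip()
    return wrongly_accepted
```

The point of ordering the cases this way is that the hostile inputs encode the tester's tricks once, so regressions on them are caught automatically from then on.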
        
           | godelski wrote:
           | > I don't get measured on bugs found so there's no pressure
           | on me to find stupid bugs just to boost my figures.
           | 
           | Sounds like the right incentive structure. If you don't mind,
           | how are you judged? Do you feel like the system you're in is
           | creating the appropriate incentives and actually being
           | effective? Certainly this example is but I'd like to know
           | more details from an expert so I can update my understanding.
        
             | philk10 wrote:
             | The system is really effective, I wanted to work at a place
             | where the cliche "everyone cares about quality" is actually
             | true and I found it - devs test, designers test, I test,
             | customer has the chance to test the latest build every 2
             | weeks so that we can check that our quality checks are
             | aligning with theirs. It gets to be a game of 'can the devs
             | get it past my checks' and 'can I find new ways to trip
             | them up' which builds up confidence in each others skill
             | levels.
        
         | mattgreenrocks wrote:
         | > Shouldn't businesses be operating like the latter? Constantly
         | training?
         | 
         | There's a rampant cultural mind-virus arguing that the 95th
         | percentile is somehow _tons_ of work (rather than a lack of
         | unforced mistakes), so everyone just writes it off. It's on
         | full display at this very site. Just look on any post involving
         | software quality, and read a bunch of comments suggesting
         | widespread apathy from engineers.
         | 
         | Obviously every situation is different, but people seem to be
         | pretty okay with relinquishing agency on these things and just
         | going along with whatever local maxima their org operates in.
         | It's not totally their fault, but they're not blameless either.
        
           | godelski wrote:
            | Yeah, it is strange that people believe work and quality
            | scale linearly, considering how well known pareto/power
            | distributions are. These distributions are extremely
            | common too. I mean, we even codify the sentiment in the
            | 80/20 rule, or say that 20% of time is writing code and
            | 80% is debugging it. What's interesting is this effect is
           | scalable. Like you see this when comparing countries by
           | population but the same distribution (general shape) exists
           | when looking at populations of states/regions/cities (zooming
           | in for higher resolution) for any given country or even down
           | to the street level.
           | 
           | > Obviously every situation is different, but people seem to
           | be pretty okay with relinquishing agency on these things and
           | just going along with whatever local maxima their org
           | operates in. It's not totally their fault, but they're not
           | blameless either.
           | 
           | I agree here, to the letter. I don't blame low level
           | employees for maximizing their local optima. But there's two
           | main areas (among many) that just baffles me. The first is
           | when this is top down. When a CEO and board are hyper focused
           | on benchmarks rather than the evaluation. Being unable to
           | distinguish the two (benchmarks and metrics are guides, not
            | answers). The other is when you have highly trained and
            | educated people actively ignoring this situation:
            | relying too heavily on metrics while refusing to
            | acknowledge that metrics are proxies, when weighing the
            | nuances they were explicitly trained to look for is what
            | meaningfully distinguishes them from less experienced
            | people. I've been trying to coin the term Goodhart's
            | Hell to describe this more general phenomenon because I
            | think it is a fairly apt and concise description. The
            | phenomenon seems widespread,
           | but I agree that the blame has higher weight to those issuing
           | orders. Just like a soldier is not blameless for their
           | participation in a war crime but the ones issuing the orders
           | are going to receive higher critique due to the imbalance of
           | power/knowledge.
           | 
            | Ironically, I think we need to embrace the chaos a bit
            | more. But that means recognizing that ambiguity and
            | uncertainty are inescapable, not abandoning any form of
            | metric altogether. I think modern society has gotten so
            | good at measuring that we often forget our tools are
            | imprecise, whereas previously the imprecision was so
            | apparent that it was difficult to ignore. One could call
            | this laziness, but considering it's systematic I'm not
            | sure that's the right word.
        
         | JohnMakin wrote:
         | > This often creates a situation where people need to "justify"
         | their jobs.
         | 
         | On DevOps teams I see this constantly. Usually the best-
         | compensated or most senior "Ops" guy or whatever they're called
         | at the company spends a lot of his time extinguishing fires
         | that were either entirely of his own creation/incompetence,
         | which makes it look like he's "doing something." You automate
         | away the majority of the toil there and this person doesn't
         | have a job, yet this pattern is so insanely common. There's
         | little incentive to do it right when doing it right means
         | management thinks you sit there all day and do nothing.
        
           | godelski wrote:
           | That's a fantastic example of what I'm trying to describe.
           | It's kinda like thinking hours worked is directly
           | proportional to widgets produced. There certainly are jobs
            | and situations where this relationship holds (you can't
            | sell widgets if you aren't manning the store, and can't
            | produce widgets if the crank isn't being turned). But
            | modern widgets don't work that way; they are more
            | abstract. Sometimes fewer hours create more widgets,
            | sometimes the reverse. Widget production is now
            | stochastic, especially in fields where creativity and
            | brain power are required. (Using widgets in the
            | generalized, economic sense; insert any appropriate
            | product, or ask and I'll clarify.)
        
           | tstrimple wrote:
           | Which is one reason DevOps teams don't make sense. DevOps is
           | a skill developers need to have. It needs to be embedded
           | within the development team, not some other team's
           | responsibility who only focuses on "DevOps" work. You create
           | the build and deployment pipelines and move on to other
           | project work. If you give someone a role and say their job is
           | to do "DevOps" they will HAVE to invent things to do because
           | that's such a small part of a project and once implemented
           | doesn't need a ton of maintenance.
        
             | ponector wrote:
              | It is not about DevOps. In every organization, if you
              | do your work well and have no fuckups, you will not be
              | recognized and promoted. But if you are a hero
              | firefighter, managers will love you and help with your
              | promotion.
              | 
              | Because visibility is key! Key to everything in
              | corporate life. If your work is not visible to
              | managers, you are doing nothing.
        
           | slv77 wrote:
           | Sometimes people want to be firefighters to protect the
           | people they serve but others love how being in a crisis makes
            | them feel alive. The latter are why arson investigators first
           | look at firefighters when doing an arson investigation. Some
           | of them need the fires and will start them just to fight
           | them.
           | 
           | There are a lot of these types that gravitate to crisis
           | management roles like DevOps.
        
         | bsder wrote:
         | > Shouldn't businesses be operating like the latter? Constantly
         | training?
         | 
         | "But then they'll leave for somewhere else for more money."
         | </sarcasm>
         | 
         | Literally every company I have worked for. Meaningful training
         | was _always_ an uphill battle.
         | 
         | However, good training also requires someone _good and broad_
         | in your technical ladder. They may not be the most up-to-date,
         | but they need to be able to sniff out bullshit and call it out.
         | 
         | FAANG is no exception, either.
        
       | itqwertz wrote:
       | I noticed this trend a couple of years ago; they called it
       | Shift-Left.
       | 
       | Basically, it was a way to get a developer to do more testing
       | during development before handing over a feature to QA. This sets
       | up the imagined possibility of firing all QA staff and having
       | developers write perfect code that never needs to be tested
       | thoroughly. Looks great on paper...
       | 
       | At a previous company, they started firing all of the manual QA
       | devs and replacing them with offshore QA people who could do
       | automated testing development (Cypress tests). The only problem
       | was that those fired QA team members had significant business
       | domain knowledge that never was transferred or written down. The
       | result was a dumpster fire every week, with the usual offshore
       | nightmares and panicked responses.
       | 
       | Make no mistake about this, it's just a cost-cutting measure with
       | an impact that is rarely felt immediately by management. I've
       | worked with stellar manual QA people and our effort resulted in
       | bulletproof software that handled edge cases, legacy customers,
       | API versions, etc. Without these people, your software eventually
       | becomes IT-IS-WHAT-IT-IS and your position is always threatened
       | due to the lack of quality controls.
        
       | evilantnie wrote:
       | QA has always been about risk management. There are multiple ways
       | to manage risk, and some of those ways can be more cost effective
       | to a business. As software shifted towards SaaS offerings,
       | deployments (and rollbacks) became quicker, customer feedback
       | loops also got lightning fast. Teams can manage the risk of a
       | bug more efficiently by optimizing for mean-time-to-recovery.
       | This muscle is not one that QA teams are particularly optimized
       | for, thus their effectiveness in this new model was reduced. I've
       | found that holding on to QA function in this environment can
       | severely dilute the ownership of quality as a requirement from
       | engineers.
       | 
       | QA is still extremely valuable in any software that has long
       | deployment lead times. Mobile apps, On-Prem solutions, anything
       | that cannot be deployed or rolled back within minutes can benefit
       | from a dedicated QA team that can manage the risk appropriately.
        
         | spinningD20 wrote:
         | There are so so many instances where "rolling back" is just not
          | a feasible solution. Working for a SaaS company with
          | mobile/web/API products, huge DBs, migrations, and payroll
          | running through the product, rolling back is and should
          | always be a LAST RESORT. In 99% of cases, something
          | significant enough to warrant a rollback usually results
          | in a "hot patch" workflow instead, because rolling back
          | carries risks of its own.
         | 
         | > QA has always been about risk management.
         | 
         | 100%.
         | 
         | QA should be related to identifying risk, likelihood of
         | failure, impact of failure to user, client and company. The
          | earlier this is done in the varying processes, the better.
          | ("Shift left", though I've seen a ton of differences in
          | how people describe it; generally, QA should start getting
          | involved in the design phase.)
         | 
         | Another example from my own first-hand experience:
         | 
         | A company I worked for made a product that plugged into
         | machines that were manufacturing parts, and based on your
         | parameters it would tell you whether or not the part was "good"
         | or "bad".
         | 
          | When I interviewed the company's leadership, as well as
          | the majority of the engineering group, and asked "what is
          | the biggest risk with this product?", they all said "if
          | the product locks up!".
         | Upon further discussion, I pulled out a much larger, insidious
         | risk; "what if our product tells the client that the part is
         | 'good' when it is not?"
         | 
         | In this example, the part could be involved in a medical device
         | that keeps someone alive.
         | 
         | You're not going to be able to roll that back.
        
       | blastbking wrote:
       | I agree with the sentiment of this article, but the
       | disturbing AI-generated images in every paragraph were
       | definitely not necessary. Do people actually need to see
       | these?
        
         | jawns wrote:
         | It did strike me as ironic that the article is about ruthlessly
         | automating to avoid paying QA engineers, and it uses AI to
         | avoid paying illustrators.
        
           | willsmith72 wrote:
           | not saying this is you, but i get so tired of feedback about
           | ai-generated images along the lines of "you're taking money
           | away from local artists"
           | 
           | it's not one or the other. in my experience it's a decision
           | of "no images" vs "ai images".
           | 
           | in this case, probably "no images" would've been better for
           | the reading experience. but there was never any illustrator
           | getting paid
        
         | Night_Thastus wrote:
         | I'd rather have stock or existing images, or none at all.
         | 
         | Every time I looked at one of those AI images my brain just
         | kept seeing all the little weird parts that didn't make sense.
         | Like a brain itch.
        
       | somewhereoutth wrote:
       | Systems used by humans have to be tested by humans. Those testers
       | can either be your customers, or your QA team - as your dev and
       | sales teams will be busy doing other things.
        
       | amtamt wrote:
       | > The most conscientious employees in your organization are the
       | most bitter. They see the quality issues, they often address
       | them, and they get no recognition for doing so. When they speak
       | up about quality concerns, they get treated like mouthbreathers
       | who want to slow down. They watch the "move fast and break
       | things" crowd get rewarded time after time, while they run around
       | angrily cleaning up their messes. To these folks, it feels like
       | giving a damn is a huge career liability in your organization.
       | Because it is.
       | 
       | This is the bitter truth that no one wants to acknowledge.
       | 
       | DBAs and Infra are in the same boat as QAs. The pendulum
       | will swing back before too long, I hope.
        
         | esafak wrote:
         | No, just jump ship and let the damn company fail. Fail faster,
         | haha! This is how we have nice things; when bad companies are
         | not propped up.
        
       | karmakaze wrote:
       | Having a QA team is like having an Ops team with stuff being
       | 'thrown over the wall' to the downstream.
       | 
       | There are two kinds of testing. Regression testing should be
       | automated, written, and maintained by devs. New-feature or
       | change testing should be done by those who defined the
       | features, namely Product people. In the best case it's an
       | iterative and
       | collaborative process, where things can be evaluated in dev/test
       | environments, staging environments, or production for beta flag
       | enabled test users.
        
       | Zelphyr wrote:
       | I worked at two companies 15-20 years ago that invested in top-
       | tier QA teams. They were worth their weight in gold. The products
       | were world class because the QA team were fantastic at finding
       | bugs we developers didn't think of looking for because we were
       | too close to the problem. We are too used to looking at the happy
       | path.
       | 
       | One key attribute of both companies is that it was dictated from
       | on high that the QA team had final say whether the release went
       | to production or not.
       | 
       | These days companies think having the developers write automated
       | tests and spend an inordinate amount of time worrying over code
       | coverage is better. I can't count how many products I've seen
       | with 100% code coverage that objectively, quantifiably don't
       | work.
       | 
       | I'm not saying automated testing is bad. I'm saying, just as the
       | author does, that doing away with human QA testers is.
        
         | supportengineer wrote:
         | I've seen QA/QE greatness and it was similar to how you
         | describe. A different chain of command for deciding if releases
         | are certified for production. Different incentive structures as
         | well.
         | 
         | Not to mention, at one recent employer, the QE team wrote an
         | enormous amount of code to perform their tests - It was more
         | LOC than the modules being tested/certified.
        
           | eitally wrote:
           | I had a team like that once. It was glorious. And ultimately,
           | I'm convinced it led to overall faster development cycles
           | because the baseline code quality & documentation was so much
           | better than it would have been without such a great QA
           | manager. The QA team, of course, was also technical -- mostly
           | with SWE backgrounds -- and they were primarily colo'd in the
           | same office as the dev team. I still remember the epiphany
           | everyone had one planning cycle when it was mutually
           | understood that by generally agreeing to use TDD, the QA team
           | could participate actively in the _real_ engineering planning
           | and product development process.
           | 
           | ... Then I left and my CIO let go the onshore QA team in
           | favor of near term cost savings. Code quality went way down
           | and within a year or two several apps needed to be entirely
           | rewritten. Everything slowed down and people started pointing
           | fingers, and before you knew it, it was time for  "cloud
           | native rearchitecting/reengineering" which required an SI to
           | come in with "specialists".
        
           | esafak wrote:
           | So how was QA incentivized?
        
         | fizx wrote:
         | How often did you release?
        
         | alkonaut wrote:
         | I love my really thorough QAs. Yes, it's an antipattern to
         | let me as a dev lean too much on them catching what I
         | won't. But where I dread even running the code for a
         | minute, they enjoy
         | it. They take pride in figuring out edge cases far beyond any
         | spec. They are definitely worth their weight in gold. It lets
         | developers have confidence when changing things in the same
         | sense a good type system does. For some classes of very
         | interactive apps (e.g games) having unit and integration tests
         | just doesn't cover the parameter space.
        
           | ponector wrote:
           | People here are talking about skillful QA worth their weight
           | in gold.
           | 
            | Unfortunately, the people in the industry who have
            | actual power over budgets don't think so. The article is
            | right: QA engineers are now viewed as janitors. No one
            | respects them; better to outsource to a cheap location.
        
         | PH95VuimJjqBqy wrote:
         | The QA culture has to be there, not just a dictate that QA
         | has the final say.
         | 
         | I've seen companies where that's true and it was still trash
         | because the QA were mostly low-paid contract workers who only
         | did exactly what they were told and no more.
        
           | onlyrealcuzzo wrote:
           | Worked at a company where QA had the final say - and that was
           | by far the most toxic / worst environment I have ever been
           | in.
           | 
            | QA also REFUSED to let developers write automation
            | tests, and REFUSED to let us run them ourselves.
           | 
           | What a nightmare.
           | 
           | YMMV, but just having the final say is not a silver bullet
           | for sure.
        
             | mikestew wrote:
             | That's not because "QA had the final say", it's because
             | your QA team were ass clowns. Any QA team that discourages
              | dev from writing or running tests needs to be burnt
              | to the ground and rebuilt.
        
               | ponector wrote:
                | Unless a dev switches to writing tests 100% of the
                | time and becomes a tester, it is highly inadvisable
                | to let developers write the tests.
                | 
                | That QA was not a clown; they'd seen some shit...
                | 
                | Would you let QA write features in your production
                | code?
        
               | ahtihn wrote:
               | Test code is just code. If you can write test code you
               | can write production code. If you can write production
               | code you can write tests.
               | 
               | If your concern is that devs don't have the right mindset
               | for testing, you can have them collaborate with a QA
               | specialist to define the test cases and review the test
               | implementation.
        
               | ponector wrote:
               | In theory yes.
               | 
                | In practice, devs will not write good tests for
                | their features, and QA will be kept away from
                | committing to production code.
                | 
                | Btw, if it is just code, why can't developers
                | implement features without bugs? There would be no
                | need for QA in such an ideal world.
        
             | VHRanger wrote:
              | You see that with devops often. When they are
              | incentivized to block stuff instead of enabling the
              | product, they become the roadblock team.
              | 
              | It's largely why Google has SREs instead of devops
              | people.
        
         | ponector wrote:
         | I've seen developers writing tests with no assertions, or with
         | assertTrue(true). Always green, with 100% coverage!
         | 
         | The same people have been asking: why should I write tests if I
         | can write new features?
         | 
         | And then one senior QA comes and destroys everything.
         | 
         | Once I found that if I pressed F5 50 times within a
         | minute, the backend would run out of memory while spinning
         | up requests to the database.
        
           | kcb wrote:
            | That's why the dev team's tests and the QA team's tests
            | are not mutually exclusive.
        
           | feoren wrote:
           | > I've seen developers writing tests with no assertions
           | 
            | This _can_ be OK if the code executing without throwing
            | exceptions is itself testing something, e.g. if you have
            | a lot of assertions written directly into the code as
            | pre- or postconditions. But I'm guessing that wasn't the
            | case here.
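That caveat, an assertion-free test still meaning something when the code carries its own invariant checks, can be sketched like this. The transfer function and its invariants are hypothetical:

```python
# Pre/postconditions embedded in the code under test make even an
# assertion-free smoke test meaningful: merely executing the function
# exercises the checks. All names here are invented for illustration.

def transfer(balances: dict, src: str, dst: str, amount: int) -> None:
    assert amount > 0, "precondition: amount must be positive"
    total_before = sum(balances.values())
    balances[src] -= amount
    balances[dst] += amount
    # Postcondition: money is moved, never created or destroyed.
    assert sum(balances.values()) == total_before

def test_transfer_smoke():
    # No explicit assertion here, yet "it didn't throw" is a real
    # (if weak) result, because the embedded invariants did the checking.
    transfer({"a": 100, "b": 0}, "a", "b", 30)
```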
        
         | mikestew wrote:
         | _I can 't count how many products I've seen with 100% code
         | coverage that objectively, quantifiably doesn't work._
         | 
         | That's because code coverage doesn't find the bugs that result
         | from code you didn't write, but should have. Code coverage is
         | but _one_ measure, and to treat it as _the_ measure is folly.
         | 
         | (But, yes, I have heard a test manager at a large software
         | company we've all heard of declare that test team was done
         | because 100% coverage.)
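The point that coverage cannot flag code you never wrote can be made concrete. The parser and its silent duplicate-key bug below are invented for illustration:

```python
# Coverage measures the code you wrote, not the code you should have
# written. This parser is fully exercised by the checks below, yet the
# missing duplicate-key handling is a bug no coverage tool can mark red.

def parse_pairs(text: str) -> dict:
    """Parse 'k=v' pairs separated by ';'."""
    result = {}
    for chunk in text.split(";"):
        key, _, value = chunk.partition("=")
        result[key.strip()] = value.strip()
    return result

# This check executes every line of parse_pairs: 100% line coverage.
assert parse_pairs("a=1; b=2") == {"a": "1", "b": "2"}

# The bug lives in code that doesn't exist: a repeated key is silently
# accepted and the first value is lost, but there is no unwritten
# validation line here for a coverage report to flag.
assert parse_pairs("a=1; a=2") == {"a": "2"}
```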
        
       | RedShift1 wrote:
       | Now just to convince msft of this fact because their shit's been
       | breaking left and right and all over the place.
        
       | temuze wrote:
       | I strongly disagree.
       | 
       | I worked at a company with a world-class QA team. They were
       | amazing and I can't say enough nice things about them. They were
       | comprehensive and professional and amazing human beings. They had
       | great attention to detail and they catalogued a huge spreadsheet
       | of manual things to test. Engineers loved working with them.
       | 
       | However -- the end result was that engineers got lazy. They were
       | throwing code over to QA while barely testing code themselves.
       | They were entirely reliant on manual QA, so every release bounced
       | back and forth several times before release. Sometimes, we had
       | feature branches being tested for months, creating HUGE merge
       | conflicts.
       | 
       | Of course, management noticed this was inefficient, so they
       | formed another team dedicated to automated QA. But their coverage
       | was always tiny, and they didn't have resources to cover every
       | release, so everyone wanted to continue using manual QA for CYA
       | purposes.
       | 
       | When I started my own company, I hired some of my old engineering
       | coworkers. I decided to not hire QA at all, which was
       | controversial because we _loved_ our old QA team. However, the
       | end result was that we were much faster.
       | 
       | 1. It forced us to invest heavily on automation (parallelizing
       | the bejesus out of everything, so it runs in <15min), making us
       | much faster
       | 
       | 2. Engineers had a _lot_ of motivation to test things well
       | themselves because there was no CYA culture. They couldn't throw
       | things over a wall and wash their hands of any accountability.
       | 
       | We also didn't have a lack of end-to-end tests, as the author
       | alludes to. Almost all of our tests were functional /
       | integration tests that ran on top of a docker-compose setup
       | that simulated production pretty well. After all, are unit
       | tests where you mock
       | every data source helpful at all? We invested a lot of time in
       | making realistic fixtures.
       | 
       | Sure, we released some small bugs. But we never had huge, show
       | stopping bugs because engineers acted as owners, carefully
       | testing the worst-case scenarios themselves.
       | 
       | The only downside was that we were slower to catch subtle,
       | not-caught-by-Sentry bugs, things like UX transition
       | weirdness. But that was mostly okay.
       | 
       | Now, there is still a use case for manual QA -- it's a question
       | of risk tolerance. However, most applications don't fit that
       | category.
        
         | mixmastamyk wrote:
         | False dichotomy. Poor dev practice is not fixed by elimination
         | of QA, but rather by improving dev practice. The "five
         | whys" can help.
        
       | bgribble wrote:
       | I was lucky enough to work in a small eng team with 1 full-time
       | dedicated QA person. One of the very few coworkers from my long
       | career that I have really tried hard to poach away from whatever
       | they were doing after our shared workplace went bust.
       | 
       | Yes, part of the job was to write and run manual test suites, and
       | to sign off as the DRI that a certain version had had all the
       | automated and manual tests pass before release.
       | 
       | But their main value was in the completely vague mandate "get in
       | there and try to break it." Having someone who knows the system
       | and can really dig into the weak spots to find problems that devs
       | will try to handwave away ("just one report of the problem?
       | probably just some flaky browser extension") is so valuable.
       | 
       | In my current job, I have tried for 5+ years to get leadership to
       | agree to a FT QA function. No dice. "Developers should test their
       | own code." Yeah and humans should stop polluting the ocean and
       | using fossil fuels, how's that going?
        
         | ncphil wrote:
         | "Developers should test their own code" is emblematic of a
         | juvenile mindset in people who regularly fire up their "reality
         | distortion field" to avoid the effort of educating themselves
         | on their own operations (and that helps them deny
         | responsibility when things go South). As W. Edwards Deming,
         | bane of all "gut instinct" executives, once wrote, "The
         | consumer is the most important part of the production line.
         | Quality should be aimed at the needs of the consumer, present
         | and future." The lack of a dedicated quality team shows a lack
         | of respect for your customers. You know, the people you need to
         | buy your products or services (unless you're intent on living
         | off VC loans until you have to pull the ripcord on your golden
         | parachute).
        
       | notnmeyer wrote:
        | i never felt that devops and qa were at odds the way that this
        | article suggests they are. in my experience nobody wants to, or
        | knows how to, run QA correctly, so the org shoots themselves in
        | the foot and does one of two things:
       | 
        | 1. hire a contractor who just has no idea about anything.
        | 
        | 2. hire someone and place them outside the engineering org (on
        | the support team as a "support engineer" seems pretty popular)
        | where they have little to no interaction with either engineering
        | _or_ customers and expect them to work miracles.
        
       | Ensorceled wrote:
       | I hired a great QA Developer a few months ago. They are building
       | out integration, performance and end to end tests; finding bugs,
       | speeding up releases and generally making things better.
       | 
       | I get asked, every week, if they are contributing and what are
       | they contributing.
       | 
       | It's exhausting, so I can't imagine what it feels like to
       | actually BE in QA.
        
         | warkdarrior wrote:
          | # of bugs found per week should be a sufficient metric of
          | productivity.
        
       | l72 wrote:
       | At our small tech company, QA is elevated to a whole different
       | level. The QA lead(s) are involved in all product planning
        | meetings and develop the requirements with the product team. One
        | QA lead has a PhD and two have master's degrees! They know how
        | the application is supposed to work better than most of the
        | developers and play a big role throughout development. In my
        | opinion (as the person who leads the developers), this is how it
        | should work. They aren't some separate team we chuck stuff over
        | to at the end of the day.
        
       | natbennett wrote:
       | Strongly disagree with the literal premise of this post. The idea
       | of having a separate team with the mandate to "ensure quality"
       | was always ridiculous. Occasionally it was helpful by accident
       | but it was always ridiculous. "Quality" isn't something you can
       | bake in afterwards by testing a bunch.
       | 
        | Getting rid of everyone with testing expertise, and treating
        | testing expertise as inherently less valuable than
        | implementation expertise? Sure, you could convince me that was a
        | bad idea.
        
         | spinningD20 wrote:
         | Doing every quality activity "after the fact" I agree is the
         | issue. That's the root of the problem you're seeing, not that
         | there was a separate quality team.
        
       | ThalesX wrote:
       | I am currently working with a startup that spends a lot of time
       | on building tests that need to be refactored every sprint because
        | it's early stage and the assumptions change. I am shocked at the
        | amount of developer-hours spent on tests that need to be
        | disabled / deleted instead of just hiring 1-2 super cheap manual
        | testers who just go through the flows day in and day out.
        | 
        | For me it's a no-brainer: if I were CEO / CTO, until product-
        | market fit is achieved and exponential growth is visible, I'd
        | just outsource QA and that's that.
        
         | spinningD20 wrote:
         | When outsourced, you either A) rely on someone in your org to
         | tell them what to test and what the workflows are, ie use them
         | as a warm body/monkey to click on things for you - this is what
         | most people see QA as, which is silly - or B) you rely on the
         | outsourced QA to know your product and know what is important
         | or what all of the edge cases are.
         | 
         | If your product is non-trivial in size or scope, ie it is not a
         | cookie-cutter solution, then the testing of your product will
         | also be non-trivial if you want it to work and have a good
         | reputation (including during those all-important live demos,
         | poc's, etc).
         | 
          | QA does not mean "click on things and go through the happy
          | path and everything is fine" - not saying you are implying
          | that, but gosh the number of companies that think it's child's
          | play is crazy.
        
       | amaterasu wrote:
       | Ignoring the common trope that developers are bad testers (I am,
       | but not all devs are), QA presence allows teams to move faster by
       | reducing the developer test burden to automated regression, and
       | developer acceptance testing only. Good QA can often assist with
       | those tasks too, further improving team velocity. Also, moving
       | tasks to people who specialise in them is not usually a poor
       | decision.
       | 
       | The best way I've found to sell QA to management (especially
       | sales/marketing/non-technical management), is to redefine them as
       | marketing. QA output is as much about product and brand
       | reputation management as finding bugs. IMO, nothing alienates
       | customers faster than bugs, and bad experiences result in poor
       | reputation. Marketing and sales people can usually assign value
       | to passive marketing efforts, and recognise things that are
       | damaging to retention and future sales.
        
       | pjsg wrote:
       | At the start of my career (late 70s), I worked at IBM (Hursley
       | Park) in Product Assurance (their version of QA). We wrote code
       | and built hardware to test the product that was our area of
       | responsibility (it was a word processing system). We wrote test
       | cases that our code would drive against the system under test.
       | Any issues we would describe in general terms to the development
       | team -- we didn't want them to figure out our testcases -- we
       | wanted them to fix the bugs. Of course, this meant that we would
       | find (say) three bugs in linewrapping of hyphenated words and the
       | use of backspace to delete characters, and then the development
       | team would fix four bugs in that area _but_ only two of the
       | actual bugs that we had found. This meant that you could use
       | fancy statistics to estimate the actual number of bugs left.
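The "fancy statistics" here is essentially capture-recapture estimation, the same trick ecologists use to count fish in a lake. A minimal sketch using the comment's own approximate numbers; treating QA's found bugs and dev's independently-fixed bugs as two samples is the key assumption:

```python
# Capture-recapture (Lincoln-Petersen) estimate of total bugs in an area.
# QA "captures" one sample of bugs; the dev team's independent fixes form
# a second sample. The size of the overlap lets us estimate the total.

def estimate_total_bugs(found_by_qa: int, fixed_by_dev: int, overlap: int) -> float:
    """Lincoln-Petersen estimator: N ~ (n1 * n2) / m."""
    if overlap == 0:
        raise ValueError("no overlap between samples; estimate is unbounded")
    return (found_by_qa * fixed_by_dev) / overlap

# The scenario above: QA found 3 bugs, dev fixed 4, but only 2 matched.
print(estimate_total_bugs(3, 4, 2))  # -> 6.0, i.e. roughly 6 bugs in that area
```

The estimate is only as good as the independence assumption, but as a cheap sanity check on "how much is left?" it is remarkably effective.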
       | 
       | When I've worked for organizations without QA teams, I introduce
       | the concept of "sniff tests". This is a short (typically 1 hour)
       | test session where anybody in the company / department is
       | encouraged to come and bash on the new feature. The feature is
       | supposed to be complete, but it always turns out that the edge
        | cases just don't work. I've been in these test sessions where
        | we've generated 100 bug tickets in an hour (many duplicates).
       | I like putting "" into every field and pressing submit. I like
       | trying to just use the keyboard to navigate the UI. I run my
        | system with larger fonts by default. I sometimes run my browser
        | at 110% zoom. It used to be surprising how often these simple
        | tests would lead to problems. I'm not surprised any more!
        
         | jxramos wrote:
         | that's very interesting to hide the source of the automated
         | tests from the developers as a strategy. I can see that
         | shifting the focus to not just disabling the test or catering
         | to the test etc. I'll have to think about this one, there's
         | some rich thoughts to meditate on with this one.
        
           | ansible wrote:
           | It is an interesting approach I hadn't heard of before. For
           | complex systems though, often reproducing the bug reliably is
           | a large part of the problem. So giving the developers the
           | maximum information is necessary.
           | 
           | Any time a "fix" is implemented, _someone_ needs to be asking
           | the right questions. Can this type of problem occur in other
           | features  / programs? What truly is the root cause, and how
           | has that been addressed?
        
         | pavel_lishin wrote:
         | > _When I 've worked for organizations without QA teams, I
         | introduce the concept of "sniff tests". This is a short
         | (typically 1 hour) test session where anybody in the company /
         | department is encouraged to come and bash on the new feature._
         | 
         | We call those bug-bashes where we work, and they're also
         | typically very productive in terms of defects discovered!
         | 
         | It's especially useful since during development of small
         | features, it's usually just us programmers testing stuff out,
         | which may not actually reflect how the _end users_ will use our
         | software.
        
           | steveBK123 wrote:
           | A good QA person is basically a personification of all the
           | edge cases of your actual production users. Our good QA
           | person knew how human users used our app better than the dev
           | or product team. It was generally a competition between QA &
           | L2 support as to who actually understood the app best.
           | 
           | The problem with devs testing their own & other devs code is
           | that we test what we expect to work in the way we expect the
           | user to use it. This completely misses all sorts of
           | implementation error and edge cases.
           | 
           | Of course the dev tests the happy path they coded.. that's
           | what they thought users would do, and what they thought users
           | wanted! Doesn't mean devs were right, and frequently they are
           | not..
        
             | justinator wrote:
             | This dude gets it.
        
           | danny_taco wrote:
           | Maybe we work in the same company. I'd like to add that
          | usually the engineer responsible for the feature being bug-
          | bashed is also responsible for refining the document where
           | everyone writes the bugs they find since a lot are
           | duplicates, existing bugs, or not bugs at all. The output is
           | then translated into Jira to be tackled before (or after) a
           | release, depending on the severity of the bugs found.
        
         | wrs wrote:
         | At Microsoft back in the day, we called those "bug bashes", and
         | my startup inherited the idea. We encouraged the whole company
         | to take an afternoon off to participate, and gave out awards
         | for highest impact bug, most interesting bug, etc.
        
           | hornban wrote:
           | This is a bit of an aside, but I have a question that I'd
           | like to ask the wider community here. How can you do a proper
           | bug-bash when also dealing with Scrum metrics that result in
           | a race for new features without any regard for quality? I've
            | tried to do this with my teams several times, but we always
            | come down to the end of the sprint with too much left to do
            | on features, so anybody who "takes time off" to do bug
            | bashing looks bad because they ultimately complete fewer
            | story points than those who don't.
           | 
           | Is the secret that it only works if the entire company does
           | it, like you suggest?
           | 
           | And yes, I completely realize that Scrum is terrible. I'm
           | just trying to work within a system.
        
             | debatem1 wrote:
             | The team with the lowest bug bash participation this week
             | is the victim err host of next week's bug bash.
        
             | m4rtink wrote:
              | Seems like another data point suggesting that sprints
              | don't make sense in real-world projects?
        
             | bigbillheck wrote:
             | Only assign points based on a (n-1)-day sprint instead of a
             | n-day one.
        
             | Shaanie wrote:
             | That's not a problem with Scrum, it's a problem with your
             | team. If you're doing a bug bash every sprint, then your
             | velocity is already including the time spent on bug bashes.
             | If it's not in every sprint, you can reduce the forecast
             | for sprints where you do them to account for it (similar to
             | what you do when someone is off etc).
             | 
             | If you're competing within the team to complete as many
             | story points as possible that's pretty weird. Is someone
             | using story points as a metric of anything other than
             | forecasting?
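The forecast adjustment being described is just arithmetic. A sketch with invented numbers: discount the sprint's capacity for the bash the same way you would for someone being on vacation:

```python
# Hypothetical sprint-forecast adjustment for a bug bash. All numbers
# are invented; the point is that the bash is budgeted up front rather
# than silently eaten out of feature time.
team_size = 5                     # people on the team
sprint_days = 10                  # working days in the sprint
points_per_person_day = 1.2       # derived from past velocity
bash_cost = 0.5 * team_size       # the whole team takes one afternoon off

capacity_days = team_size * sprint_days - bash_cost
forecast = capacity_days * points_per_person_day

print(round(forecast, 1))  # -> 57.0 points, down from the usual 60.0
```

Once the 57 (not 60) is what's committed, nobody "looks bad" for spending the afternoon bashing.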
        
         | jrockway wrote:
         | I've always been impressed by hardware QA test teams I've
         | worked with. On Google Fiber, they had an elaborate lab with
         | every possible piece of consumer electronics equipment in
         | there, and would evaluate every release against a (controlled)
         | unfriendly RF environment. ("In version 1.2.3.4, the download
         | from this MacBook Pro while the microwave was running was
         | 123.4Mbps, but in version 1.2.4.5, it's 96.8Mbps." We actually
         | had a lot of complexity beyond this that they tested, like
         | bandsteering, roaming, etc.) I was always extremely impressed
         | because they came up with test cases I wouldn't have thought
         | of, and the feedback to the development team was always
         | valuable to act on. If they're finding this issue, we get pages
         | of charts and graphs and an invite to the lab. If a customer
         | finds this issue, it just eats away at our customer
         | satisfaction while we guess what could possibly have changed.
         | Best to find the issue in QA or development.
         | 
         | As for software engineers handling QA, I'm very much in favor
         | of development teams doing as much as possible. I often see
         | tests bolted on to the very end of projects, which isn't going
         | to lead to good tests. I think that software engineers are
         | missing good training on what to be suspicious of, and what
         | best practices are. There are tons of books written on things
         | like "how to write baby's first test", but honestly, as an
         | industry, we're past that. We need resources on what you should
         | look out for while reviewing designs, what you should look out
         | for while reviewing code, what should trigger alarm bells in
         | your head while you're writing code.
         | 
         | I'm always surprised how I'll write some code that's weird, say
         | to myself "this is weird", and then immediately write a test to
         | watch it change from failing to passing. Like times when you're
         | iterating over something where normally the exit condition is
         | "i < max", but this one time, it's different, it actually has
         | to be "i <= max". I get paranoid and write a lot of tests to
         | check my work. Building that paranoia is key.
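That off-by-one paranoia can be made concrete. A toy sketch (the function and scenario are invented for illustration) of the "this one time it's i <= max" case, with the tests written the moment the code looked weird:

```python
# Hypothetical inclusive-bound case: a paginated API numbers its pages
# 1..last_page, and last_page itself must be fetched -- so the range()
# needs last_page + 1, unlike the usual exclusive bound.

def pages_to_fetch(last_page: int) -> list[int]:
    # Normally the exit condition is exclusive (i < max); here it has to
    # be inclusive (i <= max), which is exactly the kind of "weird" that
    # deserves an immediate test.
    return list(range(1, last_page + 1))

# Paranoia tests: these fail with range(1, last_page), pass with + 1.
assert pages_to_fetch(1) == [1]
assert pages_to_fetch(3) == [1, 2, 3]
assert pages_to_fetch(0) == []  # edge case: nothing to fetch
```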
         | 
         | > I like putting "" into every field and pressing submit.
         | 
         | Going deeper into the training aspect, something I find very
         | useful are fuzz tests. I have written a bunch of them and they
         | have always found a few easy-to-fix but very-annoying-to-users
         | bugs. I would never make a policy like "every PR must include a
         | fuzz test", but I think it would be valuable to tell new hires
         | how to write them, and why they might help find bugs. No need
         | to have a human come up with weird inputs when your idle CI
         | supercomputer can do it every night! (Of course, building that
         | infrastructure is a pain. I run them on my workstation when I
         | remember and it interests me. Great system.)
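The smallest possible version of this is a seeded random-input loop. Everything below (the function under test, the crash policy) is invented for illustration; real fuzzers like AFL, libFuzzer, or Hypothesis are far smarter about input generation, but even this shape catches the easy-to-fix, annoying-to-users bugs:

```python
import random
import string

def parse_quantity(text: str) -> int:
    """Toy function under test: parse a non-negative integer quantity."""
    stripped = text.strip()
    if not stripped.isdigit():
        raise ValueError(f"not a quantity: {text!r}")
    return int(stripped)

def fuzz(iterations: int = 10_000, seed: int = 0) -> int:
    """Throw random printable strings at the parser; count unexpected crashes."""
    rng = random.Random(seed)  # seeded, so any failure is reproducible
    crashes = 0
    for _ in range(iterations):
        s = "".join(rng.choice(string.printable)
                    for _ in range(rng.randint(0, 20)))
        try:
            parse_quantity(s)
        except ValueError:
            pass            # expected rejection of bad input
        except Exception:
            crashes += 1    # anything else is a bug worth a ticket
    return crashes

print(fuzz())  # -> 0 crashes for this toy function
```

Seeding the RNG is the part worth stealing: when the nightly run finds a crash, the same seed replays it on demand.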
         | 
         | At the end of the day, I'm somewhat disappointed in the
         | standards that people set for software. To me, if I make
         | something for you and it blows up in your hands... I feel
         | really shitty. So I try to avoid that in the software world by
         | trying to break things as I make them, and ensure that if
         | you're going to spend time using something, you don't have a
          | bad experience. I think it's rare, and it shouldn't be; it
          | should be something the organization values from top to
          | bottom. I suppose the market doesn't incentivize quality as
          | much as it should, and as a result organizations don't value
          | it as much as they should. But wouldn't it be nice to be the one
         | software company that just makes good stuff that always works
         | and doesn't require you to have 2 week calls with the support
         | team? I'd buy it. And I like making it. But I'm just a weirdo,
         | I guess.
        
         | JonChesterfield wrote:
         | > This meant that you could use fancy statistics to estimate
         | the actual number of bugs left.
         | 
         | That's very clever. Precise test case in QA plus vague
         | description given to dev. Haven't seen it before, thank you for
         | sharing that insight.
        
       | pavel_lishin wrote:
       | Speaking of QA, using AI to generate comic-book-style
       | illustrations is great until one of your heroes has 6 and 8
       | digits per hand.
       | 
       | At least with the comic style, you could plausibly say that
       | that's canon to her character.
        
       | w10-1 wrote:
       | For QA to be respected and protected, it has to identify what
       | it's responsible for.
       | 
       | Luckily, that's easy: the "fault model", all the ways things can
       | break. That tends to be a lot more complex than the operating
       | model, the domain model, or the business model.
       | 
       | Once all the potential issues and associated costs for all the
       | fault models are enumerated, then QA can happily offer to any
       | other organization the responsibility for each one, and see who
       | steps up to take it on.
       | 
       | In many cases, it can be done more cheaply in design,
       | engineering, or automation; it's usually easier to prevent a
       | problem than capture, triage, debug, fix, and re-deploy.
       | 
       | Organizations commonly make the mistake of being oblivious to the
       | fault models and failing to allocate responsibility. That's
       | possible because most failures are rare, and the link from
       | consequences back to cause is often unclear. The responsibility
       | allocation devolves to blame, and blame to "who touched this
       | last"? But catastrophic feedback is a terrible way to learn, and
       | chronic irritants are among the best ways to lose customers and
       | staff.
        
         | bumby wrote:
         | I agree with you, but have the hunch that many PMs don't.
         | 
         | > _it 's usually easier to prevent a problem than capture,
         | triage, debug, fix, and re-deploy._
         | 
         | It really depends on the risk of the fault. To a PM under
         | schedule pressure, the higher risk may be to break schedule in
         | order to redesign to mitigate the fault. As you said, many
         | failures are low probability, so PMs are used to rolling the
         | dice and getting away with it. Often they've moved on before
         | those failures rear their ugly heads.
         | 
         | An organization really needs the processes that establish
         | guardrails against these biases. Establishing requirements to
         | use the tools to define the fault model can go a long way,
         | although I've seen people get away with turning a blind eye to
         | those requirements as well. You also need to mate it with
         | strong accountability.
        
       | nlavezzo wrote:
       | When we built FoundationDB, we had a maniacal focus on quality
       | and testing. So much so that we built it in a language we
       | invented, called Flow, that allowed us to deterministically
       | simulate arbitrary sized FDB clusters and subject them to crazy
       | conditions, then flag each system property violation and be able
       | to perfectly reproduce the test run that triggered the violation.
       | 
        | We got to a point where, by default, all of our tens of
        | thousands of nightly test runs would flash green if no new code
        | was introduced. Tests that flashed red were almost always due to
        | recent code additions, and therefore easily identified and
        | fixed.
       | It let our team develop knowing that any bugs they introduced
       | would be quickly caught, and this translated to being able to
       | confidently take on crazy new projects - like re-writing our
       | transaction processing system post-launch and getting a 10x speed
       | increase out of it.
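Flow is a real C++ actor language; as a loose illustration of the underlying idea (not FDB's actual mechanism), here is the shape of seed-driven deterministic simulation, where every source of nondeterminism flows from one RNG so a failing seed is a perfect repro:

```python
import random

def simulate(seed: int, ops: int = 100) -> bool:
    """Run one toy 'cluster' simulation; return True if the invariant held.

    All nondeterminism (here, random node crashes) is drawn from a single
    seeded RNG, so simulate(seed) always replays the exact same run.
    """
    rng = random.Random(seed)
    replicas = {0, 1, 2}                  # toy 3-node cluster
    committed = set()
    for op in range(ops):
        if rng.random() < 0.05:           # fault injection: crash a node
            replicas.discard(rng.choice([0, 1, 2]))
        if len(replicas) >= 2:            # writes need a quorum
            committed.add(op)
    # invariant: every op committed, unless quorum was genuinely lost
    return all(op in committed for op in range(ops)) or len(replicas) < 2

def nightly(seeds: range = range(10_000)) -> list[int]:
    """Each returned seed is a bit-for-bit reproducible failing run."""
    return [s for s in seeds if not simulate(s)]
```

The payoff is the replay property: `simulate(failing_seed)` reproduces the violation exactly, with no flaky-repro hunting.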
       | 
       | In the end our focus on quality led to velocity - they weren't
       | mutually exclusive at all. We don't think this is an isolated
       | phenomenon, which led us to our newest project - but that's a
       | story for another time.
        
       | rychco wrote:
       | Does Medium automatically insert these AI generated images into
       | every article now, or is that just the popular thing to do?
        
       | KaiserPro wrote:
       | So for me, the QA team is the best source of product information
       | in the entire engineering team. If they also do customer triage,
       | then probably in the entire company.
       | 
       | They should know the product inside out, moreover, they know all
       | the annoying bits that are unsexy and not actively developed.
       | 
       | yes, they find bugs and junk, but, they know how your product
       | should be used, and the quickest/easiest way to use it. Which are
       | often two different paths.
       | 
       | Bring your QA in the product cycle, ask them what the stuff that
       | pisses them off the most.
       | 
       | They also should be the masters of writing clear and precise
       | instructions, something devs and product owners could learn from.
        
       | guhcampos wrote:
       | " To these folks, it feels like giving a damn is a huge career
       | liability in your organization. Because it is."
       | 
       | And it's easy to see why.
       | 
        | Software Quality, Code Maintainability, Good Design. These
        | things only matter if you are planning to work at that company
        | for a long time. If you're planning to stay a couple years then
        | hop to the next company, the optimal path is to rise fast by
        | doing high-visibility work, then use your meteoric rise as
        | resume material to get a higher-paying job. Rinse and repeat. If
        | that project is going to break or become unmaintainable in a
        | couple years, who cares? You're not going to be there.
       | 
       | Recognize the pattern? Startups work the same. It's the "growth
       | mindset" imprinted everywhere. If this product becomes
       | unmaintainable in 5 years, who cares? I will have exited and
       | cashed in.
       | 
       | I don't judge people who do that exactly because it's the
       | practice the companies themselves use. I don't like it, I
       | actually hate it, but I understand people are just playing by the
       | rules.
       | 
       | The fun part is watching managers and executives complaining
       | about employee turnover, lack of company engagement, quiet
       | quitting, like this isn't them tasting their own poison.
        
         | BoxFour wrote:
         | > Startups work the same. If this product becomes
         | unmaintainable in 5 years, who cares?
         | 
          | This is a reasonable stance for a startup to take. The
          | majority of startups simply won't last five years.
         | 
         | Being alive in five years with technical debt is a good problem
         | for most startups to have, because that means they managed to
         | make it five years.
        
       ___________________________________________________________________
       (page generated 2023-12-14 23:00 UTC)