[HN Gopher] Ask HN: How do you keep track of software requiremen...
       ___________________________________________________________________
        
       Ask HN: How do you keep track of software requirements and test
       them?
        
        I'm a junior dev who recently joined a small team that doesn't
        seem to have much in the way of tracking requirements and how
        they're being tested, and I was wondering if anybody has
        recommendations.  Specifically, I would like to track what the
        requirements/specifications are and how we'll test to make sure
        they're met, which I suppose could be a mix of unit and
        integration/regression tests? Honestly, though, if this is the
        wrong track to take, I'd appreciate feedback on what we could
        be doing instead.  I used IBM Rational DOORS at a previous job
        and thought it really helped with this, but with a small team I
        don't think it's likely they'll spring for it. Are there open
        source options out there, or something else that's easy? I
        thought we could maybe keep track in a spreadsheet (to mimic
        DOORS?) or some other file, but I'm sure there would be issues
        with that as we added to it. Thanks for any feedback!
        
       Author : lovehatesoft
       Score  : 139 points
       Date   : 2022-04-19 13:55 UTC (9 hours ago)
        
       | a_c wrote:
        | It depends on where you are in your career and what the industry
        | offers at the time.
        | 
        | For requirements, use any kind of issue tracker and connect your
        | commits with issues. People here hate Jira for various reasons,
        | but it gets the job done. Otherwise GitHub Issues would work
        | (there are problems with GitHub Issues, e.g. no cross-repo issue
        | tracking in a single place, but that's another story).
        | 
        | For QA, you want QA to be part of the progress tracking and have
        | it reflected in Jira/GitHub commits.
        | 
        | One thing I think is of equal importance, if not more, is how the
        | code you delivered is used in the wild. Some sort of analytics.
        | 
        | Zoom out a bit: a requirement is what you THINK the user wants.
        | QA is about whether your code CAN do what you think the user
        | wants, plus some safeguards. Analytics is how the user actually
        | behaves in the real world.
        | 
        | A bit off topic here, but QA and analytics are really two sides
        | of the same coin. Yet people treat them as two different domains,
        | with two sets of tools. On one hand, requirements are verified
        | manually through hand-crafted test cases. On the other hand,
        | production behavioural insight is not transformed into future
        | dev/test cases effectively. It is still done manually, if at all.
        | 
        | Think about how many times a user wanders into an untested,
        | undefined interaction that escalates into a support ticket. I'm
        | building a single tool to bridge the gap between product (the
        | requirements and production phases) and quality (testing)
        
       | PebblesHD wrote:
        | Given the large, monolithic, legacy nature of our backend, we
        | use JIRA for feature tracking, and each story gets a
        | corresponding functional test implemented in CucumberJS, with
        | the expectation that once a ticket is closed as complete, its
        | test is already part of 'the test suite' we run during releases.
        | Occasionally the tests flake (it's all just WebDriver under the
        | hood), so they require maintenance, but covering the entire
        | codebase with manual tests, even well-documented ones, would
        | take days, so this is by far our preferred option.
        
         | PebblesHD wrote:
         | As a bonus, we run the suite throughout the day as a sort of
         | canary for things breaking upstream, which we've found to be
          | almost as useful as our other monitoring as far as signalling
          | failures goes.
        
       | adrianmsmith wrote:
       | I think it's important to keep requirements in Git along with the
       | source code. That way when you implement a new feature you can
       | update the requirements and commit it along with the code
       | changes. When the PR is merged, code and requirements both get
       | merged (no chance to forget to update e.g. a Confluence
       | document). Each branch you check out is going to have the
       | requirements that the code in that branch is supposed to
       | implement.
       | 
       | For simple microservice-type projects I've found a .md file, or
       | even mentioning the requirements in the main README.md to be
       | sufficient.
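        | 
        | For example, something as small as this in a README.md can
        | work (the wording and IDs are invented for illustration):
        | 
        |       ## Requirements
        | 
        |       * REQ-1: Message delivery must be reliable
        |         (at-least-once); this matters more than cost.
        |       * REQ-2: Non-requirements: cloud-independence and
        |         cost are explicitly not priorities.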
       | 
       | I think it's important to track requirements over the lifetime of
       | the project. Otherwise you'll find devs flip-flopping between
       | different solutions. E.g. in a recent project we were using an
       | open-source messaging system but it wasn't working for us so we
       | moved to a cloud solution. I noted in the requirements that we
        | wanted a reliable system, and that cost and cloud-independence
        | were not important requirements. Otherwise, in two years, if
        | I'm gone and
       | a new dev comes on board, they might ask "why are we using
       | proprietary tools for this, why don't we use open source" and
       | spend time refactoring it. Then two years later when they're gone
       | a new dev comes along "this isn't working well, why aren't we
       | using cloud native tools here"....
       | 
        | Also important to add things that _aren't_ requirements, so that
       | you can understand the tradeoffs made in the software. (In the
       | above case, for example, cost wasn't a big factor, which will
       | help future devs understand "why didn't they go for a cheaper
       | solution?")
       | 
       | Also, if there's a bug, is it even a bug? How do you know if you
       | don't know what the system is supposed to do in the first place?
       | 
       | Jira tickets describe individual changes to the system. That's
       | fine for a brand new system. But after the system is 10 years
       | old, you don't want to have to go through all the tickets to work
       | out what the current desired state is.
        
         | asabla wrote:
         | I really like this idea.
         | 
          | However, what would be missing from this is the discussion
          | around each requirement. Or would you want to include that
          | as well?
          | 
          | It would be nice to have dedicated directories for
          | requirements, src, infra, tests and docs, which I think would
          | make things easier to track over a long period of time.
        
       | oumua_don17 wrote:
       | Gitlab does have requirements management integrated but it's not
       | part of the free tier.
       | 
       | [1] https://docs.gitlab.com/ee/user/project/requirements/
        
         | onion2k wrote:
         | I just tried that feature. I added a requirement. I could only
         | add a title and description, which wasn't great. The
         | requirement appeared in the Issues list, which was a bit odd,
         | and when I closed the issue the requirement disappeared from
         | the requirements list.
         | 
         | Whatever that feature is meant to be, it definitely isn't
         | requirements management. Requirements don't stop being
         | requirements after you've written the code.
        
           | boleary-gl wrote:
            | GitLab employee here - we list Requirements Management as
            | being at "minimal" maturity. I'm sure the team would love to
            | hear more about why the feature didn't work for you - you
            | can learn more about the direction and the team here:
            | https://about.gitlab.com/direction/plan/certify/#requirement...
        
       | pluc wrote:
        | JIRA? Confluence? *ducks*
        
       | throwawayForMe2 wrote:
       | If you are interested in a formal approach, Sparx Enterprise
       | Architect is relatively inexpensive, and can model requirements,
       | and provide traceability to test cases, or anything else you want
       | to trace.
        
       | csours wrote:
       | You may be aware of this, but this is as much a social/cultural
       | discussion as it is a technical discussion.
       | 
       | Regarding requirements - they are always a live discussion, not
       | just a list of things to do. Do not be surprised when they
       | change, instead plan to manage how they change.
       | 
       | Regarding testing - think of testing as headlights on a car; they
       | show potential problems ahead. Effectively all automated testing
       | is regression testing. Unit tests are great for future developers
       | working on that codebase, but no amount of unit tests will show
       | that a SYSTEM works. You also need integration and exploratory
       | testing. This isn't a matter of doing it right or wrong, it's a
       | matter of team and technical maturity.
       | 
       | A bug is anything that is unexpected to a user. I'm sure this
       | will be controversial, and I'm fine with that.
        
       | flyingfences wrote:
       | In a safety-critical industry, requirements tracking is very
       | important. At my current employer, all of our software has to be
       | developed and verified in accordance with DO-178 [0]. We have a
       | dedicated systems engineering team who develop the system
       | requirements from which we, the software development team,
       | develop the software requirements; we have a dedicated software
       | verification team (separate from the development team) who
       | develop and execute the test suite for each project. We use
       | Siemens's Polarion to track the links between requirements, code,
       | and tests, and it's all done under the supervision of an in-house
       | FAA Designated Engineering Representative. Boy is it all tedious,
       | but there's a clear point to it and it catches all the bugs.
       | 
       | [0] https://en.wikipedia.org/wiki/DO-178C
        
         | alexfromapex wrote:
         | Just wanted to ask, this pretty much ensures you're doing
         | waterfall development, as opposed to agile, right?
        
           | orangepurple wrote:
           | If builders built buildings the way programmers write
           | programs, then the first woodpecker that came along would
           | destroy civilization. ~ Gerald Weinberg (1933-10-27 age:84)
           | Weinberg's Second Law
           | 
           | https://www.mindprod.com/jgloss/unmain.html
        
             | dragonwriter wrote:
             | > If builders built buildings the way programmers write
             | programs, then the first woodpecker that came along would
             | destroy civilization.
             | 
             | If builders built buildings the way programmers write
             | programs, we'd have progressed from wattle-and-daub through
             | wood and reinforced concrete to molecular nanotechnology
             | construction in the first two generations of humans
             | building occupied structures.
             | 
             | Bad analogy is bad because programs and buildings aren't
             | remotely similar or comparable.
        
               | teekert wrote:
               | Still I feel like your analogy is the better one, things
               | are moving very fast. With declarative infra and
               | reproducible builds you're pumping out high quality, well
               | tested buildings at record speeds.
        
               | spaetzleesser wrote:
               | On that path a lot of people would have died due to
               | building collapses and fires though.
        
             | ako wrote:
              | Programmers don't build, they design. It's more akin to
              | what building architects do in a CAD program. They go
              | through many iterations and changing specs.
        
           | airbreather wrote:
            | Actually, most functional safety projects use the V-model
            | (or similar; the topology can vary a little depending on
            | needs), which is waterfall laid out in a slightly different
            | way to show more clearly how verification and validation
            | close out all the way back to requirements, with high
            | degrees of traceability.
            | 
            | I've always wanted to break away from that approach toward
            | something a little more nimble, probably by use of tools -
            | but I can't see agile working in functional safety without
            | some very specific tools to assist, which I have yet to see
            | formulated and developed for anything at scale. Also, there
            | are key milestones where you really need to have everything
            | resolved before you start the next phase, so maybe sprints,
            | dunno.
            | 
            | The thing about doing waterfall/v-model is that, if it is
            | done correctly, there is little chance you get to the final
            | Pre-Start Safety Review/FSA 3 (or whatever you do before
            | introducing the hazard consequences to humans) and discover
            | a flaw that kicks you back 6 or 12 months in the
            | design/validation/verification process, while everyone else
            | stands around and waits because their bits are ready and
            | good to go, and now you are holding them all up. Not a happy
            | day if that occurs.
            | 
            | FS relies on a high degree of traceability and on testing
            | the software as it will be used (as best possible), in its
            | entirety.
            | 
            | So I'm not sure how agile could work in this context, or at
            | least past the initial hazard and risk/requirements
            | definition life cycle phases.
            | 
            | FS is one of those things where the progress you can claim
            | is really only as far along as your last lagging item in the
            | engineering sequence of events. The standard expects you to
            | close out certain phases before moving on to subsequent
            | ones. In practice it's a lot messier than that unless
            | extreme discipline is maintained.
            | 
            | (To give an idea of how messy it can get in reality, and how
            | you have to try and find ways to meet the traceability
            | expectations, sometimes in retrospect: on the last FS
            | project where I was responsible for design, we were 2.5
            | years in and still waiting for the owner to issue us their
            | safety requirements. We had to run on a guess and progress
            | speculatively. Luckily we were 95%+ correct with our guesses
            | when reconciled against the requirements that finally
            | arrived.)
            | 
            | But normally, racing ahead on some items is a little
            | pointless and likely counterproductive, unless you're just
            | prototyping a proof-of-concept system/architecture or doing
            | a similar activity. You just end up repeating work, you have
            | extra historical info floating around, and there's the
            | possibility that something that was almost right but is no
            | longer current gets sucked into play, etc. Document control
            | and revision control are always critical.
            | 
            | Background: I am a TUV certified FS Eng; I have
            | designed/delivered multiple safety systems, mainly to IEC
            | 61511 (process) or IEC 62061 (machinery).
        
             | arthurcolle wrote:
             | what does functional safety mean in the context you are
             | talking about? like fighter jets? or what?
        
           | gotstad wrote:
            | You don't have to, but it is very common to fall into the
            | trap.
           | 
           | If working within a safety-critical industry and wanting to
           | do Agile, typically you'll break down high-level requirements
           | into sw requirements while you are developing,
           | closing/formalizing the requirements just moments before
           | freezing the code and technical file / design documentation.
           | 
           | It's a difficult thing to practice agile in such an industry,
           | because it requires a lot of control over what the team is
           | changing and working on, at all times, but it can be done
           | with great benefits over waterfall as well.
        
           | maerF0x0 wrote:
           | Not sure about how parent concretely operates. But there's no
           | reason you cannot do Agile this way.
           | 
           | Agile iteration is just as much about how you carve up work
           | as how you decide what to do next. For example you could
           | break up a task into cases it handles.
           | 
           | > WidgetX handles foobar in main case
           | 
           | > WidgetX handles foobar when exception case arises (More
           | Foo, than Bar)
           | 
           | > WidgetX works like <expected> when zero WidgetY present
           | 
           | Those could be 3 separate iterations on the same software,
           | fully tested and integrated individually, and accumulated
           | over time. And the feedback loop could come internally as in
           | "How does it function amongst all the other requirements?",
           | "How is it contributing to problems achieving that goal?"
        
             | airbreather wrote:
              | For safety system software, most people I know would be
              | very nervous (as in, I'm outta here) about testing software
              | components and then not testing the end result as a whole;
              | just too many possible side effects could come into play,
              | including system-wide things that only reveal themselves
              | when the entire program is complete and loaded/running.
             | 
              | What you describe already occurs to some extent in the
              | process and machinery safety sector, where specialised PLC
              | programming languages are used. There is a type of
              | graphical coding called Function Block, where each block
              | can be a reusable function encapsulated inside a block
              | with connecting pins on the exterior, e.g. a
              | two-out-of-three voting scheme with degraded voting and
              | MOS function available.
             | 
              | The blocks are tested, or sometimes provided as a type of
              | firmware by the PLC vendor, and then deployed in the
              | overall program with the expectation that the behaviour
              | inside the block is known; but before shipping, the entire
              | program is tested at FAT (factory acceptance testing).
             | 
             | Depending on the type of safety system you are building,
             | and the hazards it protects against, there is potentially
             | the expectation from the standards that every possible
             | combination of inputs is tested, along with all foreseeable
             | (and sometimes unexpected) mis-use of the machine/process.
             | 
             | In reality that's not physically achievable in any real
             | time available for some systems, so you have to make
             | educated guesses where the big/important problems might
             | hide, fuzz etc, but the point is you aren't going to test
             | like that until you think your system development is 100%
             | complete and no more changes are expected.
             | 
             | And if you test and need to make any significant changes
             | due to testing outcomes or emergent requirements, then you
             | are potentially doing every single one of those tests
             | again. At very least a relevant subset plus some randoms.
             | 
              | Background: I am a registered TUV FS Eng and design/deliver
              | safety systems.
             | 
              | It's a whole different game: across the multi-year span of
              | a project you might in some cases literally average less
              | than one line of code a day; 95%+ of the work is not
              | writing code, just preparing to write it, and testing.
        
               | DyslexicAtheist wrote:
                | to reiterate the parent's endorsement of agile and the
                | point that you seem to be taking issue with: nothing in
                | Agile says you can't run final acceptance tests or
                | integration tests before shipping.
               | 
                | we have done this in quite a few companies where things
                | like functional safety or other requirements had to be
                | met. agile sadly gets a bad rep (as does devops) for the
                | way it is rolled out in its grotesque, perverted style in
                | large orgs (wagile etc., which is nothing but a promise
                | to useless middle/line managers in large orgs not to fire
                | them, or "dev(sec)ops" being condensed into a job title -
                | if that is you, shoot your managers!).
               | 
                | if you increase test automation and get better
                | visibility into risks already during the requirements
                | management phase (you're probably doing D/FMEA already?),
                | then nothing stops you from kicking these lazy-ass
                | firmware/hardware engineers who are scared of using
                | version control or jenkins to up their game, and making
                | your org truly "agile". Obviously it's not a technical
                | problem but a people problem (to paraphrase Gerald M.
                | Weinberg), and so every swinging dick will moan about
                | Agile not being right for them or DevOps not solving
                | their issues, while in reality we (as an industry) have
                | been having the same discussion since the advent of
                | eXtreme programming. I'm so tired of it I want to punch
                | every person who invites an Agile coach simply for not
                | having the balls/eggs to say the things everyone already
                | knows. It's infuriating to the point that I want to just
                | succumb to hard drugs.
        
           | flyingfences wrote:
           | Big waterfalls, yes.
        
             | pantulis wrote:
             | And... is your team consistently hitting the estimated
             | product delivery schedules? (honest question)
        
           | postingposts wrote:
           | Waterfall and Agile are tools. If you need to hang a photo, a
           | hammer and a nail. Cut down a tree? Maybe not the hammer and
           | the nail.
        
             | karmakaze wrote:
             | Could you use both to good effect? Waterfall to make a
             | plan, schedule, and budget. Then basically disregard all
             | that and execute using Agile and see how you fare. Of
             | course there would be a reckoning as you would end up
             | building the system they want rather than what was spec'd
             | out.
        
               | gotstad wrote:
               | You could. You might even say it's difficult to make any
               | project estimate without your plan being waterfall.
               | Planning and execution are deliberately two very
                | different things, and convincing the customer - or the
                | steering committee - of that is key to a good product.
        
           | spaetzleesser wrote:
           | You can and will make changes on the way but every change is
           | extremely expensive so it's better to keep changes low.
        
           | nonameiguess wrote:
           | Waterfall is a great methodology where warranted. It ensures
           | you're doing things in a principled, predictable, repeatable
            | manner. We see all this lamenting about reproducibility in
            | science and build systems, and efforts to implement it, yet
            | we seem to embrace chaos in certain types of engineering
            | practices.
           | 
           | We largely used waterfall in GEOINT and I think it was a
           | great match and our processes started to break down and fail
           | when the government started to insist we embrace Agile
           | methodologies to emulate commercial best practices. Software
           | capabilities of ground processing systems are at least
           | somewhat intrinsically coupled to the hardware capabilities
           | of the sensor platforms, and those are known and planned
           | years in advance and effectively immutable once a vehicle is
           | in orbit. The algorithmic capabilities are largely dictated
            | by physics, not by user feedback. When user feedback is
           | critical, i.e. UI components, by all means, be Agile. But if
           | you're developing something like the control software for a
           | thruster system, and the physical capabilities and
           | limitations of the thruster system are known in advance and
           | not subject to user feedback, use waterfall. You have hard
           | requirements, so don't pretend you don't.
        
             | null_shift wrote:
             | Even with "hard" requirements in advance, things are always
             | subject to change, or unforeseen requirements
             | additions/modifications will be needed.
             | 
             | I don't see why you can't maintain the spirit of agile and
             | develop iteratively while increasing fidelity, in order to
              | learn these things as early as possible.
        
         | jmyeet wrote:
         | Well... the 737MAX seems to suggest it doesn't catch _all_ the
         | bugs.
        
           | markdown wrote:
           | AFAIK the bugs were caught, known about, and deliberately
           | ignored. In fact even when the bug caused a fatal error that
           | brought an instance crashing (to the ground, literally!), it
           | was ignored both by Boeing and the US government.
        
       | theptip wrote:
       | One framework that is appealing but requires organizational
       | discipline is Acceptance Testing with Gherkin.
       | 
       | The product owner writes User Stories in a specific human-and-
       | machine readable format (Given/when/then). The engineers build
       | the features specified. Then the test author converts the
       | "gherkin" spec into runnable test cases. Usually you have these
       | "three amigos" meet before the product spec is finalized to agree
       | that the spec is both implementable and testable.
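        | 
        | For a flavor of what that looks like in practice, here's a
        | minimal sketch using pytest-bdd (the feature text, step wording,
        | and the toy cart are all invented; Cucumber, SpecFlow, etc.
        | work the same way):
        | 
        |       # cart.feature (kept next to this test file):
        |       #   Feature: Shopping cart
        |       #     Scenario: Adding an item updates the total
        |       #       Given an empty cart
        |       #       When the user adds a book priced 10
        |       #       Then the cart total is 10
        | 
        |       from pytest_bdd import scenario, given, when, then
        | 
        |       @scenario("cart.feature",
        |                 "Adding an item updates the total")
        |       def test_add_item():
        |           pass
        | 
        |       @given("an empty cart", target_fixture="cart")
        |       def cart():
        |           return []
        | 
        |       @when("the user adds a book priced 10")
        |       def add_book(cart):
        |           cart.append(("book", 10))
        | 
        |       @then("the cart total is 10")
        |       def check_total(cart):
        |           assert sum(price for _, price in cart) == 10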
       | 
       | You can have a dedicated "test automation" role or just have an
        | engineer build the Acceptance Tests (I like to make it someone
        | other than the one building the feature so you get two takes on
        | interpreting the spec). You keep the tester "black-box", without
        | knowledge of the implementation details. At the end you deliver
        | both tests and code, and if the tests pass you can feel pretty
        | confident that the happy path works as intended.
       | 
       | The advantage with this system is product owners can view the
       | library of Gherkin specs to see how the product works as the
       | system evolves. Rather than having to read old spec documents,
       | which could be out of date since they don't actually get
       | validated against the real system.
       | 
       | A good book for this is "Growing Object-Oriented Software, Guided
       | by Tests" [1], which is one of my top recommendations for junior
       | engineers as it also gives a really good example of OOP
       | philosophy.
       | 
       | The main failure mode I have seen here is not getting buy-in from
       | Product, so the specs get written by Engineering and never viewed
       | by anyone else. It takes more effort to get the same quality of
       | testing with Gherkin, and this is only worthwhile if you are
       | reaping the benefit of non-technical legibility.
       | 
       | All that said, if you do manual release testing, a spreadsheet
       | with all the features and how they are supposed to work, plus a
       | link to where they are automatically tested, could be a good
       | first step if you have high quality requirements. It will be
       | expensive to maintain though.
       | 
       | 1: https://smile.amazon.com/Growing-Object-Oriented-Software-
       | Ad...
        
       | clintonb wrote:
        | 1. Start with a product/project brief that explains the who,
        | why, and what of the project at a high level to ensure the
        | business is aligned.
       | 
       | 2. Architecture and design docs explain the "how" to engineering.
       | 
       | 3. The work gets broken down to stories and sub-tasks and added
       | to a Scrum/Kanban board. I like Jira, but have also used Asana
       | and Trello.
       | 
        | Testing is just another sub-task, and part of the general
        | definition of done for a story. For larger projects, a project-
       | specific test suite may be useful. Write failing tests. Once they
       | all pass, you have an indication that the project is nearly done.
       | 
       | You can skip to #3 if everyone is aligned on the goals and how
       | you'll achieve them.
        
       | oddeyed wrote:
       | In a small team, I have found that a simple spreadsheet of tests
       | can go a long way. Give it a fancy name like "Subcomponent X
       | Functional Test Specification" and have one row per requirement.
       | Give them IDs (e.g. FNTEST0001).
       | 
       | What sort of tests you want depends a lot on your system. If
       | you're working on some data processing system where you can
       | easily generate many examples of test input then you'll probably
       | get lots of ROI from setting up lots of regression tests that
       | cover loads of behaviour. If it's a complex system involving
       | hardware or lots of clicking in the UI then it can be very good
       | to invest in that setup but it can be expensive in time and cost.
       | In that case, focus on edge or corner cases.
       | 
        | Then in terms of how you use it, you have a few options
        | depending on the types of test:
       | 
       | - you can run through the tests manually every time you do a
       | release (i.e. manual QA) - just make a copy of the spreadsheet
       | and record the results as you go and BAM you have a test report
       | 
        | - if you have some automated tests like pytest going on, then
        | you could use the mark decorator to tag your tests with the
        | functional test ID(s) they correspond to, and even generate an
        | HTML report at the end with a pass/fail/skip for your
        | requirements
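        | 
        | A minimal sketch of that tagging (the marker name, IDs, and
        | code under test are invented; the HTML report would come from a
        | plugin such as pytest-html):
        | 
        |       # conftest.py: register the custom marker so pytest
        |       # doesn't warn about unknown marks
        |       def pytest_configure(config):
        |           config.addinivalue_line(
        |               "markers",
        |               "fntest(id): link a test to a functional test ID")
        | 
        |       # test_pricing.py
        |       import pytest
        | 
        |       def total(prices, shipping):
        |           return sum(prices) + shipping  # stand-in for real code
        | 
        |       @pytest.mark.fntest("FNTEST0001")
        |       def test_total_includes_shipping():
        |           assert total([10, 20], shipping=5) == 35
        | 
        | Then e.g. `pytest -m fntest --html=report.html` runs just the
        | tagged tests and emits a report you can file alongside the
        | spreadsheet.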
        
         | petepete wrote:
         | This is where Cucumber is great.
         | 
         | I know it doesn't get much love on here, but a feature per
         | requirement is a good level to start at. I'd recommend using
         | `Examples` tables for testing each combination.
         | 
         | Having your features run on every PR is worth its weight in
         | gold, and being able to deal with variations in branches
         | relieves most of the headaches from having your requirements
         | outside of the repo.
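          | 
          | For instance, a `Scenario Outline` with an `Examples` table
          | covers each combination in one feature (wording invented):
          | 
          |       Scenario Outline: Shipping cost by destination
          |         Given a cart weighing <weight> kg
          |         When the user checks out to <country>
          |         Then the shipping cost is <cost>
          | 
          |         Examples:
          |           | weight | country | cost  |
          |           | 1      | UK      | 3.00  |
          |           | 10     | France  | 12.00 |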
        
       | sz4kerto wrote:
       | What we do:
       | 
       | - we track work (doesn't matter where), each story has a list of
       | "acceptance criteria", for example: 'if a user logs in, there's a
       | big red button in the middle of the screen, and if the user
       | clicks on it, then it turns to green'
       | 
       | - there's one pull request per story
       | 
        | - each pull request contains end-to-end (or other, but mostly
        | e2e) tests that prove that all ACs are addressed; for example,
        | the test logs in as a user, finds the button on the screen,
        | clicks it, then checks whether it turned green (see the sketch
        | below)
       | 
       | - even side effects like outgoing emails are verified
       | 
       | - if the reviewers can't find tests that prove that the ACs are
       | met, then the PR is not merged
       | 
       | - practically no manual testing as anything that a manual tester
       | would do is likely covered with automated tests
       | 
       | - no QA team
       | 
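        | For illustration, that button AC might turn into something like
        | this (a Playwright sketch; the URL and selectors are invented):
        | 
        |       from playwright.sync_api import sync_playwright
        | 
        |       def test_big_red_button_turns_green():
        |           with sync_playwright() as p:
        |               page = p.chromium.launch().new_page()
        |               page.goto("https://app.example.com/login")
        |               page.fill("#user", "test-user")
        |               page.fill("#password", "secret")
        |               page.click("#log-in")
        |               button = page.locator("#big-button")
        |               assert "red" in button.get_attribute("class")
        |               button.click()
        |               assert "green" in button.get_attribute("class")
        | 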
       | And we have a system that provides us a full report of all the
       | tests and links between tests and tickets.
       | 
        | We run all the tests for all the pull requests; that's currently
        | something like 5000 end-to-end tests (which exercise the whole
        | system) and many more tests of other types. One test run for one
        | PR requires around 50 hours of CPU time to finish, so we use
        | pretty big servers.
       | 
        | All this might sound a bit tedious, but it enables practically
        | full CI/CD for a medical system. The test suite is the most
        | complete and valid specification of the system.
       | 
       | (we're hiring :) )
        
         | Foobar8568 wrote:
          | A dream setup for any project. It's already hard to make
          | people understand one PR per PBI/US, or that we/they shouldn't
          | start working on a PBI/US without acceptance criteria.
          | 
          | Beyond that, I am unsure about the whole "testing part",
          | especially running all the tests for each PR on typical
          | projects...
        
       | scottyah wrote:
        | We use an issue tracking system like Jira, Trello, Asana, etc.,
        | and each "ticket" is a unique identifier followed by a brief
        | description. You can add all sorts of other labels, descriptions,
        | etc. to better map to the requirements you get. Next, all git
        | branches are named exactly the same way as the corresponding
        | ticket. Unit tests are created under the same branch. After
        | getting PR'd in, the code and unit tests can always be matched up
        | to the ticket and therefore the requirement. For us, this system
        | is good enough to replace the usual plethora of documentation the
        | military requires. It does require strict adherence, which can
        | take extra time sometimes, but all devs on my team prefer it to
        | writing more robust documentation.
       | 
       | Another useful tool to use in conjunction to the above is running
       | code coverage on each branch to ensure you don't have new code
       | coming in that is not covered by unit tests.
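        | 
        | For example, with pytest-cov that check can be a one-liner in
        | CI (the threshold is arbitrary):
        | 
        |       # fail the branch build if coverage drops below 80%
        |       pytest --cov=myproject --cov-fail-under=80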
        
       | softwaredoug wrote:
       | Even in the strictest settings, documentation has a shelf life. I
       | don't trust anything that's not a test.
        
       | lotyrin wrote:
        | When it's technically feasible, I like every repo having
        | alongside it tests for the requirements from an external business
       | user's point of view. If it's an API then the requirements/tests
       | should be specified in terms of API, for instance. If it's a UI
       | then the requirements should be specified in terms of UI. You can
       | either have documentation blocks next to tests that describe
       | things in human terms or use one of the DSLs that make the terms
       | and the code the same thing if you find that ergonomic for your
       | team.
       | 
       | I like issue tracking that is central to code browsing/change
       | request flows (e.g. Github Issues). These issues can then become
       | code change requests to the requirements testing code, and then
       | to the implementation code, then accepted and become part of the
       | project. As products mature, product ownership folks _must_
       | periodically review and prune existing requirements they no
       | longer care about, and devs can then refactor as desired.
       | 
        | I don't like overwrought methodologies built around external
       | issue trackers. I don't like tests that are overly-concerned with
       | implementation detail or don't have any clear connection to a
       | requirement that product ownership actually cares about. "Can we
       | remove this?" "Who knows, here's a test from 2012 that needs
       | that, but no idea who uses it." "How's the sprint board looking?"
       | "Everything is slipping like usual."
        
       | jrowley wrote:
       | At my work we've needed a QMS and requirements traceability. We
       | first implemented it in google docs via AODocs. Now we've moved
       | to Jira + Zephyr for test management + Enzyme. I can't say I
       | recommend it.
        
       | drewcoo wrote:
       | > I would like to track what the requirements/specifications are,
       | and how we'll test to make sure they're met
       | 
       | Why? Why would you like that? Why you?
       | 
       | If it's not happening, the business doesn't care. Your company is
       | clearly not in a tightly regulated industry. What does the
       | business care about? Better to focus on that instead of
       | struggling to become a QA engineer when the company didn't hire
       | you for that.
       | 
       | Generally, if the team wants to start caring about that, agree
       | to:
       | 
       | 1. noting whatever needs to be tested in your tracker
       | 
       | 2. writing tests for those things alongside the code changes
       | 
       | 3. having code reviews include checking that the right tests were
       | added, too
       | 
       | 4. bonus points for making sure code coverage never drops (so no
       | new untested code was introduced)
        
       | mytailorisrich wrote:
       | It's very useful to keep track of changes and to be able to have
       | text to describe and explain, so for me the simplest tool would
       | not be to use a spreadsheet but to create a git repo and to have
       | one file per requirement, which can be grouped into categories
       | through simple folders. You can still have a spreadsheet as top
       | level to summarise as long as you remember to keep it up to date.
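        | 
        | A sketch of what that layout might look like (names invented):
        | 
        |       requirements/
        |         system/
        |           REQ-SYS-001-user-login.md
        |           REQ-SYS-002-password-reset.md
        |         performance/
        |           REQ-PERF-001-checkout-latency.md
        |       summary.csv   # optional top-level overview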
       | 
       | Top-level requirements are system requirements and each of them
       | should be tested through system tests. This usually then drips
       | through the implementation layers from system tests to
       | integration tests, to unit tests.
       | 
       | Regression testing really is just running your test suite every
       | time something changes in order to check that everything still
       | works fine.
        
         | heresie-dabord wrote:
         | Agreed, monstrous and expensive project-management software
         | isn't necessary.
         | 
         | git to manage the graph, grep to search the graph, and run a
         | Python http server in the directory if you want to share.
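          | 
          | For example, from the requirements directory:
          | 
          |       # serve the current directory on port 8000
          |       python3 -m http.server 8000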
        
       | uticus wrote:
       | Since you mention you're a junior dev, wanted to suggest taking
       | the long road and (1) listening to what others say (you're
       | already doing that by asking here, but don't overlook coworkers
       | much closer to you) and (2) start reading on the subject. Might I
       | suggest Eric Evans "Domain-driven design" as a starting point,
       | and don't stop there? Reading is not a quick easy path, but you
       | will benefit from those that have gone before you.
       | 
       | Of course, don't make the mistake I am guilty of sometimes
       | making, and think you now know better than everyone else just
       | because you've read some things others have not. Gain knowledge,
       | but stay focused on loving the people around you. ("Loving"
       | meaning in the Christian sense of respect, not being selfish,
       | etc; sorry if that is obvious)
        
       | rubicon33 wrote:
       | You should probably first assess whether or not your organization
       | is open to that kind of structure. Smaller companies sometimes
        | opt for looser development practices since it's easier to know
        | who did what, and the flexibility of looser systems is nice.
       | 
       | TLDR adding structure isn't always the answer. Your team/org
       | needs to be open to that.
        
       | corpMaverick wrote:
       | Let the product owner (PO) handle them.
       | 
        | The PO has to make the hard decisions about what to work on and
        | when. He/she must understand the product deeply. The PO should
        | also be able to test the system to accept the changes.
        | 
        | Furthermore, you don't really need endless lists of
        | requirements. The most important thing to know is the next
        | thing you have to work on.
        
         | clavalle wrote:
         | This is a LOT to put on a PO. I hope they have help.
        
           | sumedh wrote:
           | This is why you need a QA
        
         | uticus wrote:
         | This actually has a nugget of wisdom. I wish I was more open to
         | soaking up wisdom - and less likely to argue a point - when I
         | was a junior dev. Or still now, really.
         | 
         | Moreover, if your PO can't define the goals, and what needs to
         | be tested to get there, well you have a problem. Assuming the
         | team is committed to some form of Agile and you have such a
         | thing as a PO.
         | 
         | However, I also disagree with the main thrust of this comment.
         | A PO should have responsibility, sure. But if that gets
         | translated into an environment where junior devs on the team
         | are expected to not know requirements, or be able to track
         | them, then you no longer have a team. You have a group with
         | overseers or minions.
         | 
         | There's a gray area between responsibility and democracy. Good
         | luck navigating.
        
           | michaelt wrote:
            | _> Moreover, if your PO can't define the goals, and what
            | needs to be tested to get there, well you have a problem._
           | 
           | In some work environments, there may be unspoken
           | requirements, or requirements that the people who want the
           | work done don't know they have.
           | 
           | For example, in an online shopping business the head of
           | marketing wants to be able to allocate a free gift to every
           | customer's first order. That's a nice simple business
           | requirement, clearly expressed and straight from the user's
           | mouth.
           | 
           | But there are a bunch of other requirements:
           | 
           | * If the gift item is out of stock, it should _not_ appear as
           | a missing item on the shipping manifest
           | 
           | * If every other item is out of stock, we should not send a
           | shipment with only the gift.
           | 
           | * If we miss the gift from their first order, we should
           | include it in their second order.
           | 
           | * The weight of an order _should not_ include the gift when
           | calculating the shipping charge for the customer, but
           | _should_ include it when printing the shipping label.
           | 
           | * If the first order the customer places is for a backordered
           | item, and the second order they place will arrive before
           | their 'first' order, the gift should be removed from the
           | 'first' order and added to the 'second' order, unless the
           | development cost of that feature is greater than $3000 in
           | which case never mind.
           | 
           | * The customer should not be charged for the gift.
           | 
           | * If the gift item is also available for paid purchase,
           | orders with a mix of gift and paid items should behave
           | sensibly with regard to all the features above.
           | 
           | * Everything above should hold true even if the gift scheme
           | is ended between the customer checking out and their order
           | being dispatched.
           | 
           | * The system should be secure, not allowing hackers to get
           | multiple free gifts, or to get arbitrary items for free.
           | 
           | * The software involved in this should not add more than,
           | say, half a second to the checkout process. Ideally a lot
           | less than that.
           | 
           | Who is responsible for turning the head of marketing's broad
           | requirement into that list of many more, much narrower
           | requirements?
           | 
           | Depending on the organisation it could be a business analyst,
           | a product owner, a project manager, an engineer as part of
           | planning the work, an engineer as part of the implementation,
           | or just YOLO into production and wait for the unspoken
           | requirements to appear as bug reports.
        
             | ibejoeb wrote:
             | > there may be unspoken requirements, or requirements that
             | the people who want the work done don't know they have
             | 
             | That is just restating the problem that the "PO can't
             | define the goals."
             | 
             | It's a bigger problem in the industry. Somehow, the Agile
             | marketing campaign succeeded, and now everyone is Agile,
             | regardless of whether the team is following one of the
             | myriad paradigms.
             | 
              | I can rattle off dozens of orgs "doing Scrum", but maybe 1
              | or 2 that actually are. The rest are doing two weeks of
              | work and calling it a sprint, then doing another two weeks
              | of work...and so on. No defined roles. It's just a badge
              | word on the company's culture page.
             | 
             | The companies that are really doing something Agile are the
             | consultancies that are selling an Agile process.
        
         | lovehatesoft wrote:
         | That would be nice, and maybe I should have clarified why I
         | asked the question. I was asked to add a new large feature, and
         | some bugs popped up along the way. I thought better testing
         | could have helped, and then I thought it would possibly help to
         | list the requirements as well so I can determine which tests to
         | write/perform. And really I thought I could have been writing
         | those myself - PO tells me what is needed generally, I try to
         | determine what's important from there.
         | 
         | Or maybe I just need to do better testing myself? There's no
         | code reviews around here, or much of an emphasis on writing
         | issues, or any emphasis on testing that I've noticed. So it's
         | kind of tough figuring out what I can do
        
         | ruh-roh wrote:
         | This is good advice for multiple reasons.
         | 
         | One I haven't seen mentioned yet - When Product is accountable
         | & responsible for testing the outputs, they will understand the
         | effort required and can therefore prioritize investment in
         | testable systems and associated test automation.
         | 
         | When those aspects are punted over to architects/developers/QA,
         | you'll end up in a constant battle between technical testing
         | investments and new features.
        
         | deathanatos wrote:
         | I don't disagree with you. In fact, I think it's just a
         | restatement of the PO's job description.
         | 
          | But POs who are technical enough to understand the system,
          | and to understand what its requirements are, are, empirically,
          | unicorns.
        
       | superjan wrote:
        | We have Word documents for requirements and (manual) test cases,
        | plus a self-written audit tool that checks the links between
        | them and converts them into hyperlinked, searchable HTML.
        | It's part of the daily build. We are mostly happy with it. It is
        | nice to know that we can switch to a better tool at any time
        | (after all, our requirements have an "API"), but we still have
        | not found a better one.
        
       | syngrog66 wrote:
       | "vi reqs.txt" is ideal default baseline
       | 
       | then only bump up to more complexity or superficiality as the
       | benefits exceeds the cost/pain. for example, a spreasheet.
       | perhaps a Google doc
       | 
       | if you're lucky enough to have any reqs/specs which have a
       | natural machine-friendly form, like an assertion that X shall be
       | <= 100ms then go ahead and express that in a structured way then
       | write test code which confirms it, as part of a suite of
       | assertions of all reqs which can be test automated like this
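        | 
        | A minimal sketch of such an executable requirement (the
        | endpoint and budget are invented):
        | 
        |       import time, urllib.request
        | 
        |       LATENCY_BUDGET_S = 0.100  # req: X shall be <= 100ms
        | 
        |       def test_latency_requirement():
        |           start = time.monotonic()
        |           urllib.request.urlopen("http://localhost:8080/x").read()
        |           assert time.monotonic() - start <= LATENCY_BUDGET_S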
        
       | airbreather wrote:
       | Depending on how rigorous you want to be, for over $20k a year
       | minimum you can use Medini, but it's pretty hard core.
       | 
       | https://www.ansys.com/products/safety-analysis/ansys-medini-...
        
       | stefanoco wrote:
       | Zooming into "requirements management" (and out of "developing
       | test cases") there's a couple of Open Source projects that
       | address specifically this important branch of software
       | development. I like both approaches and I think they might be
       | used in different situations. By the way, the creators of these
       | two projects are having useful conversations on aspects of their
       | solutions so you might want to try both and see what's leading
       | from your point of view.
       | 
        | * https://github.com/doorstop-dev/doorstop
        | * https://github.com/strictdoc-project/strictdoc
       | 
       | Of course requirements can be linked to test cases and test
       | execution reports, based on a defined and described process.
       | 
       | How to build test cases is another story.
        
       | spaetzleesser wrote:
       | We use a system called Cockpit. It's terrible to say the least.
       | 
       | I have never seen a requirements tracking software that worked
       | well for large systems with lots of parts. Tracing tests to
       | requirements and monitoring requirements coverage is hard. For
       | projects of the size I work on I think more and more that writing
       | a few scripts that work on some JSON files may be less effort and
       | more useful than customizing commercial systems.
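        | 
        | For example, a sketch of such a script (the file layout and
        | tagging convention are invented): it reads requirement IDs from
        | a JSON file and flags any that no test claims to cover.
        | 
        |       # check_trace.py
        |       # reqs.json looks like: {"REQ-001": "description", ...}
        |       # tests carry "# covers: REQ-001" comments
        |       import json, pathlib, re
        | 
        |       reqs = set(json.load(open("reqs.json")))
        |       covered = set()
        |       for path in pathlib.Path("tests").rglob("*.py"):
        |           covered |= set(re.findall(r"# covers: (REQ-\d+)",
        |                                     path.read_text()))
        | 
        |       for req in sorted(reqs - covered):
        |           print("untested requirement:", req)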
        
       | hibikir wrote:
       | There's no good answer here to a question with so little context.
       | What you should be doing, in a company we don't know anything
       | about, could vary wildly.
       | 
        | I've been writing software professionally for over 20 years, in all
       | kinds of different industries. I've had to handle thousands of
       | lines of specs, with entire teams of manual testers trying to
       | check them. Worked at places where all requirements were
       | executable, leading to automated test suites that were easily 10
       | times bigger than production code. Other places just hope that
       | the existing code was added for a reason, and at best keep old
       | working tickets. And in other places, we've had no tracking
       | whatsoever, and no tests. I can't say that anyone was wrong.
       | 
       | Ultimately all practices are there to make sure that you produce
       | code that fits the purpose. If your code is an API with hundreds
       | of thousands of implementers, which run billions of dollars a
       | month through it, and you have thousands of developers messing
       | with said API, the controls that you'll need to make sure the
       | code fits purpose is going to be completely different than what
       | you are going to need if, say, you are working on an indie video
       | game with 5 people.
       | 
        | Having long-term requirements tracking can be very valuable
        | too! A big part of documentation, executable or not, is that it
        | has to be kept up to date and stay valuable: it's a pretty bad
        | feeling to have to deal with tens of thousands of lines of code
        | that support a feature nobody is actually using, or to read
        | documentation so out of date that you end up with the completely
        | wrong idea and lose more time than if you had spent the same
        | time reading a newspaper. Every control, every
       | process, has its costs along with its advantages, and the right
       | tradeoff for you could have absolutely nothing to do with the
       | right tradeoff somewhere else. I've seen plenty of problems over
       | the years precisely because someone with responsibility changes
       | organizations to a place that is very different, and attempts to
       | follow the procedures that made a lot of sense in the other
       | organization, but are just not well fit for their new
       | destination.
       | 
       | So really, if your new small team is using completely different
       | practices than your previous place, which was Enterprise enough
       | to use any IBM Rational product, I would spend quite a bit of
       | time trying to figure out why your team is doing what they do,
       | make sure that other people agree that the problems that you
       | think you are having are the same other people in the team are
       | seeing, and only then start trying to solve them. Because really,
       | even in a small team, the procedures that might make sense for
       | someone offering a public API, vs someone making a mobile
       | application that is trying to gain traction in a market would be
       | completely different.
        
       | agentultra wrote:
       | Depends on the industry. In most web services, applications, and
       | desktop software shops; you don't. You track them informally
       | through various tests your team may or may not maintain (ugh) and
       | you'll hardly ever encounter any documentation or specification,
       | formal or informal of any kind, ever.
       | 
       | I wish this wasn't the case but it's been the reality in my
       | experience and I've been developing software for 20+ years. I'm
       | the rare developer that will ask questions and _write things
       | down_. And if it seems necessary I will even model it formally
       | and write proofs.
       | 
        | In some industries it is _required_ to some degree. I've worked
        | in regulated industries where it was required to maintain
        | _Standard Operating Procedures_ documents in order to remain
        | compliant with regulators. These documents will often outline
        | _how_ requirements are gathered, how they are documented, and
        | include forms for signing off that the software version released
        | implements them, etc. There are generally pretty stiff penalties
        | for failing to follow procedure (though for some industries I
        | don't think those penalties are high enough to deter businesses
        | from trying to cut corners).
       | 
       | In those companies that had to track requirements we used a git
       | repository to manage the documentation and a documentation system
       | generated using pandoc to do things like generate issue-tracker
       | id's into the documentation consistently, etc.
       | 
       | A few enterprising teams at Microsoft and Amazon are stepping up
        | and building tooling that automates the process of checking a
        | software implementation against a formal specification. For them
       | mistakes that lead to security vulnerabilities or missed service
       | level objectives can spell millions of dollars in losses. As far
       | as I'm aware it's still novel and not a lot of folks are talking
       | about it yet.
       | 
       | I consider myself an advocate for formal methods but I wouldn't
       | say that it's a common practice. The opinions of the wider
       | industry about formal methods are not great (and that might have
       | something to do with the legacy of advocates past over-promising
       | and under-delivering). If anything at least ask questions and
       | write things down. The name of the game is to never be fooled.
       | The challenge is that you're the easiest person to fool. Writing
       | things down, specifications and what not, is one way to be
       | objective with yourself and overcome this challenge.
        
       | jcon321 wrote:
       | GitLab. Just use Issues; you can do everything with the free
       | tier. (It's called the "Issues workflow" - GitLab goes a little
       | overboard, but I'd look at pictures of people's issue lists to
       | get examples.)
       | 
       | My opinion would be to not use all the fancy features that
       | automatically tie issues to merge requests, releases, epics,
       | pipelines etc... it's way too much for a small team that is not
       | doing any type of management.
       | 
       | Just use some basic labels, like "bug" or "feature" and then use
       | labels to denote where they are in the cycle such as "sprinted",
       | "needs testing" etc. Can use the Boards feature if you want
       | something nice to look at. Can even assign weights and estimates.
       | 
       | You can tie all the issues of a current sprint to a milestone,
       | call the milestone a version or w/e and set a date. Now you have
       | history of features/bugs worked on for a version.
       | 
       | In terms of testing, obviously automated tests are best and
       | should just be built into every requirement. Sometimes, though,
       | tests must be done manually, and in that case attach
       | a word doc or use the comments feature on an issue for the "test
       | plan".
        
         | lovehatesoft wrote:
         | If possible, could I get your opinion on a specific example? In
         | my current situation, I was asked to add a feature which
         | required a few (java) classes. So -
         | 
         | * It seems like this would have been a milestone?
         | 
         | * So then maybe a few issues for the different classes or
         | requirements?
         | 
         | * For each issue, after/during development I would note what
         | tests are needed, maybe in the comments section of the issue?
         | Maybe in the description?
         | 
         | * And then automated tests using junit?
        
       | jtwaleson wrote:
       | This is super interesting and incredibly difficult. In some
       | regulated environments, like medical devices, you MUST keep track
       | of requirements in your product's technical documentation. I work
       | on a Software Medical Device product and have seen tons of
       | workflows at similar companies. There are many different
       | approaches to this and none that I have seen work really well. In
       | my view this field is ripe for disruption and would benefit from
       | standardization and better tooling.
       | 
       | Here are some options that I've seen in practice.
       | 
       | A: put everything in your repository in a structured way:
       | 
       | pros:
       | - consistent
       | - actually used in practice by the engineers
       | 
       | cons:
       | - hard to work with for non-developers
       | - too much detail for audits
       | - hard to combine with documents / e-signatures
       | 
       | B: keep separate word documents
       | 
       | pros:
       | - high level, readable documentation overview
       | - works with auditor workflows
       | - PMs can work with these documents as well
       | 
       | cons:
       | - grows to be inconsistent with your actual detailed
       |   requirements
       | - hard to put in a CI/CD pipeline
       | 
       | A whole different story is the level of detail that you want to
       | put in the requirements. Too much detail and developers feel
       | powerless, too little detail and the QA people feel powerless.
        
         | Woberto wrote:
         | For option A, how do you put the requirements in the repo?
         | Another user mentioned the possibility of having a "req" folder
         | at the same level as e.g. "src" and "test". Maybe the file
         | structure would match that of the other directories? And what
         | do you use - excel files, word docs, .md files, something else?
        
       | mtoddsmith wrote:
       | We use JIRA along with the Zephyr test plugin, which allows you
       | to associate one or more test cases (i.e. lists of steps) with
       | your JIRA ticket and tracks progress for each one. Devs create
       | the tickets and our QA creates the test cases. Docs and
       | requirements come from all different departments and in all kinds
       | of different formats so we just include those as links or
       | attachments in the JIRA tickets.
        
       | amarant wrote:
       | I'd go for integration or end-to-end tests, depending on your
       | application. Name each test after a requirement and make sure the
       | test ensures the entirety of that requirement is fulfilled as
       | intended (but avoid testing the implementation).
       | 
       | As an example, you could have a test that calls some public API
       | and checks that you get the expected response. Assuming your
       | requirement cares about the public API, or the functionality it
       | provides.
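       | 
       | A minimal sketch in pytest (the endpoint, requirement id, and
       | payload are all made up):
       | 
       |     import requests
       | 
       |     BASE_URL = "https://api.example.com"  # hypothetical
       | 
       |     def test_req_004_lookup_returns_active_accounts_only():
       |         # REQ-004: account lookup must only return active
       |         # accounts. Check the observable behaviour, not the
       |         # implementation.
       |         resp = requests.get(BASE_URL + "/accounts",
       |                             params={"status": "active"})
       |         assert resp.status_code == 200
       |         assert all(acc["active"] for acc in resp.json())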
       | 
       | I've tried to be as detailed as I can without knowing much about
       | your application: assumptions were made, apply salt as needed.
       | 
       | Personally, I like having a test-suite be the documentation for
       | what requirements exist. Removing or significantly modifying a
       | test should always be a business decision. Your local Jira guru
       | will probably disagree.
        
       | FrenchyJiby wrote:
       | Having had a similar discussion at work recently, I've written
       | in favour of using Gherkin Features to gather high-level
       | requirements (and sometimes a bit of specification), mostly
       | stored in Jira Epics to clarify what's being asked.
       | 
       | See the post at https://jiby.tech/post/gherkin-features-user-
       | requirements/
       | 
       | I made this into a series of posts about Gherkin, where I
       | introduce people to Cucumber tooling and BDD ideals, and show a
       | low-tech alternative to Cucumber using test comments.
       | 
       | As for actually doing the tracking of feature->test, aside from
       | pure Cucumber tooling, I recommend people have a look at
       | sphinxcontrib-needs:
       | 
       | https://sphinxcontrib-needs.readthedocs.io/en/latest/index.h...
       | 
       | Define in your docs a "requirement" block with freeform text
       | (though I put Gherkin in it), then define further
       | "specifications", "tests" etc. with links to each other, and
       | the tool builds the traceability graph!
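       | 
       | Roughly like this in the Sphinx sources (the ids and text are
       | made up; check the sphinxcontrib-needs docs for the exact
       | options):
       | 
       |     .. req:: User can reset their password
       |        :id: REQ_001
       |        :tags: auth
       | 
       |        Scenario: Reset via emailed link
       |          Given a registered user
       |          When they request a password reset
       |          Then they receive a single-use reset link
       | 
       |     .. test:: Password reset end-to-end test
       |        :id: TEST_001
       |        :links: REQ_001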
       | 
       | Combined with the very alpha sphinx-collections, it allows
       | Jinja templates driven by arbitrary data:
       | 
       | write Gherkin in a features/ folder, and have the template
       | generate, for each file under that folder, a sphinxcontrib-
       | needs entry quoting the Gherkin source!
       | 
       | https://sphinx-collections.readthedocs.io/en/latest/
        
         | drewcoo wrote:
         | I have never met a dev who ever enjoyed Cucumber/Gherkin stuff.
         | There's a lot of decorative overhead to make code look friendly
         | to non-coders - non-coders who, in the end, never look at the
         | "pretty" code.
         | 
         | Spec-like BDD tests (RSpec, Jest, Spock, et al. - most
         | languages except Python seem to have a good framework) have all
         | the advantages of forcing behavioral thinking without having to
         | maintain a layer of regex redirects.
        
       | smoe wrote:
       | For smaller teams/projects I like to track as many of the
       | requirements as possible in code, because of how hard it is to
       | keep anything written down in natural language up to date and
       | to maintain a useful history of it.
       | 
       | I really like end-to-end tests for this, because it tests the
       | system from a user perspective, which is how many requirements
       | are actually coming in, not how they are implemented internally.
       | I also like to write tests for things that can't actually break
       | indirectly. It makes it so that someone who changes e.g. some
       | function, and thus breaks the test, realizes that this is an
       | explicit prior specification they are about to invalidate and
       | might want to double-check with someone.
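       | 
       | A tiny sketch of what I mean (the requirement id and constant
       | are invented):
       | 
       |     SESSION_TIMEOUT_MINUTES = 30  # stand-in for real config
       | 
       |     def test_req_017_sessions_expire_after_30_minutes():
       |         # REQ-017: idle sessions must expire after 30 minutes.
       |         # Nothing breaks on its own if this changes, but
       |         # changing it knowingly invalidates a specification.
       |         assert SESSION_TIMEOUT_MINUTES == 30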
        
       | alexashka wrote:
       | As a junior dev, this isn't your job.
       | 
       | Your job is to do what is being asked of you and not screw it up
       | too much.
       | 
       | If they wanted to track requirements, they'd already track them.
       | 
       | People have _very_ fragile egos - if you come in as a junior dev
       | and start suggesting shit - they will not like that.
       | 
       | If you come in as a senior dev and start suggesting shit, they'll
       | not like it, unless your suggestion is 'how about I do your work
       | for you on top of my work, while you get most or all of the
       | credit'.
       | 
       | That is the only suggestion most other people are interested in.
       | 
       | Source: been working for a while.
        
         | lovehatesoft wrote:
         | Well, the reason I asked this question is that I did screw
         | up a bit, and I think it could have been caught had I done
         | sufficient testing - but I didn't because it doesn't seem to be
         | part of the culture here, and neither are peer reviews.
         | 
         | So I _was_ trying to do only what was asked of me, just writing
         | the code, but I guess I thought what I did at my previous job
         | could have helped - which is keeping track of what was needed
         | and then how I planned to accomplish and test it.
         | 
         | But yeah, you've got me thinking about how or whether I should
         | broach this topic; I think my lead is great, seems open to
         | ideas, wants things to work well, so maybe I'll just ask what
         | they think about how to avoid these kinds of mistakes.
        
           | philk10 wrote:
           | "give it a quick test and ship it out, our customers are
           | better at finding bugs than we are" - lecture from the CEO of
           | a company I used to work for who didn't want me to waste any
           | time testing and didn't want to pay me to do testing. I left
           | soon after that to find a place with a different culture,
           | trying to change it was way too hard
        
           | DoctorDabadedoo wrote:
           | As a junior dev you shouldn't be able to screw up big time;
           | if you do, that's on the team/company, not on you. As a
           | senior it is trickier, but usually no one should be able to
           | screw up monumentally; if they do, it's a lack of internal
           | process, not on the individual (exceptions being malicious
           | intent).
           | 
           | Changing internal processes without being a decision maker
           | inside the company (e.g. an influential manager/lead, the
           | owner, a VP, etc.) is hard, even if there are clear
           | benefits. If there are things that make no sense, there is
           | no horizon for the improvements to come, and you are not
           | learning from your seniors, consider whether it makes sense
           | to move on. Trying to change internal processes at reluctant
           | employers is a common cause of immense frustration (and
           | burnout); don't let yourself get caught up in that.
        
           | alexashka wrote:
           | > so maybe I'll just ask what they think about how to avoid
           | these kinds of mistakes
           | 
           | This, 100%.
           | 
           | Don't tell anyone at work you asked on HackerNews and got
           | feedback - they don't want to debate the merits of various
           | approaches. They want it done their way, because it is
           | obviously the right way, or else they would've modified it,
           | right? :)
           | 
           | Most jobs are repetitive, so you eliminate mistakes just by
           | doing it for a while. Hence nothing extra needs to be done,
           | which is exactly how most people like it and why your company
           | has no peer review or much of anything - because it just
           | works, with the least amount of effort, somehow, someway :)
        
         | uticus wrote:
         | Sensing some sarcasm, but I agree there is some wisdom in
         | "keeping your place." Not very popular to say that these days,
         | but boy I wish I took that advice more as a junior dev. Still
         | need to do that more.
         | 
         |  _However_ there is a spectrum, and if it turns from  "listen
         | rather than speak" in a respectful, learning sort of mentality
         | to "shut up and do as I say, no questions", then requirements
         | tools are not going to address the real problems.
         | 
         | In my experience, having requirements and processes and tools
         | being used in a mindful way can be wonderful, but all that
         | pales in comparison with the effectiveness of a well-working
         | team. But that's the human factor and the difficult part.
         | 
         | Source: also been working a while. Seen good teams that were
         | very democratic and also good teams that were very top-heavy
         | militaristic (happy people all around in both scenarios).
        
       | CodeWriter23 wrote:
       | My first suggestion: wait it out for an initial period and see
       | how much the "requirements" align with the results. Based on my
       | experience, about 3/4 of the time those stating the requirements
       | have no idea what they actually want. I can usually increase the
       | odds of the result matching the actual requirements by
       | interviewing users / requirement generators.
       | 
       | Anyway, no point in tracking low-quality requirements that end up
       | being redefined as you build the airplane in flight.
        
         | rsecora wrote:
         | Underrated parent comment.
         | 
         | Requirements are living entities, subject to Darwinian rules:
         | only the fittest survive.
        
       | 5440 wrote:
       | I review software for at least 3-5 companies per week as part of
       | FDA submission packages. The FDA requires traceability between
       | reqs and their validation. While many small companies just use
       | Excel spreadsheets for traceability, the majority of large
       | companies seem to use JIRA tickets alongside Confluence. While
       | those aren't the only methods, they seem to cover
       | 90% of the packages I review.
        
         | robertlagrant wrote:
         | Health tech - we also use this combo. The Jira test management
         | plugin XRay is pretty good if you need more traceability.
        
           | scruple wrote:
           | Exactly the same process for us, also in healthcare and
           | medical devices.
        
           | rubidium wrote:
           | Xray and R4J plugins make it pretty nice in JIRA... as far as
           | traceability goes it's MUCH more user friendly than DOORS.
        
         | gourneau wrote:
         | We have been working on software for FDA submissions as well. We
         | use Jama https://www.jamasoftware.com/ for requirements
         | management and traceability to test cases.
        
           | sam_bristow wrote:
           | I have also used Jama in a couple of companies. One for
           | medical devices and one doing avionics. My experience is that
           | it's quite similar to Jira in that if it's set up well it can
           | work really well. If it's set up poorly it is a massive pain.
        
         | jhirshman wrote:
         | hi, we're trying to build a validated software environment for
         | an ELN tool. I would be interested in learning more about your
         | experience with this software review if you could spare a few
         | minutes -- jason@uncountable.com
        
         | spaetzleesser wrote:
         | I would love to see how other companies do it. I understand the
         | need for traceability but the implementation in my company is
         | just terrible. We have super expensive systems that are very
         | tedious to use. The processes are slow and clunky. There must
         | be a better way.
        
       | idealmedtech wrote:
       | We're an FDA regulated medical device startup, with a pretty low
       | budget for the moment. Our current setup is two-pronged, in-
       | house, and automated.
       | 
       | The first piece is the specification documents, which are simple
       | word docs with a predictable format. These cover how the software
       | SHOULD be implemented. From these documents, we automatically
       | generate the mission-critical code, which ensures it matches what
       | we say it does in the document. The generator is very picky about
       | the format, so you know right away if you've made a mistake in
       | the spec document. These documents are checked into a repo, so we
       | can tag version releases and get (mostly) reproducible builds.
       | 
       | The second piece is the verification test spreadsheet. We start
       | this by stating all assumptions we make about how the code should
       | work, and invariants that must hold. These then are translated
       | into high level requirements. Requirements are checked using
       | functional tests, which consist of one or many verification
       | tests.
       | 
       | Each functional test defines a sequence of verification tests.
       | Each verification test is a single row in a spreadsheet which
       | contains all the inputs for the test, and the expected outputs.
       | The spreadsheet is then parsed and used to generate what
       | essentially amounts to serialized objects, which the actual test
       | code will use to perform and check the test. Functional test code
       | is handwritten, but is expected to handle many tests of different
       | parameters from the spreadsheet. In this way, we write N test
       | harnesses, but get ~N*M total tests, M being the average number of
       | verification tests per functional test.
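       | 
       | The core pattern is not much more than this (column names and
       | the harness stub are simplified stand-ins for our real setup):
       | 
       |     import csv
       |     import pytest
       | 
       |     def load_verification_tests(path, functional_test):
       |         # One spreadsheet row per verification test.
       |         with open(path, newline="") as f:
       |             return [row for row in csv.DictReader(f)
       |                     if row["functional_test"] == functional_test]
       | 
       |     def run_device_step(raw_input):
       |         # Stand-in for the real system under test.
       |         return raw_input.upper()
       | 
       |     @pytest.mark.parametrize(
       |         "case",
       |         load_verification_tests("verification.csv", "FT-007"),
       |     )
       |     def test_ft_007(case):
       |         # Handwritten harness; parameters come from the row.
       |         result = run_device_step(case["input"])
       |         assert result == case["expected_output"]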
       | 
       | All test outputs are logged, including result, inputs, expected
       | outputs, actual outputs, etc. These form just a part of future
       | submission packages, along with traceability reports we can also
       | generate from the spreadsheet.
       | 
       | All of this is handled with just one Google Doc spreadsheet and a
       | few hundred lines of Python, and saves us oodles while catching
       | tons of bugs. We've gotten to the point where any changes in the
       | spec documents immediately trigger test failures, so we know
       | that what we ship is what we actually designed. Additionally, all
       | the reports generated by the tests are great V&V documentation
       | for regulatory submissions.
       | 
       | In the future, the plan is to move from word docs + spreadsheets
       | to a more complete QMS (JAMA + Jira come to mind), but at the
       | stage we are at, this setup works very well for not that much
       | cost.
        
       | stblack wrote:
       | How is Rational (IBM) Requisite Pro these days?
       | 
       | Used that 15-20 years ago and loved it. Any present-day insight
       | on this?
        
       | vaseko wrote:
       | disclosure: I'm involved in the product mentioned -
       | https://reqview.com
       | 
       | Based on our experience with some heavyweight requirements
       | management tools, we tried to develop quite the opposite: a
       | simple requirements management tool. It is not open source, but
       | at least it has an open JSON format - good for git/svn, plus
       | integration with
       | Jira, ReqIF export/import, quick definition of requirements,
       | attributes, links and various views. See https://reqview.com
        
       | boopboopbadoop wrote:
       | Write stories/tasks in such a way that each acceptance
       | criterion is testable, then have a matching acceptance test for
       | each criterion. Using something like Cucumber helps match the
       | test to the criterion, since you can describe steps in a
       | readable format.
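       | 
       | For example, a made-up criterion like "a locked account cannot
       | log in" maps one-to-one to a scenario:
       | 
       |     Feature: Account login
       | 
       |       Scenario: A locked account cannot log in
       |         Given an account that has been locked
       |         When the owner tries to log in
       |         Then the login is rejected with a "locked" message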
        
       | tmaly wrote:
       | I write Gherkin use cases. It works well as it is plain
       | English. This makes it easy to have in a wiki while also being
       | part of a repo.
        
       | Cerium wrote:
       | We used to use MKS and switched to Siemens Polarion a few years
       | ago. I like Polarion. It has a very slick document editor with a
       | decent process for working on links between risks,
       | specifications, and tests. Bonus points for its ability to
       | refresh your login and not lose data if you forget to save and
       | leave a tab for a long time.
       | 
       | For a small team you can probably build a workable process in
       | Microsoft Access. I use Access to track my own requirements
       | during the drafting stage.
        
       | zild3d wrote:
       | I was at Lockheed Martin for a few years where Rational DOORS was
       | used. Now at a smaller startup (quite happy to never touch DOORS
       | again)
       | 
       | I think the common answer is you don't use a requirements
       | management tool, unless it's a massive system, with System
       | Engineers whose whole job is to manage requirements.
       | 
       | Some combination of tech specs and tests is the closest you'll
       | get. Going back to review the original tech spec (design doc,
       | etc) of a feature is a good way to understand some of the
       | requirements, but depending on the culture it may be out of date.
       | 
       | Good tests are a bit closer to living requirements. They can
       | serve to document the expected behavior, and check the system for
       | that behavior.
        
       ___________________________________________________________________
       (page generated 2022-04-19 23:01 UTC)