[HN Gopher] When I do TDD and when I don't
       ___________________________________________________________________
        
       When I do TDD and when I don't
        
       Author : adrianomartins
       Score  : 62 points
        Date   : 2022-02-16 10:20 UTC (1 day ago)
        
 (HTM) web link (www.codewithjason.com)
 (TXT) w3m dump (www.codewithjason.com)
        
       | pictur wrote:
        | TDD is something I have never tried before. But since I like to
        | prepare a certain checklist first and develop against it, I
        | guess I can.
        
       | bobobob420 wrote:
        | Just write the code and then write the test. Or just write the
        | test and then write the code. Or think about the test when
        | writing the code. Or think about the code when writing the test.
        | It doesn't really matter, right? The thought process is going to
        | occur anyway. Focusing on reusable patterns, consistency, and
        | creating reliable and easy-to-read code is 10x more important
        | than the TDD drivel that is out there. Honestly, the only reason
        | we discuss this crap is that there are people who make a living
        | running around the country giving talks and writing books on all
        | things developer (nothing wrong with that).
        
       | [deleted]
        
       | strife25 wrote:
       | I've had the most luck w/ TDD for designing APIs.
       | 
       | When I write the tests first, it gives me a feel for how
       | consumers of the API will use it. It's a great method to
       | understand the UX of the re-usable code.
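        | 
        | For example (a hypothetical Python sketch, with made-up names):
        | the test is the API's first consumer, so the constructor and
        | method signatures fall out of how I want calling code to read.
        | 
        |     import time
        |     from collections import defaultdict, deque
        | 
        |     # Written first: this is the API I want to exist.
        |     def test_rate_limiter_feels_usable():
        |         limiter = RateLimiter(max_calls=2, per_seconds=60)
        |         assert limiter.allow("alice") is True
        |         assert limiter.allow("alice") is True
        |         assert limiter.allow("alice") is False  # third call in window
        | 
        |     # Written second, to make the test pass.
        |     class RateLimiter:
        |         def __init__(self, max_calls, per_seconds):
        |             self.max_calls, self.per_seconds = max_calls, per_seconds
        |             self.calls = defaultdict(deque)
        | 
        |         def allow(self, key):
        |             now = time.monotonic()
        |             window = self.calls[key]
        |             while window and now - window[0] > self.per_seconds:
        |                 window.popleft()
        |             if len(window) >= self.max_calls:
        |                 return False
        |             window.append(now)
        |             return True
        | 
        |     test_rate_limiter_feels_usable()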
        
         | DerArzt wrote:
         | I'm re-reading PP[1] right now and their Tip #67 feels relevant
         | here:
         | 
         | > A test is the first user of your code
         | 
         | [1] The Pragmatic Programmer 20th Anniversary Edition (Thomas,
         | Hunt)
        
         | strife25 wrote:
         | When it comes to how much testing should be done, I think it's
         | contextual on the project.
         | 
         | I expand on this specific point in a blog post:
         | https://www.buildthestage.com/just-enough-automated-tests/
        
       | sdoering wrote:
        | I am not a programmer, and I am by far not good enough in any
        | language to spontaneously think architecturally enough to create
        | unit tests and do TDD.
       | 
       | I always feel like I should be doing more tests. That this is the
       | correct way to do things.
       | 
        | Being a data/web analyst, I have to say that I once created
        | extensive tests for a specific piece of custom JavaScript logic
        | set up by a former agency of my client in the tag management
        | solution they used.
       | 
       | I even had to mock basic functionality from the tag manager
       | (Adobe Launch) to make it work locally.
       | 
        | When I took over, it was a mess of code that mapped URL
        | parameters to their specific marketing channel logic (you could
        | have done this purely in Adobe Analytics, though - but I never
        | found an explanation for the unnecessary complexity of the
        | implemented solution).
       | 
        | In the end I had created around 60 test cases to ensure this
        | fragile bit was covered in a way that enabled refactoring. It
        | worked.
       | 
        | It paid off especially when, around one year later, a colleague
        | of mine who had taken over the client needed to change stuff in
        | there, had it break, and asked me for help. I thought his
        | solution should work, but the tests told a different story.
        | Within half an hour we had it nailed down, fixed and running
        | flawlessly.
       | 
       | If I ever have such a complex piece of logic anywhere I will
       | surely learn how to write tests in the respective language of
        | choice. Until then I will happily fat-finger stuff on my own
       | amateur projects, though.
        
       | orobinson wrote:
       | I find TDD is great when you have an interface to some code
       | defined and are changing the behaviour of what happens when
       | interacting with that interface. You can then add some tests to
       | the existing test suite for the interface, watch them fail, then
       | get to work implementing and be content that you're done when all
       | the tests are passing.
       | 
       | However, if you're still developing the interface definition
       | itself, TDD just wastes time when it transpires the interface you
       | envisaged and wrote tests for doesn't quite match up with the
       | implementation as you write it.
        
         | efsavage wrote:
         | > However, if you're still developing the interface definition
         | itself, TDD just wastes time when it transpires the interface
         | you envisaged and wrote tests for doesn't quite match up with
         | the implementation as you write it.
         | 
          | This sounds like TDD working as intended in this case. If the
          | tests break because of the implementation of the interface,
          | either the
         | tests were wrong (likely because the interface was poorly or
         | incompletely described) or the requirements changed (and broken
         | tests means you had decent coverage). Catching this with a
         | simple test run during initial development is far cheaper than
         | debugging against someone else's expectations in the future.
        
         | Jenk wrote:
          | That misses the beat a bit... you are supposed to let the
          | tests drive the design out, not do a big design up front and
          | then fill in the tests.
        
           | agonz253 wrote:
           | John Ousterhout has a different take on this in "A Philosophy
           | of Software Design":
           | 
           |  _The problem with test-driven development is that it focuses
           | attention on getting specific features working, rather than
           | finding the best design. This is tactical [as opposed to
           | strategic] programming pure and simple, with all of its
           | disadvantages. Test-driven development is too incremental: at
           | any point in time, it's tempting to just hack in the next
           | feature to make the next test pass. There's no obvious time
           | to design, so it's easy to end up with a mess.
           | 
           | One place where it makes sense to write the tests first is
           | when fixing bugs. Before fixing a bug, write a unit test that
           | fails because of the bug. Then fix the bug and make sure that
           | the unit test now passes. This is the best way to make sure
           | you really have fixed the bug. If you fix the bug before
           | writing the test, it's possible that the new unit test
           | doesn't actually trigger the bug, in which case it won't tell
           | you whether you really fixed the problem._
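            | 
            | A minimal Python sketch of that bug-first workflow
            | (hypothetical names, nothing from the book):
            | 
            |     # Bug report: version "2.10" is treated as older than "2.9".
            |     # Step 1: a unit test that fails because of the bug.
            |     def test_two_digit_minor_version_compares_correctly():
            |         assert parse_version("2.10") > parse_version("2.9")
            | 
            |     # Step 2: the fix -- compare numerically, not as strings.
            |     def parse_version(s):
            |         return tuple(int(part) for part in s.split("."))
            | 
            |     test_two_digit_minor_version_compares_correctly()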
        
             | BurningFrog wrote:
              | > _There's no obvious time to design_
             | 
             | Design time is the refactoring after your tests pass.
             | 
             | That's when you can move your code around with confidence,
             | since you can lean on your tests.
        
               | thibauts wrote:
               | You can move the code that hides behind the interfaces
               | that you test, but you can't redesign these interfaces
               | which is the essence of design.
        
               | agonz253 wrote:
               | Those tests themselves may need to change if significant
               | refactoring is involved. Doing so after the refactoring
               | seems to defeat the purpose of using TDD in the first
               | place?
        
               | nybble41 wrote:
               | Refactoring should not break your tests. You wrote them
               | without knowing anything about the implementation, so how
               | can they break when you change _only_ the implementation
               | and not the user interface, results, or side effects? (If
                | you change any of those you've moved beyond mere
               | refactoring and need to start the TDD process over with
               | new tests.)
        
               | agonz253 wrote:
               | Why the assumption that _only_ the implementation needs
               | to change, especially when significant refactoring is
               | involved? The interfaces themselves may need to change
               | when striving for a better system design. This is also
               | part of refactoring.
        
               | nybble41 wrote:
               | If you are changing interfaces then you are changing
               | requirements (which pertain to those interfaces), which
               | means you must change your tests.
               | 
                | What you are describing is not _refactoring_: "In
               | computer programming and software design, code
               | refactoring is the process of restructuring existing
               | computer code--changing the factoring--without changing
               | its external behavior."[0] The interface is an integral
               | part of external behavior.
               | 
               | With that said, what is "internal" vs. "external"
               | behavior is... fluid. A large-scale refactoring might
               | involve changes to internal components with their own
               | interfaces and tests. The tests for the component _being_
               | refactored should not be affected, but in the course of
               | refactoring the larger component you might make more
               | extensive changes--not just refactoring--to the smaller
               | pieces making up its internal architecture. For that you
               | would need to design new interfaces  & requirements for
               | the internal components, write new tests, and then
               | iterate on the implementation until the tests pass, just
               | as with any other TDD process. In the meantime you have
               | the unmodified tests for the larger component to verify
               | that your new internal architecture still satisfies those
               | higher-level requirements.
               | 
               | [0] https://en.wikipedia.org/wiki/Code_refactoring
        
               | pydry wrote:
                | This works provided you didn't encode your design
                | mistakes _in_ the tests themselves. Then you've just
                | doubled the cost of fixing that mistake.
               | 
               | It's hard to overstate how common this is.
        
               | Jtsummers wrote:
                | > This works provided you didn't encode your design
                | mistakes in the tests themselves. Then you've just
                | doubled the cost of fixing that mistake.
               | 
               | If your design mistakes are in your code and your tests,
               | they'll also likely be in your mental model and your
                | documentation (if it exists). At which point you do
                | what? Throw it all away and start over? In extreme
                | cases, yes, but in others you do partial rewrites and
                | refactors and address the issues.
               | 
               | But not having tests because they may encode your design
               | mistakes is as foolish as not having documentation or
               | comments because they may also encode your design
               | mistakes. In a few years, you'll have a blob of code
               | that, in the best case, is perfectly readable and
               | comprehensible. But, in reality, is likely to fail at
               | communicating the overall design intent and requirements.
               | 
               | And so what if it doubles the cost of fixing the mistake?
               | You made a mistake and it had to be fixed anyways. In the
               | end, I've never found tests to cost more than they saved.
               | A lack of tests has always led to higher development
               | costs (primarily as measured in time to delivery).
               | Regressions are ludicrously common without tests, which
               | eats away at your time (and therefore adds to your costs)
               | very quickly.
        
               | pydry wrote:
               | >If your design mistakes are in your code and your tests,
               | they'll also likely be in your mental model and your
               | documentation (if it exists). At which point you what?
               | Throw it all away and start over?
               | 
                | Kinda, yeah. Or huge chunks of it. That's what the OP
                | said. I've done a lot of that.
               | 
               | Was that a rhetorical question?
               | 
               | >And so what if it doubles the cost of fixing the
               | mistake? You made a mistake and it had to be fixed
               | anyways.
               | 
                | Well, then assuming you need 1 month to do a product
                | iteration and 1 to do tests and docs on it, and you need
                | 5 iterations before landing on the "right" requirements
                | to meet product-market fit, that's 10 months to get to
                | fully tested code doing the right thing with TDD and 6
                | without (5 untested iterations plus 1 month to retrofit
                | tests and docs at the end).
               | 
               | What if your runway is 8?
               | 
               | >In a few years, you'll have a blob of code that, in the
               | best case, is perfectly readable and comprehensible. But,
               | in reality, is likely to fail at communicating the
               | overall design intent and requirements.
               | 
               | I've been on both sides of this problem and I honestly
               | think that this is _much_ less dangerous than not
               | iterating on requirements fast enough.
               | 
               | I've bailed a company out of a massive technical debt
               | hangover before. It's horrendous but it's not usually
               | fatal. But solving the wrong problem? Not finding the
               | right problem soon enough? That's usually fatal.
        
               | MrMan wrote:
                | Yes, design mistakes cost much more to a project than a
                | broken implementation, in my experience. Broken code is
                | harmless compared to a codebase that has to be rewritten
                | after the true requirements finally reveal themselves.
        
             | nightski wrote:
             | Personally I find this a curious observation. I honestly
             | rarely do TDD myself, but find it very similar to
              | functional programming. Learning Haskell, the big thing I
              | picked up was top-down vs. bottom-up design. What you
              | describe is more akin to bottom-up design, but I find TDD,
              | and even more so functional programming, promotes top-down
              | design, which is the "strategic" approach.
             | 
             | But it just goes to show any technique (including TDD) is
             | not going to force you into good habits. This needs to be
             | learned with experience.
        
           | eric-hu wrote:
           | My experience agrees with the GP, orobinson. I used to work
           | at a company that wanted us to TDD everything. Everyone
           | working there knew this from the time of interview, so it's
           | not like any of us were unwilling to give it a shot. When it
           | came to adding a feature to a Rails app, the interface is
           | pretty well defined, it's going to serve up HTML and js, it's
           | got to fit on the existing structure of the page or conform
           | to a design.
           | 
           | Adjacent parts of our company were working on a Heroku clone
           | PaaS, and I got some time on a team building an auto scaler
           | service to spin up and down other services. We didn't know
           | how our service would kill and start other services. Tests
           | were changing as much as production code while we worked on
           | that. A lot of our work was spiking to see how our service
           | fit into the ecosystem of other services.
        
             | Jenk wrote:
             | > I used to work at a company that wanted us to TDD
             | everything.
             | 
              | Dogma is the bane of everyone :) I will never advocate
             | "100% TDD." It is a tool like any other that has its place
             | and, more importantly, doesn't fit everywhere. Though I
             | will freely admit I do encourage people to use it more than
             | what I have seen to be the typical amount.
             | 
             | > When it came to adding a feature to a Rails app, the
             | interface is pretty well defined, it's going to serve up
             | HTML and js, it's got to fit on the existing structure of
             | the page or conform to a design.
             | 
             | It's very unlikely I'd use TDD on that.
             | Boilerplate/plumbing is something I rarely use/advocate TDD
             | for - it's the "clever" stuff you want to cover. The stuff
             | that is following rules outside of the code itself.
             | 
             | > Tests were changing as much as production code while we
             | worked on that.
             | 
             | Absolutely nothing wrong with that, imo.
        
               | thibauts wrote:
               | Exactly this. Encoding business requirements or expected
               | output of a complex algorithm by writing tests before
               | code can make some sense. Writing tests for something
               | that has no defined shape yet doesn't.
        
           | tarkin2 wrote:
           | Changing and newly-discovered requirements, rather than
           | tests, drive your code.
        
             | Jenk wrote:
             | Disagree. You are developing to a set of requirements at
              | any one time. What emerges from (A)TDD is the implementation
             | of those requirements and the design therein.
        
               | tarkin2 wrote:
               | When those requirements change, and they can very, very
               | quickly, you need to rewrite your tests--fine if you
               | don't mind spending that time.
        
               | BurningFrog wrote:
               | I don't understand the alternative, unless it is to not
               | have tests?
        
               | pydry wrote:
               | The dirty secret of our industry is that a lot of the
               | time, we don't.
               | 
                | It's frequently scorned but I'm not so sure it's always
               | the worst idea in the world.
               | 
                | I've built shoddy, hacked-together systems that made
                | customers happy, and wasted months building heavily
                | tested, well-structured code doing something nobody
                | wants.
               | 
               | Well engineered tests take a lot of time to build -
               | sometimes 2x the code itself. If you've built the wrong
                | thing you've paid 3x the cost to figure it out before
               | going back to the drawing board.
               | 
               | I'm a big believer in retrofitting tests once code has
               | proven itself useful.
        
               | Jenk wrote:
               | Code changes when requirements change. I don't think
               | anyone should be surprised by that.
        
               | AnimalMuppet wrote:
               | Hopefully you only need to change _some of_ your tests.
               | 
               | When your requirements change, and you change your code,
               | you have two questions. First, did my code change do what
               | is needed for the requirements change? You modify tests
               | (or write new ones) to answer that.
               | 
               | Second, did I break anything in the process? The
               | remaining tests answer _that_.
        
               | monocasa wrote:
               | I've found the flow true for some environments with
               | extremely well defined structures for doing any work
               | (classic backend API servers being one example), but it's
               | definitely not true for all envs. A lot of envs have you
               | spiking so much on just how to even approach the feature
               | that writing tests up front is wasteful overhead.
        
               | cgrealy wrote:
               | If you are spiking, then you should be throwing away that
               | code.
               | 
               | Don't productionise prototypes.
        
               | monocasa wrote:
               | Don't just throw untested prototypes in production, but
               | you don't gain anything in a lot of envs by throwing away
               | a prototype completely. You're likely to introduce
               | additional issues by rewriting it for no reason.
        
               | AnimalMuppet wrote:
               | Well, if a spike is going to become production, when it
               | does, _then_ write the tests (along with all the other
                | cleanup that needs to happen if you're going to turn
               | prototype code into production).
        
               | monocasa wrote:
               | Yes, but that model is distinctly not TDD. You came to a
               | design, then backfilled tests.
        
               | AnimalMuppet wrote:
               | Only if you keep the design of the spike when you convert
               | it to production.
               | 
               | But that should be part of the "production-izing"
               | process, right? You convert it to a solid design instead
               | of a quick-and-dirty hack?
        
               | monocasa wrote:
               | Most of the time with this model, it's not a quick and
               | dirty hack that is the result of spiking, but the design
               | fleshed out with maybe stubs for a few of the low risk
               | (to the overall design) error pathways.
               | 
               | The whole point was getting to a design that fulfills the
               | feature requirements but for which the contract
               | boundaries weren't known going in.
        
             | AnimalMuppet wrote:
             | But, given the changing and newly-discovered requirements,
             | letting tests drive the design of the implementing code can
             | still be surprisingly effective.
             | 
             | Why? Because if I find it hard to write the test, it's
             | telling me that I'm likely to find it hard to write non-
             | test code that uses the class/module/subsystem/whatever. It
              | forces you to _use_ the interface to your code. If it's
             | hard to use, that's telling you to consider changing the
             | design.
        
             | squeaky-clean wrote:
             | Requirements drive your tests, which drive your code.
        
               | sidlls wrote:
               | One doesn't design around tests, unless the test code is
               | the design spec. In which case something has gone
               | horribly wrong in the requirements and analysis for the
               | project.
        
               | cgrealy wrote:
               | > One doesn't design around tests
               | 
               | No design gets to the coding stage until someone has
               | answered "how will this be tested?"
               | 
               | So, yeah, you should absolutely be designing around
               | tests.
        
               | drran wrote:
                | TDD is just about replacing manual testing during
                | development with automated testing, to improve the ROI
                | of automated unit test cases.
                | 
                | > Requirements drive your tests, which drive your code.
                | 
                | That's verification of the implementation, not unit
                | testing, so it's VDD, not TDD.
        
               | Jenk wrote:
               | Test _Driven_ Design.
               | 
               | It's in the name.
               | 
               | Sure, just writing tests is entirely about automation,
               | but TDD is more than just writing tests.
        
       | somewhereoutth wrote:
       | Either testing finds the bugs, or the user finds the bugs.
       | 
       | If the user is a machine, use a machine to test the code. If the
       | user is a human, use a human.
       | 
       | The user does not know or care about unit tests.
       | 
        | The state space of any non-trivial algorithm is far greater than
       | can be sensibly enumerated.
       | 
       | Software is design - designing something inside out makes no
       | sense.
        
       | erdo wrote:
       | I do TDD in one specific case, and it's always a unit test. Very
        | occasionally I will need to write a function or a small class to
        | do
       | some complicated logic that I'm too lazy or stupid to work out
       | how to do. I do know exactly what the results should look like
       | though, including all the possible edge cases.
       | 
       | ( I think the last time I did this was for a point of sale
       | terminal I was working on, which needed a solution similar to the
       | change making problem https://en.m.wikipedia.org/wiki/Change-
       | making_problem )
       | 
        | Anyway, for those situations, I write a large number of test
       | cases, covering every reasonable scenario, plus a bunch of
       | unreasonable scenarios.
       | 
       | Then I write a half-assed implementation that fails on several
       | tests, and I keep hacking about until more of the tests pass.
       | Once they all pass, I stop. Even if at that point I have no idea
       | why that particular version of the code completely works.
       | 
        | It's nasty, I know, but sometimes it's the quickest way to a
        | robust implementation.
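        | 
        | As a rough illustration of that workflow (a Python sketch with
        | made-up names, not the actual point-of-sale code): write the
        | table of cases first, then poke at the implementation until they
        | all go green.
        | 
        |     def make_change(amount, coins):
        |         """Return a shortest list of coins summing to amount, or None."""
        |         best = {0: []}  # amount -> shortest coin list found so far
        |         for target in range(1, amount + 1):
        |             for coin in coins:
        |                 prev = best.get(target - coin)
        |                 if prev is not None and (target not in best or
        |                                          len(prev) + 1 < len(best[target])):
        |                     best[target] = prev + [coin]
        |         return best.get(amount)
        | 
        |     # The up-front cases: reasonable scenarios plus unreasonable ones.
        |     assert make_change(0, [1, 2, 5]) == []
        |     assert sorted(make_change(6, [1, 3, 4])) == [3, 3]  # greedy 4+1+1 is wrong
        |     assert make_change(7, [2, 4]) is None               # impossible amount
        |     assert len(make_change(63, [1, 5, 10, 25])) == 6    # 25+25+10+1+1+1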
        
       | Anon6747463 wrote:
       | TDD is climbing with ropes; it's useful in situations where you
       | don't know how to get all the way to the top but want to try
       | stuff from where you've gotten to without falling all the way to
       | the bottom.
        
         | pc86 wrote:
         | Isn't the argument TFA is making the opposite, though? That TDD
          | only really makes sense when you're working to a very "crisply
         | defined" specification?
        
           | eesmith wrote:
           | Yes, that's my interpretation.
           | 
           | This reminded me of when Ron Jeffries tried to use TDD to
           | write a Sudoku solver, but didn't manage to do it. The take-
           | home lesson was summarized at
           | https://www.infoq.com/news/2007/05/tdd-sudoku/ as "while TDD
           | may not be the best tool for inventing new algorithms, it may
           | very well be the best tool for applying those algorithms to
           | the problem at hand."
           | 
            | In this context, I think of "crisply defined" as being when
            | you know the algorithms and the input/output format, and
            | want to get things working together.
           | 
            | I still don't find TDD useful. Even in that case, I'm much
            | more of a spike-then-stabilize developer, with most of my
            | tests added at the end, followed by coverage analysis to
            | identify missing tests.
           | 
           | (And I don't believe for a minute the claims of TDD
           | supporters that TDD naturally leads to 100% coverage.)
        
             | pc86 wrote:
              | > _And I don't believe for a minute the claims of TDD
             | supporters that TDD naturally leads to 100% coverage._
             | 
             | I agree with you, but at the risk of No True Scotsman'ing
             | this, one of the TDD tenets I'm familiar with is that you
             | write the least amount of code necessary to pass a test. So
             | if you do it The Right Way, then you should be at 100% or
             | something very very close to that.
        
               | eesmith wrote:
               | Yes, you might start that way. Then over time you lose
               | it, unless you are actively doing coverage testing.
               | 
               | Here's the clearest example. For a while I had a code
               | base which supported both Python 2 and Python 3. I
               | dropped Python 2 support and removed the compatibility
               | layer.
               | 
               | I'm _still_ finding places where I have Python 2 code
                | paths. (Clearly I'm not using enough coverage testing.)
               | 
               | Here's another example. Suppose I have a single public
               | entry point, which internally calls a number of private
               | units. I've carefully tested only at that public entry
               | point so my test coupling isn't an issue.
               | 
               | I then realize that I can special case (say) n=0 early in
               | the public entry point. Now, a number of the private
               | units no longer need to handle the n=0 case.
               | 
               | Manual inspection during refactoring might catch all of
               | those code paths which are no longer used. I know I'm not
               | diligent enough to find all those cases. I've even had
               | times where I find an entire function is no longer being
               | called at all, because a refactoring removed the need to
               | have it.
               | 
               | Once you've had a few refactorings of a non-trivial code
               | base, even during its greenfield development phase,
               | you're almost certainly going to have dead code. Even
               | when using TDD.
               | 
               | No, I don't know what the coverage rate would be on a TDD
               | project which doesn't use code coverage.
               | 
               | But my experience tells me not to believe Kent Beck's
               | statement "TDD followed religiously should result in 100%
               | statement coverage".
        
       | thatswrong0 wrote:
       | Agree with the reasoning here, so as a result I almost never use
       | TDD: I pretty much always have to spike + iterate in order to
       | come to a reasonable and clean solution for whatever I'm working
        | on... no matter how much I've tried to spec things out exactly how
       | they should work, it always ends up being different. I'm also
       | just not great at abstract vs. hands on thinking.
       | 
       | On top of this, I've also come to find unit tests to generally
       | require more work and be less useful for identifying regressions
       | than integration tests for the work I do. Maybe this is just a
       | result of the kind of work I do though.
        
         | commandlinefan wrote:
         | > less useful for identifying regressions
         | 
         | I've observed the same, but having the ability to run any given
         | function in isolation is huge for diagnosing problems. Does
         | this function respond correctly to the correct input? Yes? Ok,
         | that's not where the problem is. Next.
        
         | bvirb wrote:
         | Same here. So much so we've basically just dropped "unit"
         | testing in favor of integration testing. I guess we still call
         | it TDD though.
         | 
          | I was actually surprised that what the author was describing was
         | somehow _not_ TDD. I thought it was pretty normal to try some
         | things out, make some decisions, and then return to codifying
         | those decisions in tests.
         | 
         | We do web app development, and 90% of our tests are browser-
         | based integration tests that test whether the given inputs
         | (usually forms) lead to the expected outputs (usually something
         | printed on a web page).
         | 
         | I wonder if this is more natural when you're mostly writing
         | integration tests?
        
           | leemcalilly wrote:
           | I find my system/integration tests to be the most valuable
           | because those test the behavior of my code, what the end user
           | cares about.
           | 
           | It often helps to start with the design first (even if just a
           | napkin sketch). Once I've done that I usually have enough
           | information to write system tests (which also forces me to
            | consider edge cases). Then I can fill in with unit tests
           | as needed as I write the code to get the system tests for
           | that feature passing.
        
           | jasonswett wrote:
           | I personally don't see any reason why the type of test
           | matters when doing TDD. As I see it you can do TDD with
           | integration tests just as much as you can do TDD with unit
           | tests. The only difference to me is that the work involved
           | might take on a bit of a different character. But that
           | doesn't make it not TDD to me.
        
           | pydry wrote:
           | I've found TDD can work pretty well with integration tests.
           | Often better.
           | 
            | I don't really get why the practice is so intrinsically
            | linked to the practice of writing unit tests.
           | 
           | Increasingly I'm starting to believe unit tests are a scam,
            | but TDD is going to stick around in one form or another
           | forever.
        
           | [deleted]
        
         | no_wizard wrote:
         | I think this highlights that the core of TDD is understated.
         | The core of TDD is encapsulated in the saying _Red, Green,
         | Refactor_
         | 
          | - Write the tests; they're going to fail because you have no
          | implementation
         | 
         | - Write the implementation, so that those tests are green
         | 
         | - Refactor to clean up the code, make it real nice
         | 
          | Rinse and repeat as needed, until you've settled on a fully
          | tested solution. If you do this, you are practicing TDD as
          | it's intended. I think the overall messaging around TDD, how
          | it gets talked about in industry, and how it's gotten promoted
          | (or not promoted, I guess?) makes it so confusing. The heart
          | of it is to rinse and repeat these 3 steps, always starting
          | with writing tests first, which validates that you
          | _understand what you're building_
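          | 
          | A minimal Python sketch of one such pass (a made-up example,
          | just to show the shape of the cycle):
          | 
          |     import re
          | 
          |     # Red: the test is written first and fails, because slugify
          |     # doesn't exist yet.
          |     def test_slugify():
          |         assert slugify("Hello, World!") == "hello-world"
          |         assert slugify("  already-a-slug  ") == "already-a-slug"
          | 
          |     # Green: the simplest implementation that makes it pass.
          |     def slugify(text):
          |         return "-".join(re.findall(r"[a-z0-9]+", text.lower()))
          | 
          |     # Refactor: rename, extract, simplify -- rerunning the test
          |     # after every change to stay green.
          |     test_slugify()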
         | 
          | What I have found in my decade+ of doing this is that most of
          | the time, when I run into team members who draw blanks or feel
          | that TDD is a roadblock, it's because one way or another they
          | don't have enough information about the requirements of the
          | work they're doing.
         | 
         | Not sure if this helps anyone or not, this has been my general
         | experience and may not capture every case, though I feel
         | confident enough that this is a shared experience that I hope
         | it brings another way to think about TDD in terms of
         | simplicity.
        
           | michaelpb wrote:
           | Yeah, I TDD / TFD as much as I can, much simpler. Even just
           | stubbing out a few simple smoke tests that I
           | red/green/refactor saves so much time. For me, I think I
           | spend less time writing code as well, since it forces me to
           | be clear to myself about requirements early on (even if the
           | story/ticket wasn't clear enough), and keeps a "check" on my
           | natural tendency to go down rabbit-holes before properly
           | speccing out non-essential features or tempting
           | optimizations. I ask myself: Does this help me pass the tests
           | and complete the feature? If no, then I need to sit on my
           | hands and not get all code-cowboy trigger-happy :)
        
           | eesmith wrote:
            | I've never liked how much is missing from the red-green-
            | refactor TDD description. Perhaps your experience can offer
            | insights?
           | 
            | Consider Martin's primes kata, at http://www.butunclebob.com/
            | ArticleS.UncleBob.ThePrimeFactors... . The final version is:
            | 
            |     public class PrimeFactors {
            |       public static List<Integer> generate(int n) {
            |         List<Integer> primes = new ArrayList<Integer>();
            |         for (int candidate = 2; n > 1; candidate++)
            |           for (; n % candidate == 0; n /= candidate)
            |             primes.add(candidate);
            |         return primes;
            |       }
            |     }
           | 
            | In red-green-refactor TDD, where do I add tests that are
           | expected to pass?
           | 
           | In this case, boundary analysis says that if the function
           | takes an int, then I should include tests for negative
           | numbers, and tests for large values, like 2^31-1, which is
           | MAXINT and also a Mersenne prime.
           | 
           | (Neither of these are in Martin's tests, which only test 1,
           | 2, 3, 4, 6, 8, and 9.)
           | 
           | When should I add the test for 2^31-1? Your "Write the test"
           | says we should only write tests which will fail because it
           | has no implementation, but in this case we expect it to pass
           | because we have an implementation. Do we not write that test?
           | 
           | Which leads to my issue with the "refactor" step of "red-
           | green-refactor."
           | 
           | Suppose you add that test for MAXINT. It takes about 5
            | seconds to run because it does >2.1B modulo tests.
           | 
            | Implicitly, TDD tests are supposed to be fast. Not 5
           | seconds per test. Or the spec might explicitly require (say)
           | a 1ms execution time, or you might find that system tests
           | fail because this algorithm is too slow.
           | 
           | There are any number of faster factoring methods, as
           | Eratosthenes well knew, so pick one and implement it.
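            | 
            | For instance (a Python sketch, not part of Martin's kata),
            | trial division only up to sqrt(n) factors 2^31-1 in roughly
            | 46,000 iterations instead of >2.1B:
            | 
            |     def prime_factors(n):
            |         factors = []
            |         candidate = 2
            |         while candidate * candidate <= n:
            |             while n % candidate == 0:
            |                 factors.append(candidate)
            |                 n //= candidate
            |             candidate += 1
            |         if n > 1:              # whatever remains is itself prime
            |             factors.append(n)
            |         return factors
            | 
            |     assert prime_factors(2**31 - 1) == [2147483647]  # fast now
            |     assert prime_factors(360) == [2, 2, 2, 3, 3, 5]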
           | 
           | Is this in the "refactor" step? Technically "Substitute
           | Algorithm" is one of Fowler's refactorings, so yes.
           | 
            | But Fowler's refactorings are meant to make things cleaner
            | and easier to understand, not wholesale replacements with
            | additional complexity. In the discussions I've seen, the
           | refactor step in red-green-refactor starts and ends with the
           | same tests.
           | 
           | While a more complicated implementation may have its own set
           | of special cases to consider. (For example, a complex sorting
           | method like Timsort needs more tests than quicksort to cover
           | all the code paths.)
           | 
           | The descriptions of "red, green, refactor" TDD I've seen
           | completely ignore these issues of when to add additional
           | tests you expect to pass, and how to refactor for purposes
           | other than "make it real nice."
           | 
           | I agree that starting with MAXINT would immediately 'validate
           | that you understand what you're building.' But most TDD
           | examples seem to start with the easy cases first, not the
           | hardest. They teach an incremental design approach where
           | experience from the easy cases helps progress towards the
           | final solution. My experience is that approach can lead to an
           | implementation which requires a design methodology more
           | powerful than red-green-refactor to resolve.
        
           | thatswrong0 wrote:
           | > Write the Test, they're going to fail cause you have no
           | implementation
           | 
           | But this is where I falter every time: the test necessarily
           | depends on the implementation. I could write a test for my
           | first pass at a function... but then if I decide I need to
           | actually split that function into two or go for a different
           | approach entirely, then I have to basically scrap that test.
           | And then I've just created a bunch of friction that slows
           | down my development process.. for what gain?
           | 
           | And to your point, yes, it may have to do with the fact that
           | often my requirements are amorphous and I discover them as I
           | go. For example, say a designer wants a specific behavior,
           | but then I realize it doesn't work in an edge case that we
           | will probably hit, and fixing that edge case will take twice
           | as much time. So then I work with them to find a less time
           | consuming compromise -> bam, new implementation, new tests.
           | 
           | Or maybe it's not a user facing behavior, and I realize after
           | 10 hours there's a way simpler way of doing the thing I want.
           | Same thing -> new tests.
           | 
            | Once I've got the basic code layout in a satisfying state,
            | only then do I feel comfortable starting tests and ensuring
            | I cover most or all of my conditional branches -> success
            | and error cases.
           | 
           | I feel like I'm missing something
        
             | no_wizard wrote:
             | > I could write a test for my first pass at a function...
             | but then if I decide I need to actually split that function
             | into two or go for a different approach entirely, then I
             | have to basically scrap that test.
             | 
             | Then you're writing a function coupled to the test, not
             | writing logic you can plug into the test to reasonably
             | verify it works as intended.
             | 
             | That's the bit people miss I think. Tests should reflect
             | how the code is consumed, and test for that (inputs and
             | outputs, more or less) not for implementation details. Your
              | test shouldn't care if it's 1 function or 7.
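              | 
              | A tiny hypothetical Python sketch of the difference: the
              | test only touches the public entry point, so splitting the
              | internals later doesn't break it.
              | 
              |     # Test of behaviour: inputs and outputs only.
              |     def test_normalize_order():
              |         order = {"qty": "3", "price": "2.50"}
              |         assert normalize_order(order) == {
              |             "qty": 3, "price": 2.5, "total": 7.5}
              | 
              |     # First pass: one function. A later refactor can split
              |     # this into helpers without touching the test above.
              |     def normalize_order(order):
              |         qty, price = int(order["qty"]), float(order["price"])
              |         return {"qty": qty, "price": price, "total": qty * price}
              | 
              |     test_normalize_order()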
        
               | saurik wrote:
               | But the point was that to know how the function will be
               | consumed before you write it implies you fully specified
               | it ahead of time without the benefit of knowing how--or
               | frankly even if--it will work, which (to me) is
               | backwards.
        
               | nybble41 wrote:
               | Yes, the point of TDD is to force you to think about what
               | your requirements are--and how you will prove that they
               | have been met--from the users' (and thus the tests')
               | point of view before writing the code.
               | 
               | If you want to take a more exploratory approach and make
               | up the requirements as you go along, fine, but that won't
               | look anything like TDD.
        
       | amriksohata wrote:
       | Any surveys done worldwide as to how many people do TDD, how many
       | just wrap in tests afterwards, and how many don't unit test at
        | all? At different N-tier levels?
        
       | kevinskii wrote:
       | No matter how carefully and thoroughly we design a system up
        | front, we almost always discover more elegant solutions once we
        | start writing the actual code. I suspect this is true for nearly
        | all non-trivial systems. If we were to fully implement every
        | unit test up front we would usually end up wasting a lot of
        | time.
        
         | oweiler wrote:
         | What you describe is not TDD.
        
           | convolvatron wrote:
            | No, I took this as a good argument for why writing tests up
            | front is a really poor idea if you're exploring the space to
            | find out what a good API might look like.
        
             | isbadawi wrote:
             | I think the person you're replying to is saying that "fully
             | implement every unit test up front" does not describe TDD.
             | With TDD you continuously alternate between writing tests
             | and writing production code. It's not like you write a
             | whole test suite up front.
        
             | webmobdev wrote:
             | Everything can be improved. At what point do you stop?
             | Scope creep and gold-plating ...
        
       | LouisSayers wrote:
       | Working on my own project as a solo dev, I hardly ever write
       | tests.
       | 
        | What I've learnt over the years is that tests lock your code in,
        | making it hard to adapt (you could say making you less agile).
       | 
       | When I'm smashing out new code and I haven't established patterns
       | then it doesn't make sense to lock that new code in.
       | 
       | Once I do have a pattern going then I might add tests, but it's
       | not for the sake of test coverage or to gloat that I have tests,
       | it's purely because I actually care about making sure those bits
       | of the code work as intended and that the edge cases are covered.
       | 
       | For these tests I do indeed do TDD.
       | 
        | Otherwise it's a trade-off - would I rather be closer to a
        | feature people can use, or would I rather be slogging away
        | writing and maintaining tests?
       | 
        | That being said, when I come to the point of adding more people
        | to the team, tests will become much more important.
       | 
       | The way I see it though, not having tests and having people
       | complaining about it means that I've done something right.
        
       | jbverschoor wrote:
       | I think people confuse TDD with just writing tests
        
       | cjvirtucio wrote:
       | I find tests more helpful when the implementation is stable. It's
       | a little tedious to re-write test cases as unknowns come up.
        
       | cgrealy wrote:
       | The main value in TDD is not when you write the initial
       | implementation. Sure, it can help, but most devs are competent
       | enough to implement a function/class/whatever and have it do what
       | the programmer thinks it will do.
       | 
       | The value from TDD is documenting those assumptions. "The db will
       | always be connected before this class is used", "This function
       | assumes the user is authenticated" and so on.
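        | 
        | As a hypothetical Python sketch of what documenting one of those
        | assumptions can look like (names made up, not from any real
        | codebase):
        | 
        |     import pytest
        | 
        |     class AnonymousUser:
        |         is_authenticated = False
        | 
        |     def export_report(user):
        |         # The assumption being documented: callers authenticate first.
        |         if not user.is_authenticated:
        |             raise PermissionError("export_report assumes an authenticated user")
        |         return "report.csv"
        | 
        |     # The test exists mostly to state that assumption for the next reader.
        |     def test_export_report_requires_an_authenticated_user():
        |         with pytest.raises(PermissionError):
        |             export_report(user=AnonymousUser())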
       | 
       | Then in 6 months, a year or even 5, when you or someone else
       | comes back to this code, you can be reasonably certain changes to
       | it will break tests, because the tests tell you how the REST of
       | the codebase is using this.
       | 
       | Of course, that assumes a decent shelf life for the code. If you
        | can be reasonably certain you're going to hand this over and never
       | touch it again, don't bother with the tests. And delete the repo
       | while you're at it.... (not comfortable with that, are we? :D )
        
         | monocasa wrote:
         | TDD is orthogonal to having tests.
        
           | simplify wrote:
           | It's not orthogonal as you can't have TDD without tests.
           | Perhaps you meant to say "you can have tests without TDD".
        
             | monocasa wrote:
             | Orthogonal as in the presence of tests representing current
             | contract assumptions gives you zero information as to
             | whether TDD was followed.
        
               | pydry wrote:
               | There are certain kinds of tests that give off a distinct
               | "TDD wasnt used here" odor.
        
       | DerArzt wrote:
        | I'd recommend folks take the time to go and read the O.G. book
       | on TDD[1] to really get a feel for what it was intended to be
       | originally and form their opinion from there. Most of the time
       | when I see people talk about TDD they are either cargo cultists
       | or uninformed people.
       | 
       | [1]
       | https://www.goodreads.com/book/show/387190.Test_Driven_Devel...
        
         | cloogshicer wrote:
         | Hey, thanks for your comment, just curious, do you think the
         | author of the article is uninformed? If so, could you briefly
         | explain how so?
         | 
         | Genuinely curious, since I generally agree with what they said.
         | However, I've been recommended that book before with the same
         | sentiment and am quite curious about opposing opinions, just
         | haven't found the time to read it yet.
        
         | MonaroVXR wrote:
          | I will check out the link; it's difficult to know who to believe.
        
           | DerArzt wrote:
           | The book is worth reading, even if you don't end up adding
           | TDD to your normal workflow. One of the things that stood out
           | to me about it is that Kent mentions that you should throw
           | away your lower level tests that you used while you were
           | developing and exploring the problem. I have yet to see
           | anyone ever mention that little tidbit.
        
       | [deleted]
        
       | Jenk wrote:
       | I find TDD helps me mind-map out my implementations. It's a bit
       | hand-wavy, but generally I'll use TDD when things are complex and
       | I need to clear my head of the bigger picture in favour of
        | focussing on the smaller picture (i.e., lean on the "next
       | interesting test" mantra) to let the tests lead me through the
       | path.
       | 
       | Will I use TDD for basic things? No. I'll very probably write
       | tests for them though just to provide regression coverage.
        
       | johnklos wrote:
       | You know, not everyone thinks TDD has to do with development:
       | 
       | https://en.wikipedia.org/wiki/Telecommunications_device_for_...
        
         | lbhdc wrote:
          | I feel like the average HN reader would probably assume it's
         | about testing.
        
         | exdsq wrote:
         | "When I do Telecommunications device for the deaf and when I
         | don't" doesn't make sense though which helps
        
           | johnklos wrote:
           | Ummm... There are times when using TDD makes sense over
           | trying to talk with automated menus. TDD can save lots of
           | time with certain menu systems.
           | 
           | Use your imagination some time.
        
       | bvirb wrote:
       | Is TDD really about never figuring out unknowns using untested
        | code? What about using diagrams? Or whiteboarding?
       | 
       | Test first design makes a lot of sense, but when you're unsure
       | about the design using code to figure it out seems just as
       | reasonable as any other method.
       | 
        | If not, what's left? Do everything waterfall?
        
         | ipaddr wrote:
          | I would like to see waterfall make a comeback. No one agiles
          | alone, but many waterfall.
        
           | Jtsummers wrote:
           | Waterfall does _not_ need to make a comeback, it never died.
            | It's still a gross and stupid process for anything but
           | trivial projects or well-understood (by the developers)
           | domains. If you're working on a large scale project, it's one
           | of the worst ways to work. The next worst is to just type on
           | a keyboard and hope you manage to write code and somehow
           | trigger the build and deploy commands.
        
           | duped wrote:
           | Waterfall is alive and well, most places just refer to it as
           | something else like "agile" or "scrum."
        
             | thibauts wrote:
              | When tests amount to specs, waterfall tends to be referred
             | to as TDD.
        
               | Jtsummers wrote:
               | No, because TDD (at least as defined) does not have you
               | spend months to years up front _just_ writing tests. You
               | write them at the same time as (well, in strict TDD _just
               | prior to_ ) writing the code and (possibly) other
               | artifacts. Waterfall says, "Spend a stupidly long amount
               | of time coming up with detailed requirements and specs
               | and a hyper-detailed, and likely wrong, development and
               | test plan. Then don't receive any feedback until you
               | test, which is verification and conducted after the
               | development is done, and deliver, which is validation."
               | TDD at least has you do the verification part throughout
               | the development process.
        
       | rileymat2 wrote:
       | One thing I do not see mentioned is that tests let me try out the
       | interfaces I am building as a consumer of them earlier than I
       | would otherwise. So even in exploration it can be useful,
       | sometimes.
        
       ___________________________________________________________________
       (page generated 2022-02-17 23:02 UTC)