[HN Gopher] I have complicated feelings about TDD
       ___________________________________________________________________
        
       I have complicated feelings about TDD
        
       Author : jwdunne
       Score  : 201 points
       Date   : 2022-08-18 13:28 UTC (9 hours ago)
        
 (HTM) web link (buttondown.email)
 (TXT) w3m dump (buttondown.email)
        
       | n4jm4 wrote:
       | TDD's contribution to software quality scrapes the bottom of the
       | barrel. Attention to detail in scalable design, formal
       | verification, fuzzing, and mutation testing offer deeper
       | guarantees of successful operation. But of course, the American
       | ideal "make money" is worn proudly on the rim of management
       | noses. It's the wrong prescription, but they're too busy counting
        | their bills to care. This is especially evident in
        | cybersecurity, where the posture amounts to silent prayer that
        | no one stumbles across their JRE 1.0s, their Windows XPs, and
        | Google's latest attempt at a programming language with buffer
        | overflows by design -- batteries included.
        
       | righttoolforjob wrote:
        | TDD is really, really bad. I won't even add arguments. TDD is
        | typically sold by Agilists, most of whose content deserves to
        | go in the same trash bin. Most of these people have never
        | written code for real. Their opinions are worthless. Thanks,
        | bye.
        
       | ImPleadThe5th wrote:
        | My personal mentality about TDD is that it is an unreachable
        | ideal. Striving for it puts you on a good path, but business
        | logic is rarely so straightforward.
       | 
        | If you are lucky enough to be writing code in a way that each
        | unit is absolutely clear before you start working, awesome,
        | you've got it. But in business-logic-land things rarely end up
        | this clean.
       | 
       | Personally, I program the happy path then write tests and use
       | them to help uncover edge cases.
        
         | radus wrote:
         | > I program the happy path then write tests and use them to
         | help uncover edge cases.
         | 
         | This approach resonates with me as well. I would add that
         | writing tests when investigating bugs or deviations from
         | expected behavior is also useful.
        
       | madsbuch wrote:
       | To me, it really depends:
       | 
        | 1. Writing frontend code -- I've left testing altogether. I'd
        | never hope to keep up with the pace.
        | 
        | 2. Writing APIs -- rudimentary testing that at least catches
        | when I introduce regressions.
        | 
        | 3. Writing smart contracts -- magnitudes more tests than
        | actual code.
        
       | PheonixPharts wrote:
       | The trouble with TDD is that quite often we don't really know how
       | our programs are going to work when we start writing them, and
       | often make design choices iteratively as we start to realize how
       | our software should behave.
       | 
        | This ultimately means what most programmers intuitively know:
        | it's impossible to write adequate test coverage up front
        | (since we don't even really know how we want the program to
        | behave), or worse, test coverage gets in the way of the
        | iterative design process. In theory TDD should work as part of
        | that iterative design, but in practice it means a growing
        | collection of broken tests and tests for parts of the program
        | that end up being completely irrelevant.
       | 
       | The obvious exception to this, where I still use TDD, is when
        | implementing a well-defined spec. Anytime you need to build a
        | library to match an existing protocol, a well-documented API,
        | or even a non-trivial mathematical function, TDD is a
        | tremendous boon. But this is only because the program behavior
        | is well defined.
       | 
        | The times I've used TDD where it makes sense, it's been a
        | tremendous productivity increase. If you're implementing some
        | standard, you can basically write the tests to confirm you
        | understand how the protocol/API/function works.
       | 
       | Unfortunately most software is just not well defined up front.
        
         | jonstewart wrote:
         | It's funny, because I feel like TDD -- not just unit-testing,
         | but TDD -- is most helpful when things aren't well-defined. I
         | think back to "what's the simplest test that could fail?" and
         | it helps me focus on getting some small piece done. From there,
         | it snowballs and the code emerges. Obviously it's not always
         | perfect, and something learned along the way spurs
         | refactoring/redesign. That always strikes me as a natural
         | process.
         | 
         | In many ways I guess I lean maximalist in my practices, and
         | find it helpful, but I'd readily concede that the maximalist
         | advocates are annoying and off-putting. I once had the
         | opportunity to program with Ward Cunningham for a weekend, and
         | it was a completely easygoing and pragmatic experience.
        
         | shados wrote:
         | The big issue I see when people have trouble with TDD is really
         | a cultural one and one around the definition of tests,
         | especially unit tests.
         | 
         | If you're thinking of unit tests as the thing that catches bugs
         | before going to production and proves your code is correct, and
         | want to write a suite of tests before writing code, that is far
         | beyond the capabilities of most software engineers in most
         | orgs, including my own. Some folks can do it, good for them.
         | 
          | But if you think of unit tests as a way to make sure
          | individual little bits of your code work as you're writing
          | them (that is, you're testing "the screws" and "the legs" of
          | the table, not the whole table), then it's quite simple and
          | really does save time, and you certainly do not need full
          | specs or even to know what you're doing.
         | 
         | Write 2-3 simple tests, write a function, write a few more
         | tests, write another function, realize the first function was
         | wrong, replace the tests, write the next function.
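          |
          | Concretely, one micro-cycle might look like this (a sketch,
          | assuming JUnit 5; the names are invented for illustration):
          |
          |     import org.junit.jupiter.api.Test;
          |     import static org.junit.jupiter.api.Assertions.*;
          |
          |     class SlugifyTest {
          |         // "the screw": simplest thing that passes this
          |         // test; refined or replaced as later tests demand
          |         static String slugify(String s) {
          |             return s.toLowerCase().replace(' ', '-');
          |         }
          |
          |         @Test
          |         void lowercasesAndHyphenates() {
          |             assertEquals("hello-world",
          |                          slugify("Hello World"));
          |         }
          |     }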
         | 
         | You need to test your code anyway and type systems only catch
         | so much, so even if you're the most agile place ever and have
         | no idea how the code will work, that approach will work fine.
         | 
         | If you do it right, the tests are trivial to write and are very
         | short and disposable (so you don't feel bad when you have to
         | delete them in the next refactor).
         | 
         | Do you have a useful test suite to do regression testing at the
         | end? Absolutely not! In the analogy, if you have tests for a
         | screw attaching the leg of a table, and you change the type of
         | legs and the screws to hook them up, of course the tests won't
         | work anymore. What you have is a set of disposable but useful
         | specs for every piece of the code though.
         | 
         | You'll still need to write tests to handle regressions and
         | integration, but that's okay.
        
           | 0x457 wrote:
           | Many people have a wrong perception of TDD. The main idea is
           | to break a large, complicated thing into many small ones
           | until there is nothing left, like you said.
           | 
           | You're not supposed to write every single test upfront, you
           | write a tiny test first. Then you add more and refactor your
           | code, repeat until there is nothing left of that large
           | complicated thing you were working on.
           | 
            | There are also people who test stupid things and 3rd-party
            | code in their tests, and they either get fatigued by it
            | and/or think their tests are well written.
        
             | pjmlp wrote:
              | How do you break down those tests for a ray tracing
              | algorithm on the GPU?
        
           | Scarblac wrote:
           | And I think most people who don't write tests in code work
           | that way anyway, just manually -- they F5 the page, or run
           | the code some other way.
           | 
           | But the end result of writing tests is often that you create
           | a lot of testing tied to what should be implementation
           | details of the code.
           | 
            | E.g. to write "more testable" code, some people advocate
            | making very small functions. But the public API doesn't
            | change. So if you test only the internal functions, you're
            | just making it harder to refactor.
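            |
            | A sketch of the alternative (invented example): pin down
            | the public function, not the helpers it delegates to, so
            | the internals stay free to change:
            |
            |     import org.junit.jupiter.api.Test;
            |     import static org.junit.jupiter.api.Assertions.*;
            |
            |     class PriceTest {
            |         // public API: this is what the test pins down
            |         static int totalCents(int[] items) {
            |             int sum = 0;
            |             for (int c : items) sum += roundDown(c);
            |             return sum;
            |         }
            |
            |         // internal helper: free to inline or rename
            |         // without touching any test
            |         private static int roundDown(int cents) {
            |             return (cents / 5) * 5;
            |         }
            |
            |         @Test
            |         void totalsRoundedItems() {
            |             // 12 rounds down to 10, 19 to 15
            |             assertEquals(25, totalCents(new int[] {12, 19}));
            |         }
            |     }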
        
             | cogman10 wrote:
             | > But the end result of writing tests is often that you
             | create a lot of testing tied to what should be
             | implementation details of the code.
             | 
             | This is the major issue I have with blind obedience to TDD.
             | 
             | It often feels like the question of "What SHOULD this be
             | doing" isn't asked and instead what you end up with is a
             | test suite that answers the question "What is this
             | currently doing?"
             | 
             | If refactoring code causes you to refactor tests, then your
             | tests are too tightly coupled to implementation.
             | 
             | Perhaps the missing step to TDD is deleting or refactoring
             | the test at the end of the process so you better capture
             | intent rather than the flow of consciousness.
             | 
             | Example: I've seen code that had different code paths to
             | send in a test "logger" to ensure the logger was called at
             | the right locations and said the right messages. That made
              | it difficult to add new information to the logger or add
              | new logger messages. And for what?
        
         | eitally wrote:
         | I still remember a project (I was the eng director and one of
         | my team leads did this) where my team lead for a new dev
         | project was given a group of near-shore SWEs + offshore SQA who
         | were new to both the language & RDBMS of choice, and also
         | didn't have any business domain experience. He decided that was
         | exactly the time to implement TDD, and he took it upon himself
         | to write 100% test coverage based on the approved specs, and
         | literally just instructed the team to write code to pass the
         | tests. They used daily stand-ups to answer questions, and
         | weekly reviews to assess themes & progress. It was slow going,
         | but it was a luxurious experience for the developers, many of
         | whom were using pair programming at the time and now found
         | themselves on a project where they had a committed & dedicated
         | senior staffer to actively review their work and coach them
         | through the project (and new tools learnings). I had never
         | allowed a project to be run like that before, but it was one
         | where we had a fairly flexible timeline as long as periodic
         | deliverables were achieved, so I used it as a kind of science
         | project to see how something that extreme would fare.
         | 
         | The result was that 1) the devs were exceptionally happy, 2)
         | the TL was mostly happy, except with some of the extra forced
         | work he created for himself as the bottleneck, 3) the project
         | took longer than expected, and 4) the code was SOOOOO readable
         | but also very inefficient. We realized during the project that
         | forcing unit tests for literally everything was also forcing a
         | breaking up of methods & functions into much smaller discrete
         | pieces than would have been optimal from both performance &
         | extensibility perspectives.
         | 
         | It wasn't the last TDD project we ran, but we were far more
         | flexible after that.
         | 
         | I had one other "science project" while managing that team,
         | too. It was one where we decided to create an architect role
         | (it was the hotness at that time), and let them design
         | everything from the beginning, after which the dev team would
         | run with it using their typical agile/sprint methodology. We
         | ended up with the most spaghetti code of abstraction upon
         | abstraction, factories for all sorts of things, and a codebase
         | that became almost unsupportable from the time it was launched,
         | necessitating v2.0 be a near complete rewrite of the business
         | logic and a lot of the data interfaces.
         | 
          | The lessons I learned from those projects were that it's
         | important to have experienced folks on every dev team, and that
         | creating a general standard that allows for flexibility in
         | specific architectural/technical decisions will result in
         | higher quality software, faster, than if one is too
         | prescriptive (either in process or in architecture/design
         | patterns). I also learned that there's no such thing as too
         | much SQA, but that's for a different story.
        
         | jwarden wrote:
         | I wish I could remember who wrote the essay with the idea of
         | tests as investment in _protecting_ functionality. When after a
         | bit of experimentation or iteration you think you have figured
         | out more or less one part of how your software should behave,
         | then you want to protect that result. It is worth investing in
          | writing and maintaining a test to make sure you don't
          | accidentally break this functionality.
         | 
          | Functionality based on a set of initial specs and a hazy
          | understanding of the actual problem you are trying to solve,
          | on the other hand, might not be worth investing in
          | protecting.
        
         | f1shy wrote:
          | This is exactly my problem with TDD. Note this problem is
          | not only in SW. For any development you do, you could start
          | with designing tests. You can do it for some HW, for sure.
          | If you want to apply TDD to any other development, you see
          | pretty fast what the problem is: you are going to design
          | lots of tests that in the end will not be used. A total
          | waste. Also, with TDD the focus is often on quantity of
          | tests and not so much on quality.
         | 
          | What I find is a much, much better approach, which I call
          | "detached test development" (DTD). The idea is: 2 separate
          | teams get the requirements; one team writes code, the other
          | writes tests. They do not talk to each other! Only when a
          | test fails do they have to discuss: is the requirement not
          | clear enough? What is the part that A thought about, but not
          | B? Assignment of tests and code can be mixed, so a team
          | makes code for requirements 1 through 100, and tests for 101
          | to 200, or something like that. I had very, very good
          | results with such an approach.
        
           | EddySchauHai wrote:
           | > Also with TDD often it will be centered in quantity of
           | tests and not so much quality.
           | 
            | 100%. Metrics of quality are really, really hard to define
            | in a way that is both productive and not gamed by
            | engineers.
           | 
           | > What I find is much much better approach is what I call
           | "detached test development" (DTD)
           | 
           | I'm a test engineer and some companies do 'embed' an SDET
           | like the way you mention within a team - it's not quite that
           | clear cut, they can discuss, but it's still one person
           | implementing and another testing.
           | 
           | I'm always happy to see people with thoughts on testing as a
           | core part of good engineering rather than an
           | afterthought/annoyance :)
        
           | ivan_gammel wrote:
            | What you described is quite a common role for a QA
            | automation team, but it does not really replace TDD. A
            | separate team working on a test can do it only by relying
            | on a remote contract (e.g. API, UI or database schema);
            | they cannot test local contracts like the public interface
            | of a class, because that would require that code to
            | already be written. In TDD you often write the code AND
            | the test at the same time, integrating the test and the
            | code at compile time.
        
         | vrotaru wrote:
          | Even for something which is well defined up front, this can
          | be of dubious value. Converting a positive integer less than
          | 3000 to Roman numerals is a well-defined task. Now, if you
          | try to write such a program using TDD, what do you think you
          | will end up with?
          | 
          | Try it. Write a test for 1, and an implementation which
          | passes that test, then for 2, and so on.
         | 
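          | The first few cycles go something like this (a sketch,
          | assuming JUnit 5 and Java 11+):
          |
          |     import org.junit.jupiter.api.Test;
          |     import static org.junit.jupiter.api.Assertions.*;
          |
          |     class RomanTest {
          |         // the "simplest thing that passes" after three
          |         // cycles -- and it falls apart at the test for 4
          |         static String convert(int n) {
          |             return "I".repeat(n);
          |         }
          |
          |         @Test void one()   { assertEquals("I",   convert(1)); }
          |         @Test void two()   { assertEquals("II",  convert(2)); }
          |         @Test void three() { assertEquals("III", convert(3)); }
          |     }
          |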
          | Below is something written without any TDD (in Java):
          |
          |     private static String convert(int digit, String one,
          |                                   String half, String ten) {
          |         switch (digit) {
          |         case 0: return "";
          |         case 1: return one;
          |         case 2: return one + one;
          |         case 3: return one + one + one;
          |         case 4: return one + half;
          |         case 5: return half;
          |         case 6: return half + one;
          |         case 7: return half + one + one;
          |         case 8: return half + one + one + one;
          |         case 9: return one + ten;
          |         default:
          |             throw new IllegalArgumentException(
          |                 "Digit out of range 0-9: " + digit);
          |         }
          |     }
          |
          |     public static String convert(int n) {
          |         if (n > 3000) {
          |             throw new IllegalArgumentException(
          |                 "Number out of range 0-3000: " + n);
          |         }
          |         return convert(n / 1000, "M", "", "") +
          |                convert((n / 100) % 10, "C", "D", "M") +
          |                convert((n / 10) % 10, "X", "L", "C") +
          |                convert(n % 10, "I", "V", "X");
          |     }
        
         | gregmac wrote:
         | > it's impossible to write adequate test coverage up front
         | 
         | I'm not sure what you mean by this. Why are the tests you're
         | writing not "adequate" for the code you're testing?
         | 
         | If I read into this that you're using code coverage as a metric
         | -- and perhaps even striving for as close to 100% as possible
         | -- I'd argue that's not useful. Code coverage, as a goal, is
         | perhaps even harmful. You can have 100% code coverage and still
         | miss important scenarios -- this means the software can still
         | be wrong, despite the huge effort put into getting 100%
         | coverage and having all tests both correct and passing.
        
         | mcv wrote:
         | Exactly. I use TDD in situations where it fits. And when it
         | does, it's absolutely great. But there are many situations
         | where it doesn't fit.
         | 
         | TDD is not a silver bullet, it's one tool among many.
        
         | jiggawatts wrote:
         | This is precisely my experience also. I loved TDD when
         | developing a parser for XLSX files to be used in a PowerShell
         | pipeline.
         | 
         | I created dozens of "edge case" sample spreadsheets with
         | horrible things in them like Bad Strings in every property and
         | field. Think control characters in the tab names, RTL Unicode
         | in the file description, etc...
         | 
         | I found several bugs... in Excel.
        
         | Alex3917 wrote:
         | > The trouble with TDD is that quite often we don't really know
         | how our programs are going to work when we start writing them
         | 
         | Even if you know exactly how the software is going to work, how
         | would you know if your test cases are written correctly without
         | having the software to run them against? For that reason alone,
         | the whole idea of TDD doesn't even make sense to me.
        
           | regularfry wrote:
           | Because the test you've just written (and only that test)
           | fails in the way you expect when you run the suite.
        
             | twic wrote:
             | And then passes when you write the code that you think
             | should make it pass. You do catch bugs in the tests, as
             | well as bugs in the implementation!
        
           | rcxdude wrote:
           | One reason why TDD can be a good idea is the cycle involves
           | actually testing the test cases: if you write the test, run
           | it and see that it fails, then write the code, then run the
           | test again and see that it succeeds, you can have some
           | confidence the test is actually testing something (not
            | necessarily the right thing, but at least something).
            | Whereas if you're writing the test after writing the code
            | and expect that it will succeed the first time you run it,
            | it's quite possible to write a test which doesn't actually
            | test anything and will always succeed. (There are other
            | techniques like mutation testing which may get you a more
            | robust indication that your tests actually depend on the
            | state of your software, but I've rarely seen them used in
            | practice.)
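            |
            | A sketch of that failure mode (invented example): the
            | first test below was written after the code and never
            | seen red, so it can never fail; the second was seen
            | failing first, so it demonstrably depends on the
            | implementation:
            |
            |     import org.junit.jupiter.api.Test;
            |     import static org.junit.jupiter.api.Assertions.*;
            |
            |     class VacuousTest {
            |         static boolean isAdult(int age) {
            |             return age >= 18;
            |         }
            |
            |         // calls the code but asserts nothing about the
            |         // result: green forever, tests nothing
            |         @Test
            |         void looksLikeATest() {
            |             isAdult(17);
            |         }
            |
            |         // observed red before isAdult existed, then green
            |         @Test
            |         void seventeenIsNotAdult() {
            |             assertFalse(isAdult(17));
            |         }
            |     }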
        
             | pmarreck wrote:
             | This is a great point. You're literally testing the test
             | validity as you go.
        
             | dgunay wrote:
             | Good point. Sometimes I cheat and implement before I test,
             | but often when I do that I'll comment & uncomment the code
             | that actually does the thing so I can see the tests go from
             | red to green.
             | 
             | Have been meaning to try mutation testing as a way of
             | sussing out tests that cover lines but don't actually test
             | behavior.
        
           | Byamarro wrote:
            | Most tests shouldn't be hard to read and reason about, so
            | it shouldn't be a problem. In the case of more complex
            | tests, you can do what you would do during iterative
            | development -- debug the tests and code to figure out
            | what's wrong -- nothing changes here.
        
         | grepLeigh wrote:
         | > Unfortunately most software is just not well defined up
         | front.
         | 
         | This is true, and I think that's why TDD is a valuable exercise
         | to disambiguate requirements.
         | 
          | You don't need to take an all-or-nothing approach. Even if
          | you clarify 15-20% of the requirements enough to write tests
          | before code, that's a great place to begin iterating on the
          | murky 80%.
        
         | archibaldJ wrote:
         | Thus spake the Master Programmer: "When a program is being
         | tested, it is too late to make design changes."
         | 
         | - The Tao of Programming (1987)
        
         | agumonkey wrote:
          | I remember early UML courses (based on pre-Java / OO
          | languages). They were all about modules and coupling
          | dependencies: trying to keep coupling low, and the modules
          | not too rigidly defined. It seems the spirit behind this (at
          | least the only one that makes sense to me) is that you don't
          | know yet, so you want to avoid coupling hard early, leaving
          | room for low-cost adaptation while you discover how things
          | will be.
        
           | ThalesX wrote:
           | Whenever I start a greenfield frontend for someone they think
           | I'm horrible in the first iteration. I tend to use style
           | attributes and just shove CSS in there, and once I have
           | enough things of a certain type I extract a class. They all
           | love the result but distrust the first step.
        
         | [deleted]
        
         | eyelidlessness wrote:
         | > The trouble with TDD is that quite often we don't really know
         | how our programs are going to work when we start writing them,
         | and often make design choices iteratively as we start to
         | realize how our software should behave.
         | 
         | This is a trouble I often see expressed about static types. And
         | it's an intuition I shared before embracing both. Thing is,
         | embracing both helped me overcome the trouble in _most cases_.
         | 
         | - If I have a type interface, there I have the shape of the
         | definition up front. It's already beginning to help verify the
         | approach that'll form within that shape.
         | 
         | - Each time I write a failing test, there I have begun to
         | define the expected behavior. Combined with types, this also
         | helps verify that the interface is appropriate, as the article
         | discusses, though not in terms of types. My point is that it's
         | _also_ verifying the initial definition.
         | 
         | Combined, types and tests _are_ (at least a substantial part
         | of) the definition. Writing them up front is an act of defining
         | the software up front.
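          |
          | In code (a made-up Java sketch of the idea, assuming JUnit
          | 5): the interface and the test are written first and
          | together act as the definition; the implementation comes
          | last, just to turn the spec green.
          |
          |     import org.junit.jupiter.api.Test;
          |     import static org.junit.jupiter.api.Assertions.*;
          |     import java.util.HashSet;
          |     import java.util.Set;
          |
          |     class RateLimiterSpec {
          |         // the type: the shape of the definition
          |         interface RateLimiter {
          |             boolean tryAcquire(String clientId);
          |         }
          |
          |         // the test: the expected behavior
          |         @Test
          |         void secondRequestInWindowIsRejected() {
          |             RateLimiter limiter = new OncePerWindowLimiter();
          |             assertTrue(limiter.tryAcquire("a"));
          |             assertFalse(limiter.tryAcquire("a"));
          |         }
          |
          |         // minimal implementation, written last
          |         static class OncePerWindowLimiter implements RateLimiter {
          |             private final Set<String> seen = new HashSet<>();
          |             public boolean tryAcquire(String id) {
          |                 return seen.add(id);
          |             }
          |         }
          |     }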
         | 
          | I'm not saying this works for everyone or for every use
          | case. I find it works well _for me_ in the majority of
          | cases; the exception tends to be when I'm integrating with
          | systems I don't fully understand, where I don't yet know
          | which subset of their APIs is appropriate for my solution.
          | _Even so_, writing tests (and even sometimes types for those
          | systems, though this is mostly a thing in gradually typed
          | languages) often helps lead me to that clarity. Again, it
          | helps me define up front.
         | 
         | All of this, for what it's worth, is why I also find the
         | _semantics_ of BDD helpful: they're explicit about tests
         | _being_ a spec.
        
         | marcosdumay wrote:
         | At this point I doubt the existence of well defined specs.
         | 
          | Regulations are always ambiguous, standards are never
          | followed, and widely implemented standards are never
          | implemented the way the document says.
          | 
          | You will probably still gain productivity by following TDD
          | for those, but your process must not penalize changes in
          | spec too much, because even if it's written in law, what you
          | read is not exactly what you will create.
        
         | generalk wrote:
         | +1 on "well defined spec" -- a lot of Healthcare integrations
         | are specified as "here's the requests, ensure your system
         | responds like this" and being able to put those in a test suite
         | and know where you're at is invaluable!
         | 
         | But TDD is _fantastic_ for growing software as well! I managed
         | to save an otherwise doomed project by rigorously sticking to
         | TDD (and its close cousin Behavior Driven Development.)
         | 
         | It sounds like you're expecting that the entire test suite
         | ought to be written up front? The way I've had success is to
         | write a single test, watch it fail, fix the failure as quickly
         | as possible, repeat, and then once the test passes fix up
         | whatever junk I wrote so I don't hate it in a month. Red,
         | Green, Refactor.
         | 
         | If you combine that with frequent stakeholder review, you're
         | golden. This way you're never sitting on a huge pile of
         | unimplemented tests; nor are you writing tests for parts of the
         | software you don't need. For example from that project: week
         | one was the core business logic setup. Normally I'd have dove
         | into users/permissions, soft deletes, auditing, all that as
         | part of basic setup. But this way, I started with basic tests:
         | "If I go to this page I should see these details;" "If I click
         | this button the status should update to Complete." Nowhere do
         | those tests ask about users, so we don't have them. Focus
         | remains on what we told people we'd have done.
         | 
         | I know not everyone works that way, but damn if the results
         | didn't make me a firm believer.
        
           | andix wrote:
           | TDD usually means that you write the tests before writing the
           | code.
           | 
           | Writing tests as you write the code is just regular and
           | proper software development.
        
             | Spivak wrote:
             | Odd, I was taught TDD as
             | 
             | 1. Write test, see that it fails the way you expect.
             | 
             | 2. Write code that makes the test pass.
             | 
             | 3. Write test...
             | 
             | and be secure that you can fearlessly refactor and not
             | backslide while you play with different ideas so long as
             | all your tests stay green.
             | 
             | I would get overwhelmed so fast if I just had 50 failing
             | tests and no implementation.
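              |
              | As a sketch of that loop with an invented example
              | (assuming JUnit 5): the tests below went red one at a
              | time, and the implementation has since been rewritten
              | freely while they stayed green:
              |
              |     import org.junit.jupiter.api.Test;
              |     import static org.junit.jupiter.api.Assertions.*;
              |
              |     class FizzBuzzTest {
              |         // v1 was an if/else ladder; refactored
              |         // fearlessly under the green tests below
              |         static String fizzbuzz(int n) {
              |             String s = (n % 3 == 0 ? "Fizz" : "")
              |                      + (n % 5 == 0 ? "Buzz" : "");
              |             return s.isEmpty()
              |                 ? Integer.toString(n) : s;
              |         }
              |
              |         @Test void three() {
              |             assertEquals("Fizz", fizzbuzz(3));
              |         }
              |         @Test void five() {
              |             assertEquals("Buzz", fizzbuzz(5));
              |         }
              |         @Test void seven() {
              |             assertEquals("7", fizzbuzz(7));
              |         }
              |     }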
        
               | pjmlp wrote:
                | Now do that for rendering a rotating cube in Vulkan
                | with PBR shading.
        
               | imran-iq wrote:
               | That's the right way to do TDD, see this talk:
               | https://www.youtube.com/watch?v=EZ05e7EMOLM
               | 
                | One of the above comments mentions BDD as a close
                | cousin to TDD, but that is wrong: TDD is actually BDD,
                | as you should only be testing behaviours, which is
                | what allows you to "fearlessly refactor".
        
               | bcrosby95 wrote:
               | Behavior is an unfortunate term, because the "London
               | Style" TDD sometimes is described as testing behaviors:
               | 
               | https://softwareengineering.stackexchange.com/questions/1
               | 236...
               | 
               | Which seems like the exact opposite of what the talk is
               | saying you should do.
        
               | [deleted]
        
             | patcon wrote:
              | Respectfully, I think the distinction they're making is
              | that "writing ONE failing test then the code to pass it"
              | is very different from "write a whole test suite, and
              | then write the code to pass it".
             | 
             | The former is more likely to adapt to the learning inherent
             | in the writing of code, which someone above mentioned was
             | easy to lose in TDD :)
        
           | wenc wrote:
           | The problem I've run into is that when you're iterating fast,
           | writing code takes double the time when you also have to
           | write the tests.
           | 
            | Unit tests are still easy to write, but most complex
            | software has many parts that combine combinatorially, and
            | writing integration tests requires lots of mocking. This
            | investment pays off when the design is stable, but when
            | business requirements are not that stable this becomes
            | very expensive.
           | 
            | Some tests are actually very hard to write -- I once led a
            | project where the code had both cloud and on-prem API
            | calls (and called Twilio). Some of those environments were
            | outside our control, but we still had to make sure we
            | handled their failure modes. The testing code was very
           | difficult to write and I wished we'd waited until we
           | stabilized the code before attempting to test. There were too
           | many rabbit holes that we naturally got rid of as we iterated
           | and testing was like a ball and chain that made everything
           | super laborious.
           | 
           | TDD also represents a kind of first order thinking that
           | assumes that if the individual parts are correct, the whole
           | will likely be correct. It's not wrong but it's also very
           | expensive to achieve. Software does have higher order
           | effects.
           | 
           | It's like the old car analogy. American car makers used to
           | believe that if you QC every part and make unit tolerances
           | tight, you'll get a good car on final assembly (unit tests).
           | This is true if you can get it right all the time but it made
           | US car manufacturing very expensive because it required
           | perfection at every step.
           | 
           | Ironically Japanese carmakers eschewed this and allowed loose
           | unit tolerances, but made sure the final build tolerance
           | worked even when the individual unit tolerances had
           | variation. They found this made manufacturing less expensive
           | and still produced very high quality (arguably higher quality
           | since the assembly was rigid where it had to be, and flexible
           | where it had to be). This is craftsman thinking vs strict
           | precision thinking.
           | 
           | This method is called "functional build" and Ford was the
           | first US carmaker to adopt it. It eventually came to be
           | adopted by all car makers.
           | 
           | https://www.gardnerweb.com/articles/building-better-
           | vehicles...
        
             | 1123581321 wrote:
             | The automaker analogy is a better fit for the "practice" of
             | not handling errors on the assumption a function can't
             | return an unexpected value.
             | 
             | TDD is actually quite good at manufacturing methods to
             | reasonable tolerance, which the Japanese did require.
             | 
             | Higher level tests ensure the functional output is correct
             | and typically don't have built in any reliance on unit
             | tests.
        
             | dathanb82 wrote:
             | I can't remember the last time the speed at which I could
             | physically produce code was the bottleneck in a project.
             | It's all about design and thinking through and documenting
             | the edge cases, and coming up with new edge cases and going
             | back to the design. By the time we know what we're going to
             | write, writing the code isn't the bottleneck, and even if
             | it takes twice as long, that's fine, especially since I
             | generally end up designing a more usable interface as a
             | result of using it (in my tests) as it's being built.
        
             | somewhereoutth wrote:
             | > TDD also represents a kind of first order thinking that
             | assumes that if the individual parts are correct, the whole
             | will likely be correct. It's not wrong
             | 
              | In fact it is not just wrong, but _very_ wrong, as your
              | auto example shows. Unfortunately engineers are not
              | trained/socialised to think as holistically as perhaps
              | they should be.
        
               | hbn wrote:
               | If individual parts being correct meant the whole thing
               | will be correct, that means if you have a good sturdy
               | propeller and you put it on top of your working car, then
               | you have a working helicopter.
        
               | kazinator wrote:
               | The non-strawman interpretation of TDD is the converse:
               | if the individual parts are _not_ right, then the whole
               | will probably be garbage.
               | 
                | It's worth it to apply TDD to the pieces to which TDD
                | is applicable. If not strict TDD, then at least "test
                | first" weak TDD.
               | 
               | The best candidates for TDD are libraries that implement
               | pure data transformations with minimal integration with
               | anything else.
               | 
               | (I suspect that the rabid TDD advocates mostly work in
               | areas where the majority of the code is like that. CRUD
               | work with predictable control and data flows.)
        
               | wenc wrote:
               | Yes. Agree about TDD being more suited to low dependency
               | software like CRUD apps or self contained libraries.
               | 
               | Also sometimes even if the individual parts aren't right,
               | the whole can still work.
               | 
               | Consider a function that handles all cases except for one
               | that is rare, and testing for that case is expensive.
               | 
                | The overall system, however, can be written to provide
                | mitigations upon composing -- e.g. each individual
                | function does a sanity check on its inputs. The
                | individual function itself might be wrong (incomplete)
                | but in the larger system, it is inconsequential.
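                |
                | E.g. (a made-up sketch): the caller sanity-checks on
                | composition, so the untested corner of the callee
                | stays inconsequential in the larger system:
                |
                |     class Ratios {
                |         // callee: incomplete, divides blindly
                |         static double ratio(double a, double b) {
                |             return a / b;
                |         }
                |
                |         // caller: guards the b == 0 corner itself,
                |         // with a documented fallback
                |         static double safeRatio(double a, double b) {
                |             return b == 0 ? 0.0 : ratio(a, b);
                |         }
                |     }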
               | 
               | Test effort is not a 1:1. Sometimes the test can be many
               | times as complicated to write and maintain as the
               | function being tested because it has to generate all the
               | corner cases (and has to regenerate them if anything
               | changes upstream). If you're testing a function in the
                | middle of a very complex data pipeline, you have to
                | regenerate all the artifacts upstream.
               | 
                | Whereas sometimes an untested function can be written
                | in such a way that it is inherently correct from first
                | principles. An extreme analogy would be the Collatz
                | conjecture. If you start by first writing the tests,
                | you'd be writing an almost infinite corpus of tests --
                | on the flip side, writing the Collatz function is
                | extremely simple and correct up to a large finite
                | number.
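                |
                | For instance (a sketch; note that 3n+1 can overflow
                | long for extreme inputs):
                |
                |     class Collatz {
                |         // correct essentially by construction from
                |         // the definition; an exhaustive test corpus
                |         // is another story entirely
                |         static long steps(long n) {
                |             long count = 0;
                |             while (n != 1) {
                |                 n = (n % 2 == 0) ? n / 2 : 3 * n + 1;
                |                 count++;
                |             }
                |             return count;
                |         }
                |     }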
        
               | crazygringo wrote:
               | This is completely counter to all my experience.
               | 
               | Computer code is an inherently brittle thing, and the
               | smallest errors tend to cascade into system crashes.
               | Showstopper bugs are generated from off-by-one errors,
               | incorrect operation around minimum and maximum values, a
               | missing semicolon or comma, etc.
               | 
               | And doing sanity check on function inputs addresses only
               | a small proportion of bugs.
               | 
               | I don't know what kind of programming you do, but the
               | idea that a wrong function becomes inconsequential in a
               | larger system... I feel like that just never happens
               | unless the function was redundant and unnecessary in the
               | first place. A wrong function brings down the larger
               | system feels like the only kind of programming I've ever
               | seen.
               | 
               | Physical unit tolerances don't seem like a useful analogy
               | in programming at all. At best, maybe in sysops regarding
               | provisioning, caches, API limits, etc. But not for code.
        
               | wenc wrote:
               | > I don't know what kind of programming you do, but the
               | idea that a wrong function becomes inconsequential in a
               | larger system... I feel like that just never happens
               | unless the function was redundant and unnecessary in the
               | first place. A wrong function brings down the larger
               | system feels like the only kind of programming I've ever
               | seen.
               | 
               | I think we're talking extremes here. An egregiously wrong
               | function can bring down a system if it's wrong in just
               | the right ways and it's a critical dependency.
               | 
               | But if you look at most code bases, many have untested
               | corner cases (which they're likely not handling) but the
               | code base keeps chugging along.
               | 
                | Many codebases are probably doing something wrong
                | today (hence GitHub issues). But to catastrophize that
                | seems hyperbolic to me. Most software with mistakes
                | still works. Many GitHub issues aren't resolved, but
                | the program still runs. Good designs have redundancy
                | and resilience.
        
               | kazinator wrote:
               | > _Also sometimes even if the individual parts aren't
               | right, the whole can still work._
               | 
               | Yes it can, but the foundation is shaky, and having to
               | make changes to it will tend to be scary.
        
               | wenc wrote:
               | Yes. Though my point is not that we should aim for a
               | shaky foundation, but that if one is a craftsman one
               | ought to know where to make trade offs to allow some
               | parts of the code to be shaky with no consequences. This
               | ability to understand how to trade off perfection for
               | time -- when appropriate -- is what distinguishes senior
               | from junior developers. The idea of ~100% correct code
               | base is an ideal -- it's achieved only rarely on very
               | mature code bases (eg TeX, SQLite).
               | 
               | Code is ultimately organic, and experienced developers
                | know where the code needs to be 100% and where the
                | code can flex if needed. People have this idea that
                | code is like mathematics, where if one part fails,
                | every part fails. To me, if that is so, the design is
                | too tight and brittle and will not ship on time. But
                | well-designed code is more like an organism that has
                | resilience to variation.
        
               | P5fRxh5kUvp2th wrote:
               | > sometimes even if the individual parts aren't right,
               | the whole can still work.
               | 
                | And in fact, fault tolerance with the assumption that
                | all of its parts are unreliable and will fail quickly
                | makes for more fault-tolerant systems.
               | 
               | The _processes and attitude_ that cause many individual
               | parts to be incorrect will also cause the overall system
               | to be crap. There's a definite correlation, but that
               | correlation isn't about any specific part.
        
             | bostik wrote:
             | > _Some tests are actually very hard to write -- I once led
             | a project that where the code had both cloud and on-prem
             | API calls_
             | 
             | I believe that this is a fundamental problem of testing in
             | all distributed systems: you are trying to test and
             | validate for _emergent behaviour_. The other term we have
             | for such systems is: chaotic. Good luck with that.
             | 
             | In fact, I have begun to suspect that the way we even think
             | about software testing is backwards. Instead of test
              | scenarios we should be thinking in failure scenarios --
              | and try to subject our software to as many of those as
              | possible. Define the bounding box of the failure
              | universe, and allow the computer to generate the testing
              | scenarios within. _EXPECT_ that all software within will
              | eventually fail, but as long as it survives beyond set
              | thresholds, it gets a green light.
             | 
             | In a way... we'd need something like a bastard hybrid of
             | fuzzing, chaos testing, soak testing, SRE principles and
             | probabilistic outcomes.
        
             | pmarreck wrote:
             | > writing code takes double the time when you also have to
             | write the tests
             | 
              | this time is more than made up for by the subsequent
              | savings in debugging, refactoring and maintenance time,
              | in my experience, at least for anything actively being
              | used and updated
        
               | wenc wrote:
               | In theory, I agree. In practice, at least for my
               | projects, the results are mixed.
        
               | tsimionescu wrote:
               | Yes, if you were right about the requirements, even if
               | they weren't well specified. But if it turns out you
               | implemented the wrong thing (either because the
               | requirements simply changed for external reasons, or
               | because you missed some fundamental aspect), then you
                | wouldn't have had to debug, refactor or maintain that
               | initial code, and the initial tests will probably be
               | completely useless even if you end up salvaging some of
               | the initial implementation.
        
               | twic wrote:
               | No, that's a separate issue, that eschewing TDD doesn't
               | help you with.
               | 
               | With TDD, the inner programming loop is:
               | 
               | 1. form a belief about requirements
               | 
               | 2. write a test to express that belief
               | 
               | 3. write code to make that test pass
               | 
               | Without TDD, the loop is:
               | 
               | 1. form a belief about requirements
               | 
               | 2. write code to express that belief
               | 
               | 3. futz around with manual testing, REPLs, and after-the-
               | fact testing until you're sufficiently happy that the
               | code actually does express that belief
               | 
               | And in my experience, the former loop _is faster_ at
               | producing working code.
        
               | ipaddr wrote:
                | It usually works out like..
                |
                |     form a belief about a requirement
                |     write a test
                |     test fails
                |     write code
                |     test fails
                |     add debug info to code
                |     test fails, no debug showing
                |     call code directly and see debug code
                |     change assert
                |     test fails
                |     rewrite test
                |     test succeeds
                |     output test class data.. false positive,
                |       checking null equals null
                |     rewrite test
                |     test passes
                |     forget original purpose and stare at green
                |       passing tests with pride.
        
               | xxs wrote:
               | > add debug info to code
               | 
                | On a more serious note: just learn to use a debugger,
                | and add asserts if need be. To me, TDD only helps by
                | giving you something that runs your code -- but that's
                | pretty much it. If you have other test harness
                | options, I fail to see the benefits outside conference
                | talks and book authoring.
        
               | laserlight wrote:
                | Yes, so much this. I don't really understand how people
                | could object to TDD. It's just about putting together
                | what one does manually otherwise. As a bonus, it's not
                | subject to the biases that come with after-the-fact
                | testing.
        
               | pjmlp wrote:
                | Test the belief of recovery from a network split in a
                | distributed commit.
        
               | laserlight wrote:
               | I don't get the point. Is it something not testable? If
               | it's testable, it's TDD-able.
        
               | minimeme wrote:
               | That's my experience also! It's all about faster feedback
               | and confidence the tests provide.
        
           | tsimionescu wrote:
           | There are two problems I've seen with this approach. One is
           | that sometimes the feature you implemented and tested turns
           | out to be wrong.
           | 
           | Say, initially you were told "if I click this button the
           | status should update to complete", you write the test, you
           | implement the code, rinse and repeat until a demo. During the
           | demo, you discover that actually they'd rather the button
           | become a slider, and it shouldn't say Complete when it's
           | pressed, it should show a percent as you pull it more and
           | more. Now, all the extra care you did to make sure the
           | initial implementation was correct turns out to be useless.
           | It would have been better to have spent half the time on a
           | buggy version of the initial feature, and found out sooner
           | that you need to fundamentally change the code by showing
           | your clients what it looks like.
           | 
            | Of course, if the feature _doesn't_ turn out to be wrong,
            | then TDD was great -- not only is your code working, you
            | probably even finished faster than if you had started with
            | a first pass + bug fixing later.
           | 
           | But I agree with the GP: unclear and changing requirements +
           | TDD is a recipe for wasted time polishing throw-away code.
           | 
           | Edit: the second problem is well addressed by a sibling
           | comment, related to complex interactions.
        
             | generalk wrote:
              | > Say, initially you were told "if I click this button
              | > the status should update to complete", you write the
              | > test, you implement the code, rinse and repeat until a
              | > demo. During the demo, you discover that actually
              | > they'd rather the button become a slider, and it
              | > shouldn't say Complete when it's pressed, it should
              | > show a percent as you pull it more and more. Now, all
              | > the extra care you did to make sure the initial
              | > implementation was correct turns out to be useless.
             | 
             | Sure, this happens. You work on a thing, put it in front of
             | the folks who asked for it, and they realize they wanted
             | something slightly different. Or they just plain don't want
             | the thing at all.
             | 
             | This is an issue that's solved by something like Agile
             | (frequent and regular stakeholder review, short cycle time)
             | and has little to do with whether or not you've written
             | tests first and let them guide your implementation; wrote
             | the tests after the implementation was finished; or just
             | simply chucked automated testing in the trash.
             | 
             | Either way, you've gotta make some unexpected changes. For
             | me, I've really liked having the tests guide my
             | implementation. Using your example, I may need to have a
             | "percent complete" concept, which I'll only implement when
             | a test fails because I don't have it, and I'll implement it
             | by doing the simplest thing to get it to pass. If I
             | approach it directly and hack something together I run the
             | risk of overcomplicating the implementation based on what I
             | imagine I'll need.
             | 
             | I don't have an opinion on how anyone else approaches
             | writing complex systems, but I know what's worked for me
             | and what hasn't.
        
         | smrtinsert wrote:
         | I don't see how you can develop anything without at least
         | technical clarity on what the components of your system should
         | do.
        
         | larschdk wrote:
         | I think we should try and separate exploration from
          | implementation. Some of the ugliest, untestable code bases I
          | have worked with have been the result of someone using
          | exploratory research code in production. It's OK to use code
         | to figure out what you need to build, but you should discard it
         | and create the testable implementation that you need. If you do
         | this, you won't be writing tests up front when exploring the
         | solution space, but you will be when doing the final
         | implementation.
        
           | codereviewed wrote:
           | Have you ever had to convince a non-technical boss or client
           | that the exploratory MVP you wrote and showed to them working
           | must be completely rewritten before going into production? I
           | tried that once when I attempted to take us down the TDD
           | route and let me tell you, that did not go over well.
           | 
           | People blame engineers for not writing tests or doing TDD
           | when, if they did, they would likely be replaced with someone
           | who can churn out code faster. It is rare, IME, to have
           | culture where the measured and slow progress of TDD is an
           | acceptable trade off.
        
             | lanstin wrote:
             | Places where software is carrying a great deal of value
             | tend to be more like that. That is, if mistakes can cost
              | $20,000/hour or so, then even the business will back
              | down in the "push now vs. be sure it works" debate.
             | 
             | As always, the job of a paid software person is to merge
             | what the product people want with what good software
             | quality requires (and what power a future version will
             | unleash). Implement valuable things in software in a way
             | that makes the future of that software better and more
             | powerful.
        
           | gabereiser wrote:
           | I think this is the reasonable approach I take. It's ok to
           | explore and figure out the what. Once you know (or the
           | business knows) then it's time to write a final spec and test
           | coverage. In the end, the mantra should be "it's just code".
        
           | is0tope wrote:
           | I've always favored exploration before implementation [1].
           | For me TDD has immense benefit when adding something well
           | defined, or when fixing bugs. When it comes to building
           | something from scratch i found it to get in the way of the
           | iterative design process.
           | 
            | I would however be more amenable to, e.g., prototyping
            | first, and then using that as a guide for TDD. Not sure if
            | there is a name for that approach though. "Spike" maybe?
           | 
           | [1] https://www.machow.ski/posts/galls-law-and-prototype-
           | driven-...
        
             | tra3 wrote:
              | I find that past a certain size, even an exploratory code
              | base benefits from having tests. Otherwise, as I'm hacking,
              | I end up breaking existing functionality. Then I spend more
              | time debugging, trying to figure out what changed. What's
              | your experience when it comes to more than a few hundred
             | lines of code?
        
               | is0tope wrote:
               | Indeed, but once you start getting to that point I'd
               | argue you are starting to get beyond a prototype. But you
                | raise a good point. I'd say if the intention is to throw
                | the code away (which you probably should), then add as
                | few tests as will allow you to make progress.
        
           | happytoexplain wrote:
           | This makes sense, but I think many (most?) pipelines don't
           | allow for much playtime because they are too rigid and top-
           | down. At best you will convince somebody that a "research
           | task" is needed, but even that is just another thing you have
           | to get done in the same given time frame. Of course this is
           | the fault of management, not of TDD.
        
           | andix wrote:
           | Most projects don't have the budget to rewrite the code, once
           | it is working.
        
             | TheCoelacanth wrote:
              | Most projects don't have the budget not to rewrite the code.
        
             | [deleted]
        
         | SomeCallMeTim wrote:
         | That's one issue with TDD. I agree 100% in that respect.
         | 
         | Another partly orthogonal issue is that design is important for
         | some problems, and you don't usually reach a good design by
         | chipping away at a problem in tiny pieces.
         | 
         | TDD fanatics insist that it works for everything. Do I believe
         | them that it improved the quality of their code? Absolutely;
         | I've seen tons of crap code that would have benefited from
         | _any_ improvement to the design, and forcing it to be testable
         | is one way to coerce better design decisions.
         | 
         | But it really only forces the first-order design at the lowest
         | level to be decent. It doesn't help at all, or at least not
         | much, with the data architecture or the overall data flow
         | through the application.
         | 
         | And sometimes the only sane way to achieve a solid result is to
         | sit down and design a clean architecture for the problem you're
         | trying to solve.
         | 
         | I'm thinking of one solution I came up with for a problem that
         | really wasn't amenable to the "write one test and get a
         | positive result" approach of TDD. I built up a full tree data
         | structure that was linked horizontally to "past" trees in the
         | same hierarchy (each node was linked to its historical
         | equivalent node). This data structure was really, really needed
         | to handle the complex data constraints the client was
          | requesting. And yes, we pushed the client to try to simplify
         | those constraints, but they insisted.
         | 
         | The absolute spaghetti mess that would have resulted from TDD
         | wouldn't have been possible to refactor into what I came up
         | with. There's just no evolutionary path between points A and B.
         | And after it was implemented and it functioned correctly--they
         | changed the constraints. About a hundred times. I'm not even
         | exaggerating.
         | 
         | Each new constraint required about 15 minutes of tweaking to
         | the structure I'd created. And yes, I piled on tests to ensure
         | it was working correctly--but the tests were all after the
         | fact, and they weren't micro-unit tests but more of a broad
         | system test that covered far more functionality than you'd
         | normally put in a unit test. Some of the tests even needed to
         | be serialized so that earlier tests could set up complex data
         | and states for the later tests to exercise, which I understand
         | is also a huge No No in TDD, but short of creating 10x as much
         | testing code, much of it being completely redundant, I didn't
         | really have a choice.
         | 
         | So your point about the design changing as you go is important,
         | but sometimes even the initial design is complex enough that
         | you don't want to just sit down and start coding without
         | thinking about how the whole design should work. And no
         | methodology will magically grant good design sense; that's just
         | something that needs to be learned. There Is No Silver Bullet,
         | after all.
        
           | ivan_gammel wrote:
           | > Another partly orthogonal issue is that design is important
           | for some problems, and you don't usually reach a good design
           | by chipping away at a problem in tiny pieces.
           | 
           | True, but... you can still design the architecture, outlining
           | the solution for the entire problem, and then apply TDD. In
           | this case your architectural solution will be an input for
           | low level design created in TDD.
        
         | lupire wrote:
          | It's Test Driven _Development_, not Test Driven _Research_.
         | 
         | Very few critics notice this.
        
           | anonymoushn wrote:
           | Maybe you disagree with GP about whether one should always do
           | all their research without actually learning about the
           | problem by running code?
        
         | bitwize wrote:
         | And this is why you use spike solutions, to explore the problem
         | space without the constraints of TDD.
         | 
         | But spikes are written to be thrown away. You never put them
         | into production. Production code is always written against some
         | preexisting test, otherwise it is by definition broken.
        
         | karmelapple wrote:
         | For the software you're thinking about, do you have specific
         | use cases or users in mind? Or are you building, say, an app
         | for the first time, perhaps for a very early stage startup that
         | is nowhere close to market fit yet?
         | 
         | We typically write acceptance tests, and they have been helpful
         | either early on or later in our product development lifecycle.
         | 
         | Even if software isn't defined upfront, the end goal is likely
         | defined upfront, isn't it? "User X should be able to get data
         | about a car," or "User Y should be able to add a star ratings
         | to this review," etc.
         | 
         | If you're building a product where you're regularly throwing
         | out large parts of the UI / functionality, though, I suppose it
         | could be bad. But as a small startup, we have almost never been
         | in that situation over the many years we've been in business.
        
         | Buttons840 wrote:
         | When I've been serious about testing I'll usually:
          |   1. Hack in what I want in some exploratory way
          |   2. Write good tests
          |   3. Delete my hacks from step 1, and ensure all my new
          |      tests now fail
          |   4. Re-implement what I hacked together in step 1
          |   5. Ensure all tests pass
         | 
         | This allows you to explore while still retaining the benefits
         | of TDD.
        
           | gleenn wrote:
           | There's a name for it, it's called a "spike". You write a
           | bunch of exploratory stuff, get the idea right, throw it all
           | away (without even writing tests) and then come back doing
           | TDD.
        
         | waynesonfire wrote:
         | jeez, well defined spec? what a weird concept. Instead, we took
         | a complete 180 and all we get are weekly sprints. just start
         | coding, don't spend time understanding your problem. what a
         | terrible concept.
        
         | theptip wrote:
         | I think part of what you are getting at here also points to
         | differences in what people mean by "unit test".
         | 
         | It's always possible to write a test case that covers a new
         | high-level functional requirement as you understand it; part of
         | the skill of test-first (disclaimer - I use this approach
         | sometimes but not religiously and don't consider myself a
         | master at this) is identifying the best next test to write.
         | 
         | But a lot of people cast "unit test" as "test for each method
         | on a class" which is too low-level and coupled to the
         | implementation; if you are writing those sort of UTs then in
         | some sense you are doing gradient descent with a too-small step
         | size. There is no appreciable gradient to move down; adding a
         | new test for a small method doesn't always get you closer to
         | adding the next meaningful bit of functionality.
         | 
         | When I have done best with TDD is when I start with what most
         | would call "functional tests" and test the behaviors, which is
         | isomorphic to the design process of working with stakeholders
         | to think through all the ways the product should react to
         | inputs.
         | 
         | I think the early TDD guys like Kent Beck probably assumed you
         | are sitting next to a stakeholder so that you can rapidly
         | iterate on those business/product/domain questions as you
         | proceed. There is no "upfront spec" in agile, the process of
         | growing an implementation leads you to the next product
         | question to ask.
        
           | merlincorey wrote:
           | > But a lot of people cast "unit test" as "test for each
           | method on a class" which is too low-level and coupled to the
           | implementation; if you are writing those sort of UTs then in
           | some sense you are doing gradient descent with a too-small
           | step size. There is no appreciable gradient to move down;
           | adding a new test for a small method doesn't always get you
           | closer to adding the next meaningful bit of functionality.
           | 
           | In my experience, the best time to do "test for each method
           | on a class" or "test for each function in a module" is when
           | the component in question is a low level component in the
           | system that must be relied upon for correctness by higher
           | level parts of the system.
           | 
           | Similarly, in my experience, it is often a waste of effort
           | and time to do such thorough low level unit testing on higher
           | level components composed of multiple lower level components.
           | In those cases, I find it's much better to write unit tests
           | at the highest level possible (i.e. checking
           | `module.top_level_super_function()` inputs produce expected
           | outputs or side effects)
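            | 
            | In Python terms, something like this (the module and the
            | expected behavior here are made up):
            | 
            |   import module
            | 
            |   def test_top_level_super_function():
            |       # Inputs in, expected outputs out, at the highest
            |       # level; the lower-level pieces underneath remain
            |       # free to change.
            |       assert module.top_level_super_function([1, 2]) == [2, 4]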
        
             | remexre wrote:
             | > is when the component in question is a low level
             | component in the system that must be relied upon for
             | correctness by higher level parts of the system.
             | 
             | And then, property tests are more likely to be possible,
             | and IMO should be preferred!
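              | 
              | E.g. with Hypothesis in Python, where encode/decode stand
              | in for the low-level component under test:
              | 
              |   from hypothesis import given, strategies as st
              | 
              |   # Property: decoding an encoded payload returns the
              |   # original bytes, for arbitrary generated inputs.
              |   @given(st.binary())
              |   def test_round_trip(data):
              |       assert decode(encode(data)) == data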
        
               | ENGNR wrote:
               | TTRD
               | 
               | test driven re-design?
        
           | mkl95 wrote:
           | > But a lot of people cast "unit test" as "test for each
           | method on a class" which is too low-level and coupled to the
           | implementation;
           | 
           | Those tests suit a project that applies the open-closed
           | principle strictly, such as libraries / packages that will
           | rarely be modified directly and will mostly be used by
           | "clients" as their building blocks.
           | 
           | They don't suit a spaghetti monolith with dozens of leaky
           | APIs that change on every sprint.
           | 
           | The harsh truth is that in the industry you are more likely
           | to work with spaghetti code than with stable packages. "TDD
           | done right" is a pipe dream for the average engineer.
        
           | commandlinefan wrote:
           | > a lot of people cast "unit test" as "test for each method
           | on a class" which is too low-level
           | 
           | Definitely agree with you here - I've seen people
           | dogmatically write unit tests for getter and setter methods
           | at which point I have a hard time believing they're not just
           | fucking with me. However, there's a "sweet spot" in between
           | writing unit tests on every single function and writing "unit
           | tests" that don't run without a live database and a few
           | configuration files in specific locations, which (in my
           | experience) is more common when you ask a mediocre programmer
           | to try to write some tests.
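            | 
            | (For the record, the getter/setter test in question is the
            | Python equivalent of this, with User a hypothetical class:
            | 
            |   def test_set_name():
            |       user = User()
            |       user.set_name("Ada")
            |       # A line-for-line restatement of the accessor; it
            |       # catches nothing the rest of the suite wouldn't.
            |       assert user.get_name() == "Ada"
            | 
            | which is why it reads like someone fucking with you.)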
        
             | icedchai wrote:
             | I'm having flashbacks to a previous workplace. I was
             | literally asked to write unit tests for getters and
             | setters. I complained they were used elsewhere in the code,
             | and therefore tested indirectly anyway. Nope, my PR would
             | not be "approved" until I tested every getter and setter. I
             | think I lasted there about 6 months.
        
           | P5fRxh5kUvp2th wrote:
            | The above poster used 'TDD', not 'unit test'; they are not
            | the same thing.
           | 
           | You can (and often should!) have a suite of unit tests, but
           | you can choose to write them after the fact, and after the
           | fact means after most of the exploration is done.
           | 
           | I think if most people stopped thinking of unit tests as a
           | correctness mechanism and instead thought of them as a
           | regression mechanism unit tests as a whole would be a lot
           | better off.
        
             | geophile wrote:
             | None of this either/or reasoning is correct, in my
             | experience. In practice, I write tests both before and
             | after implementation, for different reasons. In practice,
             | my tests both test correctness, and of course they also
             | work as regression tests.
             | 
             | Writing before the fact allows you to test your mental
             | model of the interface, unspoiled by having the
             | implementation fresh in your mind. (Not entirely, since you
             | probably have some implementation ideas very early on.)
             | 
             | Writing tests after the fact is what you must do to explore
             | 1) weak points that occur to you as you implement, and 2)
              | bugs. After-the-fact testing also allows you to home in on
             | vagueness in the spec, which may show up as (1) or (2).
        
             | adhesive_wombat wrote:
              | Also as a dependency canary: when your low level object
             | tests start demanding access to databases and config files
             | and networking, it's time for a think.
             | 
             | Also a passing unit test always provides up-to-date
             | implicit documentation on how to use the tested code.
        
         | fsdghrth3 wrote:
         | > This ultimately means, what most programmers intuitively
         | know, that it's impossible to write adequate test coverage up
         | front
         | 
         | Nobody out there is writing all their tests up front.
         | 
         | TDD is an iterative process, RED GREEN REFACTOR.
         | 
         | - You write one test.
         | 
         | - Write JUST enough code to make it pass.
         | 
         | - Refactor while maintaining green.
         | 
         | - Write a new test.
         | 
         | - Repeat.
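          | 
          | A minimal sketch of one loop, in Python with pytest-style
          | asserts (slugify is a made-up example, not from the article):
          | 
          |   # RED: one failing test for the next bit of behavior.
          |   def test_slug_replaces_spaces():
          |       assert slugify("hello world") == "hello-world"
          | 
          |   # GREEN: just enough code to make it pass.
          |   def slugify(title):
          |       return title.replace(" ", "-")
          | 
          |   # REFACTOR while green, then the next RED -- a test this
          |   # implementation fails until it also lowercases:
          |   def test_slug_is_lowercase():
          |       assert slugify("Hello World") == "hello-world"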
         | 
         | I don't want this to come off the wrong way but what you're
         | describing shows you are severely misinformed about what TDD
         | actually is or you're just making assumptions about something
         | based on its name and nothing else.
        
           | _gabe_ wrote:
           | Reiterating the same argument in screaming case doesn't
           | bolster your argument. It feels like the internet equivalent
           | of a real life debate where a debater thinks saying the same
           | thing LOUDER makes a better argument.
           | 
           | > - You write one test
           | 
           | Easier said than done. Say your task is to create a low level
           | audio mixer which is something you've never done before.
           | Where do you even begin? That's the hard part.
           | 
           | Some other commenters here have pointed out that exploratory
           | code is different from TDD code, which is a much better
            | argument than what you made here imo.
           | 
           | > I don't want this to come off the wrong way but what you're
           | describing shows you are severely misinformed about what TDD
           | actually is or you're just making assumptions about something
           | based on its name and nothing else.
           | 
           | Instead of questioning the OP's qualifications, perhaps you
           | should hold a slightly less dogmatic opinion. Perhaps OP is
            | familiar with this style of development, and they've run into
            | problems firsthand when they've tried to write tests for an
           | unknown problem domain.
        
             | rileymat2 wrote:
             | > Some other commenters here have pointed out that
             | exploratory code is different from TDD code, which is a
             | much better argument then what you made here imo.
             | 
             | I find that iterating on tests in exploratory code makes
             | for an excellent driver to exercise the exploration. I
             | don't see the conflict between the two, except I am not
             | writing test cases to show correctness, I am writing them
             | to learn. To play with the inputs and outputs quickly.
        
             | nfhshy68 wrote:
              | I don't think GP was questioning their qualifications. It's
              | exceedingly clear from OP's remarks that they don't know
              | what TDD is and haven't even read the article, because it
              | covers all of this. In detail.
        
           | gjulianm wrote:
           | > - You write one test.
           | 
           | > - Write JUST enough code to make it pass.
           | 
           | Those two steps aren't really trivial. Even just writing the
           | single test might require making a lot of design decisions
           | that you can't really make up-front without the code.
        
             | User23 wrote:
             | This acts as a forcing function for the software design.
             | That TDD requires you to think about properly separating
             | concerns via decomposition is a feature, not a bug. In my
             | experience the architectural consequences are of greater
             | value than the test coverage.
             | 
             | Sadly TDD is right up there with REST in being almost
             | universally misunderstood.
        
               | bluefirebrand wrote:
               | > Sadly TDD is right up there with REST in being almost
               | universally misunderstood.
               | 
               | That's a flaw in TDD and REST, not in the universe.
        
             | 0x457 wrote:
             | The first test could be as simple as method signature
             | check. Yes, you still have to make a design decision here,
             | but you have to make it either way.
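              | 
              | E.g. in Python, a first test that pins little more than
              | the shape of the API (dedupe is a made-up name):
              | 
              |   def test_dedupe_returns_a_list():
              |       # Fixes only the name, argument, and return type;
              |       # the body of dedupe can still be a stub.
              |       assert isinstance(dedupe([1, 1, 2]), list)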
        
               | giantrobot wrote:
               | Then you need to keep the test and signature in lock
               | step. Your method signature is likely to change as the
               | code evolves. I'm not arguing against tests but requiring
               | them too early generates a lot of extra work.
        
               | lucumo wrote:
               | Interesting. The method signature is usually the last
               | thing I create.
        
           | yibg wrote:
           | The first test is never the problem. The problem as OP
           | pointed out is after iterating a few times you realize you
           | went down the wrong track or the requirements have changed /
           | been clarified. Now a lot of the tests you iterated through
           | aren't relevant anymore.
        
           | happytoexplain wrote:
           | In my admittedly not-vast experience, a pattern going bad
           | because the implementer doesn't understand it is actually
           | only the implementer's fault a minority of the time, and is
           | the fault of the pattern the majority of the time. This is
           | because a pattern _making sense_ to an implementer requires
           | work from both sides, and which side is slacking can vary.
           | Sometimes the people who get it and like it tend to
            | purposefully overlook this pragmatic issue because "you're
            | doing it wrong" seems like a silver bullet against critiques.
        
           | DoubleGlazing wrote:
           | In my experience the write a new test bit is where it all
           | falls down. It's too easy to skimp out on that when there are
           | deadlines to hit or you are short staffed.
           | 
           | I've seen loads of examples where the tests haven't been
           | updated in years to take account of new functionality. When
           | that happens you aren't really doing TDD anymore.
        
             | yaccz wrote:
             | That's an issue of bad engineering culture, not TDD.
        
           | unrealhoang wrote:
              | How do you write that one test without the iterative design
              | process? That's something always missing from the TDD
              | guides.
        
             | apalumbi wrote:
             | TDD is not a testing process. It is a design process. The
             | tests are a secondary and beneficial artifact of the well
             | designed software that comes from writing a test first.
        
           | Supermancho wrote:
           | Writing N or 1 tests N times, depending on how many times I
           | have to rewrite the "unit" for some soft idea of
            | completeness. After red/green on the first case, it necessarily
            | has to expand to N cases as the unit is rewritten to handle the
           | additional cases imagined (boundary, incorrect inputs,
            | exceptions, etc). Now I see that I could have created
            | optimizations in the method, so I rewrite it again and
            | leverage the existing red/green.
           | 
            | Everyone understands the idea; it's just a massive time sink
            | for no more benefit than a test-after methodology provides.
        
             | fsdghrth3 wrote:
              | See my other comment below. I don't recommend doing it all
              | the time, specifically because with experience you can
              | often skip a lot of the RGR loop.
             | 
             | > Everyone understands the idea, it's just a massive time
             | sink for no more benefit than a test-after methodology
             | provides.
             | 
             | This is not something I agree with. In my experience, when
             | TDD is used you come up with solutions to problems that are
             | better than what you'd come up with otherwise and it
             | generally takes much less time overall.
             | 
             | Writing tests after ensures your code is testable. Writing
             | your tests first ensures you only have to write your code
             | once to get it under test.
             | 
             | Again, you don't always need TDD and applying it when you
             | don't need it will likely be a net time sink with little
             | benefit.
        
         | happytoexplain wrote:
          | > Unfortunately most software is just not well defined up front.
         | 
         | This is exactly how I feel about TDD, but it always feels like
         | you're not supposed to say it. Even in environments where
         | features are described, planned, designed, refined, written as
         | ACs, and then developed, there are _still_ almost always pivots
         | made or holes filled in mid-implementation. I feel like TDD is
         | not for the vast majority of software in practice - it seems
         | more like something useful for highly specialist contexts with
         | extremely well defined objective requirements that are made by,
         | for, and under engineers, not business partners or consumers.
        
           | ethbr0 wrote:
           | I forget which famous Unix personality the quote / story
            | comes from, but it amounts to _"The perfect program is the
           | one you write after you finish the first version, throw it in
           | the garbage, and then handle in the rewrite all the things
           | you didn't know that you didn't know."_
           | 
           | That rings true to my experience, and TDD doesn't add much to
           | that process.
        
             | jonstewart wrote:
             | Ah, but it's the _third_ version, because of Second System
             | Effect. So, really, plan to throw two away.
             | https://en.wikipedia.org/wiki/Second-system_effect
        
               | [deleted]
        
             | lkrubner wrote:
             | Whoever said that specific quote, it is a paraphrase of a
             | point that Alan Kay has been making since the late 1970s.
              | His speeches, in which he argued in favor of Smalltalk and
              | dynamic programming, make the point over and over again. I
             | believe he said almost exactly the words you are quoting.
        
             | sdevonoes wrote:
             | Exactly. I write my programs/systems a few times. Each time
             | I discard the previous version and start from scratch. I
             | end up writing code that's easy to test and easy to swap
             | parts if needed. I also know what TDD brings to the table.
             | On top of that I have over 15 years of professional
             | experience... so I usually know how to write software that
             | complies with what we usually call "good code", so TDD
             | offers me zero help.
             | 
             | For more junior engineers, I think TDD can help, but once
             | you "master" TDD, you can throw it out of the window:
             | "clean code" will come out naturally without having to
             | write tests first.
        
             | michaelchisari wrote:
             | Screenwriters encourage a "vomit draft" -- A draft that is
             | not supposed to be _good_ but just needs to exist to get
             | all the necessary parts on the page. Then a writer can
             | choose to either fix or rewrite, but having a complete
             | presentation of the story is an important first step.
             | 
             | I've advocated the same for early projects or new features.
             | Dump something bad and flawed and inefficient, but which
             | still accomplishes what you want to accomplish. Do it as
             | fast as possible. This is your vomit draft.
             | 
             | I strongly believe that the amount a team could learn from
             | this would be invaluable and would speed up the development
             | process, even if every single line of code had to be
             | scrapped and rebuilt from scratch.
        
             | RexM wrote:
             | I'm almost positive Joe Armstrong has some version of this
             | quote. I couldn't find it, though.
        
             | pricechild wrote:
             | Related: https://wiki.c2.com/?PlanToThrowOneAway ?
        
             | FridgeSeal wrote:
             | Normalise early-stage re-writes.
             | 
             | (Only slightly joking)
        
             | pez_dev wrote:
             | I've heard something similar once, but I don't remember
             | where:
             | 
             | "Write it three times. The first time to understand the
             | problem, the second time to understand the solution and the
             | third time to ship it."
        
             | dgb23 wrote:
             | Well except for the second step where you aim precisely.
        
         | [deleted]
        
         | AtlasBarfed wrote:
         | Yeah, TDD has way too much "blame the dev" for the usual
         | cavalcade of organizational software process failures.
        
         | wodenokoto wrote:
         | This rings very true for me.
         | 
          | I do TDD when doing Advent of Code. And it's not that I set
          | out to do it or to practice it or anything. It just comes very
          | naturally to small, well-defined problems.
        
         | yoden wrote:
         | > test coverage gets in the way of the iterative design
         | process. In theory TDD should work as part of that iterative
         | design, but in practice it means a growing collection of broken
         | tests and tests for parts of the program that end up being
         | completely irrelevant.
         | 
         | So much of this is because TDD has become synonymous with unit
         | testing, and specifically solitary unit testing of minimally
         | sized units, even though that was often not the original intent
          | of the originators of unit testing. These tests are tightly
          | coupled to your unit decomposition. Not the unit implementation
          | (unless they're just bad UTs), but the decomposition of the
          | software into particular units/interfaces. Then the decomposition
          | becomes very hard to change because the tests are exactly
         | coupled to them.
         | 
         | If you take a higher view of unit testing, such as what is
         | suggested by Martin Fowler, a lot of these problems go away.
         | Tests can be medium level and that's fine. You don't waste a
         | bunch of time building mocks for abstractions you ultimately
         | don't need. Decompositions are easier to change. Tests may be
         | more flaky, but you can always improve that later once you've
         | understood your requirements better. Tests are quicker to
         | write, and they're more easily aligned with actual user
         | requirements rather than made up unit boundaries. When those
         | requirements change, it's obvious which tests are now useless.
         | Since tests are decoupled from the lowest level implementation
         | details, it's cheap to evolve those details to optimize
         | implementation details when your performance needs change.
        
         | twic wrote:
         | No, this is nonsense. You don't write the test coverage up
         | front!
         | 
          | You think of a small chunk of functionality you are confident
          | about, write the tests for that (some people say just one test,
          | I am happy with up to three or so), then write the
         | implementation that makes those tests pass. Then you refactor.
         | Then you pick off another chunk and 20 GOTO 10.
         | 
         | If at some point it turns out your belief about the
         | functionality was wrong, fine. Delete the tests for that bit,
         | delete the code for it, make sure no other tests are broken,
         | refactor, and 20 GOTO 10 again.
         | 
         | The process of TDD is _precisely about_ writing code when you
          | don't know how the program is going to work upfront!
         | 
         | On the other hand, implementing a well-defined spec is when TDD
         | is much less useful, because you have a rigid structure to work
         | to in both implementation and testing.
         | 
         | I think the biggest problem with TDD is that completely
         | mistaken ideas about it are so widespread that comments like
         | this get upvoted to the top even on HN.
        
         | no_wizard wrote:
          | That's kind of TDD's core point: you don't really know upfront,
         | so you write tests to validate what you can define up front,
         | and through that, you should find you discover other things
         | that were _not_ accounted for, and the cycle continues, until
         | you have a working system that satisfies the requirements. Then
          | _all_ those tests serve as a basic form of documentation &
         | reasonable validation of the software so when further
         | modifications are desired, you don't break what you already
         | know to be reasonably valid.
         | 
         | Therefore, TDD's secret sauce is in concretely forcing
         | developers to think through requirements, mental models etc.
          | and quantify them in some way. When you hit a block, you need
          | to ask yourself what's missing, figure it out, and continue
          | onward, making adjustments along the way.
         | 
         | This is quite malleable to unknown unknowns etc.
         | 
         | I think the problem is most people just aren't chunking down
         | the steps of creating a solution enough. I'd argue that the
         | core way of approaching TDD fights most human behavioral
         | traits. It forces a sort of abstract level of reasoning about
         | something that lets you break things down into reasonable
         | chunks.
        
           | pjmlp wrote:
           | I doubt ZFS authors would have succeeded designing it with
           | TDD.
        
             | no_wizard wrote:
             | What's inherent about this problem that wouldn't benefit
             | from chunking things into digestible, iterative parts that
             | lend themselves nicely to the TDD approach as I described?
        
         | julianlam wrote:
         | I often come up with test cases (just the cases, not the actual
         | logic) _while_ writing the feature. However I am never in the
          | mood to context switch to write the test, so I'll do the bare
         | minimum. I'll flip over to the test file and write the `it()`
         | boilerplate with the one-line test title and flip back to
         | writing the feature.
         | 
         | By the time I've reached a point where the feature can actually
         | be tested, I end up with a pretty good skeleton of what tests
         | _should_ be written.
         | 
         | There's a hidden benefit to doing this, actually. It frees up
         | your brain from keeping that running tally of "the feature
         | should do X" and "the feature should guard against Y", etc.
         | (the very items that go poof when you get distracted, mind you)
        
         | BurningFrog wrote:
          | Agile, as the name hints, was developed precisely to deal with
          | ever-changing requirements, in opposition to various versions
          | of "first define the problem precisely, then implement that in
          | code, and then you're done forever".
         | 
         | So the TDD OP describes here is not an Agile TDD.
         | 
          | The normal TDD process is:
          | 
          |   1. add one test
          |   2. make it (and all others) pass
          |   3. maybe refactor so code is sane
          |   4. back to 1, unless you're done.
         | 
         | When requirements change, you go to 1 and start adding or
         | changing tests, iterate until you're done.
        
           | tra3 wrote:
           | Exactly. Nobody's on board with paying at least twice as much
           | for software though. But that's what you get when things
           | change and you have to refactor BOTH your code AND your
           | tests.
        
             | mikkergp wrote:
             | But what is your process for determining code is correct,
                | and is it really faster and more reliable than writing
             | tests? Sheer force of will? Running it through your brain a
                | few times? Getting peer review? I often find that, all
                | things being equal, tests are just the fastest way to review
             | my own work, even if I hate writing them sometimes.
        
               | karmelapple wrote:
               | Tests are literally where our requirements live. To not
               | have automated tests would be to not have well-defined
               | requirements.
        
               | cogman10 wrote:
               | To have automated tests does not mean you have well-
               | defined requirements.
               | 
               | I 100% agree with capturing requirements in tests.
               | However, I argue that TDD does not cause that to happen.
               | 
               | I'd even make a stronger statement. Automated tests that
               | don't capture a requirement should be deleted. Those
               | sorts of tests only serve to hinder future refactoring.
               | 
               | A good test for a sort method is one that verifies data
               | is sorted at the end of it. A bad test for a sort method
               | is one that checks to see what order elements are visited
               | in the sorting process. I have seen a lot of the "element
               | order visit" style tests but not a whole lot of "did this
               | method sort the data" style tests.
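                | 
                | In Python terms, with my_sort standing in for the
                | method under test:
                | 
                |   def test_sort_orders_the_data():
                |       data = [3, 1, 2, 2]
                |       # Assert the outcome, not which elements were
                |       # visited along the way.
                |       assert my_sort(data) == sorted(data)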
        
               | BurningFrog wrote:
               | As a semi-avid TDD:er I agree about what's a good test.
               | 
               | I don't see the connection between TDD and your bad tests
               | examples though.
               | 
               | I would test a sort method just the way you describe,
               | using TDD.
        
               | P5fRxh5kUvp2th wrote:
        
             | 0x457 wrote:
             | To be fair, you have to refactor your code and tests when
             | things change anyway, regardless of the order they were
             | written.
        
             | randomdata wrote:
             | Public interfaces should change only under extreme
             | circumstances, so needing to refactor legacy tests should
             | be a rare event. Those legacy tests will help ensure that
             | your public interface hasn't changed as it is extended to
             | support changing requirements. You should not be testing
             | private functions, leaving you free to refactor endlessly
             | behind the public interface. What goes on behind the public
             | interface is to be considered a black box. The code will
             | ultimately be tested by virtue of the public interface
             | being tested.
        
               | kcb wrote:
               | Assuming any piece of code won't or shouldn't be changed
               | feels wrong. If you're a library developer you have to
               | put processes in place to account for possible change. If
                | not, those public interfaces are just as refactorable as
               | any other code imo. Nothing would be worse than not being
               | able to implement a solution in the best manner because
               | someone decided on an interface a year ago and enshrined
               | it in unit tests.
        
               | randomdata wrote:
               | Much, much worse is users having to deal with things
               | randomly breaking after an update because someone decided
               | they could make it better.
               | 
               | That's not to say you can't seek improvement. The public
               | interface can be expanded without impacting existing
               | uses. If, for example, an existing function doesn't
               | reflect your current view of the world add a new one
               | rather than try to jerry-rig the old one. If the new
               | solutions are radically different such that you are
               | essentially rewriting your code from scratch, a clean
               | break is probably the better route to go.
               | 
               | If you are confident that existing users are no longer
               | touching your legacy code, remove it rather than refactor
               | it.
        
               | __ryan__ wrote:
               | Oh, I didn't consider this. Problem solved then.
        
             | AnimalMuppet wrote:
             | But how much do you want to pay for bugs?
             | 
             | Things change. You change the code in response. What broke?
              | Without the tests, _you don't know_.
             | 
             | "Things change" include "you fixed a bug". Bug fixes can
             | create new bugs (the only study I am familiar with says
             | 20-50% probability). Did your bug fix break anything else?
             | How do you know? With good test coverage, you just run the
             | tests. (Yes, the tests are never complete enough. They can
             | be complete enough that they give fairly high confidence,
             | and they can be complete enough to point out a surprising
             | number of bugs.)
             | 
             | Does that make you pay "at least twice"? No. It makes you
             | pay, yes, but you get a large amount of value back in terms
             | of _actually working code_.
        
             | ivan_gammel wrote:
              | That can be an acceptable risk actually, and it quite
              | often is. There are two conceptually different phases in
              | SDLC:
             | verification that proves implementation is working
             | according to spec and validation that proves that spec
              | matches business expectations. Automated tests work on the
             | first phase, minimizing the risk that when reaching the
             | next phase we will be validating the code that wasn't
             | implemented according to spec. If that risk is big enough,
             | accepting the refactoring costs after validation may make a
             | lot of sense.
        
             | BurningFrog wrote:
             | The only way to avoid that is to not have tests.
        
             | no_wizard wrote:
             | Is it twice as much? I think unsound architectural
             | practices in software is the root cause of this issue, not
             | _red green refactor_.
             | 
             | You aren't doing "double the work" even though it seems
             | that way on paper, unless the problem was solved with
             | brittle architectural foundations and tightly coupled
             | tests.
             | 
              | At the heart of this problem, I think, is that most
              | developers don't quite grasp boundary separation
              | intuitively.
        
         | randomdata wrote:
         | TDD isn't concerned with how your program works. In fact,
         | implementation details leaking into your tests can become quite
         | problematic, including introducing the problems you speak of.
         | TDD is concerned with describing what your program should
         | accomplish. If you don't know what you want to accomplish, what
         | are you writing code for?
        
           | giantrobot wrote:
            | The issue is that _what_ you want to accomplish is often
            | tightly coupled with _how_ it is accomplished. In order to
            | have a test for the "what", it needs to contain the context
            | of the "how".
           | 
           | As a made up example. The "what" of the program is to take in
           | a bunch of transactions and emit daily summaries. That's a
            | straightforward "what". It does, however, leave tons of questions
           | unanswered. Where does the data come from and in what format?
           | Is it ASCII or Unicode? Do we control the source or is it
           | from a third party? How do we want to emit the summaries?
           | Printed to a text console? Saved to an Excel spreadsheet?
           | What version of Excel? Serialized to XML or JSON? Do we have
           | a spec for that serialized form? What precision do we need to
           | calculate vs what we emit?
           | 
           | So the _real_ "what" is: take in transaction data encoded as
           | UTF-8 from a third party provider which lives in log files on
           | the file system without inline metadata then translate the
           | weird date format with only minute precision and lacking an
           | explicit time zone and summarize daily stats to four decimal
           | places but round to two decimal places for reporting and emit
           | the summaries as JSON with dates as ISO ordinal dates and
           | values at two decimal places saved to an FTP server we don't
           | control.
           | 
            | While waiting for all that necessary but often elided detail,
            | you can either start writing some code with unit tests
           | or wait and do no work until you get a fully fleshed out spec
           | that can serve as the basis for writing tests. Most
           | organizations want to start work even while the final specs
           | of the work are being worked on.
        
             | randomdata wrote:
             | _> Most organizations want to start work even while the
             | final specs of the work are being worked on._
             | 
             | Is that significant? Your tests can start to answer these
             | unanswered questions before you ever get around to writing
             | implementation. Suppose you thought you wanted to write
             | data in ASCII format. But then you write some test cases
             | and realize that you actually need Unicode symbols. Now you
             | know what your implementation needs to do.
             | 
             | Testing _is_ the spec. The exact purpose of testing, which
              | in fairness doesn't have the greatest name, is to provide
             | documentation around what the program does. That it is
             | self-verifying is merely a nice side effect. There is no
             | need for all the questions to be answered while writing the
             | spec (a.k.a. tests). You learn about the answers as you
             | write the documentation. The implementation then naturally
             | follows.
        
         | AnimalMuppet wrote:
         | > In theory TDD should work as part of that iterative design,
         | but in practice it means a growing collection of broken tests
         | and tests for parts of the program that end up being completely
         | irrelevant.
         | 
         | If you have "a growing collection of broken tests", that's
          | _not_ TDD. That's "they told us we have to have tests, so we
         | wrote some, but we don't actually want them enough to maintain
         | them, so instead we ignore them".
         | 
         | Tests help _massively_ with iterating a design on a partly-
         | implemented code base. I start with the existing tests running.
         | I iterate by changing some parts. Did that break anything else?
         | How do I know? Well, I run the tests. Oh, those four tests
         | broke. That one is no longer relevant; I delete it. That other
         | one is testing behavior that changed; I fix it for the new
         | reality. Those other two... why are _they_ breaking? Those are
         | showing me unintended consequences of my change. I think _very_
          | carefully about what they're showing me, and decide if I want
         | the code to do that. If yes, I fix the test; if not, I fix the
         | code. At the end, I've got working tests again, and I've got a
         | solid basis for believing that the code does what I think it
         | does.
        
       | littlestymaar wrote:
       | It looks like the human brain is wired up in a way that can turn
        | anything into a religion.
        
       | ttctciyf wrote:
       | IMO, a _lot_ of sage advice about TDD, well informed by years of
       | practice, is in two Ian Cooper NDC talks, his controversial-at-
       | the-time  "TDD, Where Did It All Go Wrong?"[1] and, seven years
       | later, "TDD Revisited"[2].
       | 
       | The blurb from the latter:
       | 
       | > In this talk we will look at the key Fallacies of Test-Driven
       | Development, such as 'Developers write Unit Tests', or 'Test
       | After is as effective as Test First' and explore a set of
       | Principles that let us write good unit tests instead. Attendees
       | should be able to take away a clear set of guidelines as to how
       | they should be approaching TDD to be successful. The session is
       | intended to be pragmatic advice on how to follow the ideas
       | outlined in my 2013 talk "TDD Where Did it All Go Wrong"
       | 
       | The talks focus on reasons to avoid slavish TDD and advocate for
       | the benefits of judiciously applying TDD's originating
       | principles.
       | 
       | 1: https://www.youtube.com/watch?v=EZ05e7EMOLM
       | 
       | 2: https://www.youtube.com/watch?v=vOO3hulIcsY
        
       | GuB-42 wrote:
       | I never really understood how TDD can make software better,
       | except for one thing: it forces people to write tests. But that's
       | just a discipline thing: tests are boring and development is fun,
        | you have to earn your fun by doing the boring part first.
       | 
       | It also makes cutting corners more difficult, because it is
       | possible to have (sort of) working software without testing, but
       | you can't have working software if the only thing you have are
        | failing tests (the important first step in TDD). Most TDD people
        | probably think of that as a positive; I don't. Sometimes, cutting
        | corners is the right thing to do; sometimes, you actually need to
       | write the code to see if it is viable, and if it is not, well,
       | you wasted both the tests and the code, not just the code.
       | 
       | But I don't think it is the only problem with TDD. The main
       | problem, I think, is right there in the name "test driven". With
        | a few exceptions, tests shouldn't drive development; the user
       | needs should. Test driven development essentially means: write
       | tests based on the users need, and then write code based on the
       | tests. It means that if your tests are wrong and your code passes
       | the tests, the code will be wrong, 100% chance, and you won't
       | notice because by focusing on the tests, you lost track of the
       | user needs. It is an extra level of indirection, and things get
       | lost in translation.
       | 
       | Another issue I have noticed personally: it can make you write
       | code no one understands, not even yourself. For example, your
        | function is supposed to return a number, but after testing, you
        | notice you are always off by +1. The solution: easy, subtract 1
        | from the final value. Why? Dunno, it passes the tests. It may
        | even work, but no one understands it, and it may bite you
        | later. Should I
       | work like that? Of course not, but this is a behavior that is
       | encouraged by the rapid feedback loop that TDD permits. I speak
       | from experience, I wrote some of my worst code using that method.
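        | 
        | Concretely, the smell looks something like this (made-up
        | Python, not from any real codebase):
        | 
        |   def widgets_remaining(queue):
        |       # Always came out one too high against the tests;
        |       # subtracting 1 made them green. Nobody knows why.
        |       return len(queue) - 1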
       | 
       | If you want an analogy of why I am not a fan of TDD: if you are a
       | teacher and give your students the test answers before you start
       | your lesson, most will probably just study the test and not the
       | lesson, and as a consequence they will most likely end up with
       | good grades but poor understanding of the subject.
        
       | elboru wrote:
        | One of the biggest issues with our industry is the ambiguity in
        | our definitions. The author mentions "unit tests" as if it were a
        | well-defined term. But some people understand "unit" as a class,
        | others understand it as a module, others as a behavior. Some
       | TDDers write unit tests that would be considered "integration
       | tests" by other developers.
       | 
       | Then we have TDD itself, there are at least two different schools
       | of TDD. What the author calls "maximal TDD" sounds like the
       | mockist school to me. Would his criticism also apply to the
       | classical school? I'm sincerely curious.
       | 
       | If we don't have a common ground, communication becomes really
       | difficult. Discussion and criticism becomes unfruitful.
        
       | gregmac wrote:
       | The author defines two types of TDD: "weak TDD" and "strong TDD".
       | I'd argue there's another, though I'm not sure what to call it --
       | "Pragmatic TDD" perhaps? What I care about is having unit tests
       | that cover the complicated situations that cause bugs. I think
       | one of the main problems with TDD is its proponents focus so much
       | on the process as opposed to the end result.
       | 
       | The way I practice "pragmatic TDD" is to construct my code in a
       | way that allows it to be tested. I use dependency injection. I
       | prefer small, static methods when possible. I try not to add
       | interfaces unless actually needed, and I also try to avoid
       | requiring mocks in my unit tests (because I find those tests
       | harder to write, understand, and maintain).
       | 
       | Notably: I explicitly don't test "glue code". This includes stuff
       | in startup -- initializing DI and wiring up config -- and things
       | like MVC controllers. That code just doesn't have the cost-
       | benefit to writing tests: it's often insanely difficult to test
       | (requiring lots of mocks or way over-complicated design) and it's
       | obvious when broken as the app just won't work at all.
       | Integration or UI automation tests are a better way to check this
       | if you want to automate it.
       | 
       | I strive to just test algorithm code. Stuff with math, if/else
       | logic, and parsing. I typically write the code and tests in
       | parallel. Sometimes I start writing what I think is a simple glue
       | method before realizing it has logic, so I'll refactor it to be
       | easy to test: move the logic out to its own method, make it
       | static with a couple extra parameters (rather than accessing
       | instance properties), move it to its own class, etc.
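        | 
        | Roughly this kind of move, sketched in Python (all the names
        | here are made up):
        | 
        |   class InvoiceJob:
        |       def send_reminders(self, invoice):
        |           # Glue: not unit tested, obvious when broken.
        |           if self.is_overdue(invoice.due_date, self.today()):
        |               self.mailer.send(invoice.customer_email)
        | 
        |       @staticmethod
        |       def is_overdue(due_date, today):
        |           # Extracted logic: static, takes its inputs as
        |           # parameters instead of reading instance state,
        |           # trivial to test with no mocks.
        |           return today > due_date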
       | 
       | Sometimes I write tests first, sometimes last, but most often I
       | write a few lines of code before I write the first tests. As I
       | continue writing the code I think up a new edge case and go add
       | it as a test, and then usually that triggers me to think of a
       | dozen more variations which I add even if I don't implement them
       | immediately. I try not to have broken commits though, so I'll
       | sometimes comment out the broken ones with a `TODO`, or
       | interactive rebase my branch and squash some stuff together. By
       | the time anyone sees my PR everything is passing.
       | 
       | I think the important thing is: if you look at my PR you can't
       | tell what TDD method I used. All you see is I have a bunch of
       | code that is (hopefully) easy to understand and has a _lot_ of
       | unit tests. If you want to argue some (non-tested) code I added
       | should have tests, I'm happy to discuss and/or add tests, but
       | your argument had better be stronger than "to get our code
       | coverage metric higher".
       | 
       | Whether I did "strong red-green-refactor TDD" or "weak TDD" or
       | "pragmatic TDD" the result is the same. I'd argue caring about
       | _how_ I got there is as relevant as caring about what model of
       | keyboard I used to type it.
        
       | geodel wrote:
       | The way I see it, all this cultish crap: Agile, TDD, Scrum,
       | Kanban, XP, etc. works when essentially the same thing is being
       | done for the nth time. I have seen plenty of _success_ with these
       | when the same project is roughly repeated for many different
       | clients.
       | 
       | It is also no surprise that these terms mostly have to do with IT
       | or related consulting, and not really with engineering endeavors.
       | In my first-hand experience working in an engineering department,
       | a whole lot of work got done with almost non-existent buzzword
       | bullshit. Later, after a merger, it became an IT department, so
       | now there is endless money for process training, resources, scrum
       | masters and so on, but little money left for a half-decent
       | computer setup.
       | 
       | Outside work I have seen this in my cooking: the first time, a
       | new dish is a hassle, but in future iterations I create little
       | in-brain task-list tickets for my own processing. Doing this the
       | jackasstic consulting-framework way would turn 1 hr worth of
       | butter chicken recipe into 1 month worth of taste-feature
       | implementation sprints.
        
       | Sohcahtoa82 wrote:
       | I got turned off from TDD in my senior year getting my CS degree.
       | 
       | During class, the teacher taught us TDD, using the Test-Code-
       | Refactor loop. Then he wanted us to write an implementation of
       | Conway's Game of Life using TDD. As the students were doing it,
       | he was doing it as well.
       | 
       | After the lesson but before the exercise, I thought "This looks
       | tedious and looks like it would make coding take far longer than
       | necessary" and just wrote the Game first, then wrote a couple
       | dozen tests. Took me probably about 45 minutes.
       | 
       | At that point, I looked up on the projector and saw the teacher
       | had barely done much more than having a window, a couple buttons,
       | and some squares drawn on it, and a dozen tests making sure the
       | window was created, buttons were created, clicking the button
       | called the function, and that the calls to draw squares
       | succeeded.
       | 
       | What really bothers me about "true" TDD (and TFA points this
       | out) is that if you're writing _bare minimum code_ to make a
       | unit test pass, then it will likely be incorrect. Imagine writing
       | an abs() function, and your first test is "assert (abs(-1) ==
       | 1)". So you write in your function "if (i == -1) return 1".
       | Congrats, you wrote the bare minimum code. Tadaa! TDD!
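       | 
       | Spelled out (a Python sketch of the same degenerate loop):
       | 
       |   # Step 1: minimal failing test.
       |   def test_abs_of_minus_one():
       |       assert my_abs(-1) == 1
       | 
       |   # Step 2: the "bare minimum" code that passes it.
       |   def my_abs(i):
       |       if i == -1:
       |           return 1
       | 
       |   # Only piling on more cases (my_abs(-2), my_abs(3), ...)
       |   # forces the real implementation:
       |   def my_abs(i):
       |       return -i if i < 0 else i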
        
         | gnulinux wrote:
          | I'm really sorry, but this kind of "I mocked up a simple
          | program in 45 mins when my TDD-practicing counterpart took
          | longer" comment means nothing. The code is never written once
          | and done. If you were maintaining that code for the next 6
          | years and there was no rush to ship it, it absolutely doesn't
          | matter how fast it was written in the first place. I would
          | much rather take code that was written better in 6 hours than
          | bad code written in 45 mins. I'm not saying you wrote bad
          | code, but time to ship, in general, rarely matters in this
          | context.
        
       | metanonsense wrote:
       | I always liked the discussion "Is TDD dead" between David
       | Heinemeier Hansson (of Ruby on Rails and Basecamp fame) and Kent
       | Beck. DHH arguing against, Kent Beck obviously in favor of TDD.
       | Martin Fowler is moderator and the discussion is very nuanced and
       | slowly identifies areas where TDD has its benefits and where it
       | should rather be avoided.
       | https://martinfowler.com/articles/is-tdd-dead/
        
       | tippytippytango wrote:
       | The main reason TDD hasn't caught on is there's no evidence it
       | makes a big difference in the grand scheme of things. You can't
       | operationalize it at scale either. There is no metric or
       | objective test that you can run code through that will give you a
       | number in [0, 1] that tells you the TDDness of the code. So if
       | you decide to use TDD in your business, you can't tell the degree
       | of compliance with the initiative or correlation with any
       | business metrics you care about. The customers can't tell if the
       | product was developed with TDD.
       | 
       | Short of looking over every developer's shoulder, how do you
       | actually know the extent to which TDD is being practiced as
       | prescribed? (red, green, refactor) Code review? How do you
       | validate your code reviewer's ability to identify TDD code? What
       | if someone submits working tested code; but, you smell it's not
       | TDD, what then? Tell them to pretend they didn't write it and
       | start over with the correct process? What part of the development
       | process to you start to practice it? Do you make the R&D people
       | do it? Do you make the prototypers do it? What if the prototype
       | got shipped into production?
       | 
       | Because of all this, even if the programmers really do write good
       | TDD code, the business people still can't trust you; they still
       | have to QA test all your stuff. Because they can't measure TDD,
       | they have no idea when you are doing it. Maybe you did TDD for
       | the last release but are starting to slip? Who knows; just QA
       | the product anyway.
       | 
       | I like his characterization of TDD as a technique. That's exactly
       | what it is, a tool you use when the situation calls for it. It's
       | a fantastic technique when you need it.
        
         | [deleted]
        
         | samatman wrote:
         | One can enforce the use of TDD through pair programming with
         | rotation, as Pivotal does.
         | 
          | I don't know that Pivotal (in particular) does pair programming
          | so that TDD is followed, but I do know that they (did) follow
          | TDD and do everything via pair programming. I'm agnostic as to
          | whether it's a good idea generally; it's not how I want to
          | live, but I've had a few associates who really liked it.
        
         | mehagar wrote:
         | You make a good point about not being able to enforce that TDD
         | is actually followed. The best we could do is check that unit
         | tests exist at all.
         | 
          | In theory, if TDD really reduces the number of bugs and speeds
          | up development, you would see it reflected in those higher-
          | level metrics that impact the customer.
        
           | agloeregrets wrote:
            | > In theory, if TDD really reduces the number of bugs and
            | speeds up development, you would see it reflected in those
            | higher-level metrics that impact the customer.
           | 
           | The issue is that many TDD diehards believe that bugs and
           | delays are made by coders who did not properly qualify their
           | code before they wrote it.
           | 
           | In reality, bugs and delays are a product of an organization.
           | Bad coders can write bad tests that pass bad code just fine.
           | Overly short deadlines will cause poor tests. Furthermore,
            | many coders report that they have trouble with the task-
           | switching nature of TDD. To write a complex function, I will
           | probably break it out into a bunch of smaller pure functions.
           | In TDD that may require you to either: 1. Write a larger
           | function that passes the test and break it down. 2. Write a
           | test that validates that the larger function calls other
           | functions and then write tests that define each smaller
           | function.
           | 
            | The problem with these flows is that 1 causes rework, and 2
            | ends up being like reading a book out of order: you may get
            | to function 3 and realize that function 2 needed additional
            | data, and now you have to rewrite your test for 2. Once
            | again, rework. I'm sure there are some gains in some spaces,
            | but overall it seems that the rework burns those gains off.
        
       | shadowgovt wrote:
       | Where I come from, unit-test-driven-development tends to be a
       | waste of resources. The interfaces will change so much during
       | development that anything you write initially is guaranteed to be
       | torn up. The one exception is if you're writing an interface that
       | crosses teams; for "ship your org chart" reasons, we not only
       | can, but must assume that interface is stable enough to mock and
       | test against (and a knife-fight is necessary if it isn't).
       | 
       | However, getting the client to agree that their design feature is
       | satisfied by a specific set of steps, then writing software that
       | satisfies that request, _is_ a form of test-driven-development
       | and I support it.
        
       | ajkjk wrote:
       | I feel like TDD's usefulness depends very much on what type of
       | code you're writing.
       | 
       | If it's C libraries that do lots of munging of variables, like
       | positioning UI or fiddling with data structures... then yes,
       | totally: it has well-defined requirements that you can assert in
       | tests before you write it.
       | 
       | If it's like React UI code, though, get out of here. You
       | shouldn't even really be writing unit tests for most of that
       | (IMO), much less blocking on writing it first. It'll probably
       | change 20 times before it's done anyway; writing the tests up
       | front is just going to be annoying.
        
         | mal-2 wrote:
         | Definitely agree. In the time it took you to mock your state
         | management, the backend endpoints, and the browser localStorage
         | to isolate your unit, you probably could have written it in
         | Playwright end-to-end with nothing mocked. Then you'd actually
         | know if your React code broke when the API changed, instead of
         | pretending your out-of-date mock is still in sync.
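         | 
         | A minimal sketch of that end-to-end style, using Playwright's
         | sync Python API against a hypothetical local app (the URL and
         | selectors are invented):
         | 
         |   from playwright.sync_api import sync_playwright
         | 
         |   # Nothing mocked: real browser, real backend, real storage.
         |   with sync_playwright() as p:
         |       browser = p.chromium.launch()
         |       page = browser.new_page()
         |       page.goto("http://localhost:3000/login")
         |       page.fill("#email", "user@example.com")
         |       page.fill("#password", "hunter2")
         |       page.click("button[type=submit]")
         |       assert "Dashboard" in page.inner_text("h1")
         |       browser.close()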
        
       | [deleted]
        
       | brightball wrote:
       | As with anything, there's going to be a group of strict
       | adherents, strong opposition, and a set of people who have used
       | it enough to apply it only where useful.
       | 
       | It's definitely useful, but those strongly opposed often won't
       | use it at all unless it's mandated, which tends to lead to strict
       | adherence policies at a lot of companies.
        
       | [deleted]
        
       | dbrueck wrote:
       | It's all about tradeoffs. I've done a few decades of non-TDD with
       | a middle period of ~5 years of zealot-level commitment to TDD,
       | and as a rule of thumb, the cost is _usually_ not worth the
       | benefit.
       | 
       | Some hidden/unexpected side effects of TDD include the often
       | extremely high cost of maintaining the tests once you get past
       | the simple cases, the subtle incentive not to think too
       | holistically about certain things, and your progression as a
       | developer: you naturally improve and stop writing the types of
       | bugs that basic tests are good at catching, yet you keep writing
       | those tests anyway (a real benefit, sure, but one that further
       | devalues the tests). The cost of creating a test that would have
       | caught the really "interesting" bugs is often exorbitant, both up
       | front and to maintain.
       | 
       | The closest thing I've encountered to a reliable exception is
       | that having e.g. a comprehensive suite of regression tests is
       | _really_ great when you are doing a total rewrite of a library or
       | critical routine. But even that doesn't necessarily mean that
       | the cost of creating and maintaining that test suite was worth
       | it, and so far every time I've encountered this situation, it's
       | always been relatively easy to amass a huge collection of real
       | world test data, which not only exercises the code to be replaced
       | but also provides you a high degree of confidence that the
       | rewrite is correct.
        
       | gravytron wrote:
       | TDD helps facilitate the process of building up confidence in the
       | product. However, the value it adds is contextual: if a product
       | is not yet well defined and at a certain point of maturity, it
       | may not be helpful to shift gears and adopt TDD.
       | 
       | But fundamentally no one should ever be trying to merge code that
       | hasn't been unit tested. If they are, that is a huge problem
       | because it shows arrogance, ignorance, willingness to kick-the-
       | can-down-the-road, etc.
       | 
       | From an engineering perspective the problem is simple: if you're
       | not willing to test your solution then you have failed to
       | demonstrate that you understand the problem.
       | 
       | If you're willing to subsidize poor engineering then you're going
       | to have to come to terms with adopting TDD eventually, at some
       | stage of the project's lifecycle, because you have created an
       | environment where people have merged untested code and you have
       | no way to guarantee to stakeholders that you're not blowing
       | smoke. More importantly, your users care. Because your users are
       | trusting you. And you should care most of all about your users.
       | They are the ones paying your bills. Be good to them.
        
       | jldugger wrote:
       | > You write more tests. If writing a test "gates" writing code,
       | you have to do it. If you can write tests later, you can keep
       | putting it off and never get around to it. This, IMO, is the
       | principal benefit of teaching TDD to early-stage programmers.
       | 
       | Early stage programmers and all stages of project manager.
        
       | danpalmer wrote:
       | As with almost every spectrum of opinions, the strongest opinions
       | are typically the least practical, useful only in a theoretical
       | theoretical sense and for evolving the conversation in new
       | directions.
       | 
       | I think TDD has a lot to offer, but don't go in for the purist
       | approach. I like Free Software but don't agree with Stallman.
       | It's the same thing.
       | 
       | The author takes a well reasoned, mature, productive, engineering
       | focused approach, like the majority of people should be doing. We
       | shouldn't be applying the pure views directly, we should be
       | informed by them and figure out what we can learn for our own
       | work.
        
         | totetsu wrote:
         | But we need FDD to cover the full spectrum of options.
        
         | discreteevent wrote:
         | This was the funny thing about extreme programming. I remember
         | reading the book when it came out. In it Kent Beck more or less
         | said that he came up with the idea because waterfall was so
         | entrenched that he thought the only way to move the dial back
         | to something more incremental was to go to the other extreme
         | end.
         | 
         | This took off like wildfire probably for the same reason that
         | we see extreme social movements/politics take off. People love
         | purity because it's so clean and tidy. Nice easy answers. If I
         | write a test for everything something good will emerge. No need
         | for judgement and hand wringing.
         | 
         | But the thing is that I think Kent Beck got caught up in this
         | himself and forgot the original intention. I could be wrong but
         | it seems like that.
        
           | ad404b8a372f2b9 wrote:
           | Increasingly I've been wondering whether these agile
           | approaches might be a detriment to most open source projects.
           | 
           | There is a massive pool of talented and motivated programmers
           | that could contribute to open source projects, much more
           | massive than any company's engineering dept, yet most
           | projects follow a power law where a few contributors write
           | all the code.
           | 
           | I think eschewing processes and documentation in favour of
           | pure programming centered development, where tests & code
           | serve as documentation and design tools, means the barrier to
           | entry is much higher, and onboarding new members is
           | bottlenecked by their ability to talk with the few main
           | contributors.
           | 
           | The most successful open source projects have a clear
           | established process for contributing and a lot of
           | documentation. But the majority don't have anything like
           | that, and that's only exacerbated by git hosting platforms
           | that put all their emphasis on code over process. I wonder
           | whether setting up new tools around git allowing for all
           | projects to follow the waterfall or a V-cycle might improve
           | the contribution inequality.
        
       | stonemetal12 wrote:
       | I am not a TDD person, but when you write some code you want to
       | see if it works. So you either write a unit test, or you plug in
       | your code and do the whole song and dance to get execution to
       | reach your new code.
       | 
       | I see TDD as REPL-driven development for languages without a
       | REPL. It allows you to play with your code in a tighter feedback
       | loop than you generally have without it.
        
         | JonChesterfield wrote:
         | It's closer to a repl with save state and replay. A repl will
         | get the code working faster than tests but doesn't easily allow
         | rechecking the same stuff later when things change (either your
         | code or the users of it). I haven't seen a repl with
         | save&replay but that might be a really efficient way to write
         | the unit tests.
        
           | sedachv wrote:
           | You just copy-and-paste the relevant input-output and there
           | is your test. There isn't a need for any extra tools when
           | using the REPL to come up with regression tests (obviously a
           | REPL cannot be used to do TDD).
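            | 
            | Python's doctest module is essentially this workflow built
            | in: paste a REPL transcript into a docstring and it becomes
            | a regression test.
            | 
            |   def median(xs):
            |       """
            |       >>> median([3, 1, 2])
            |       2
            |       >>> median([1, 2, 3, 4])
            |       2.5
            |       """
            |       s, n = sorted(xs), len(xs)
            |       mid = n // 2
            |       return s[mid] if n % 2 else (s[mid - 1] + s[mid]) / 2
            | 
            |   if __name__ == "__main__":
            |       import doctest
            |       doctest.testmod()  # replays the pasted session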
        
       | gherkinnn wrote:
       | > Testable code is best code. Only TDD gets you there.
       | 
       | I smell a circular argument but can't quite put my finger on it.
       | 
       | > If it doesn't work for you, you're doing it wrong
       | 
       | Ah. Start with a nigh unattainable and self-justifying moral
       | standard, sell services to get people there, and treat any
       | deviation as heresy. How convenient. Reminds me of Scrum
       | evangelists. Or a cult.
       | 
       | TDD is a great tool for library-level code and in cases where the
       | details are known upfront and stable.
       | 
       | But I can't get it to work for exploratory work or anything
       | directly UI related. It traps me in a local optimum and draws my
       | focus to the wrong places.
        
       | fasteddie31003 wrote:
       | The TDD tradition comes from dynamically typed languages. If you
       | write a Ruby or JavaScript function, it's got a good chance of
       | not working the first time you run it. However, with statically
       | typed languages your function has a much better chance of running
       | if it compiles. IMO TDD only makes sense for dynamically typed
       | languages.
        
       | zwieback wrote:
       | A lot of the software engineering approaches from that era
       | (refactoring, TDD, patterns) make more sense in the world I grew
       | up in: large pre-compiled code bases where everything other than
       | the base OS layer is under the engineer's control. If you have to
       | ship your SW as an installable which will end up on someone's
       | machine far away your mindset will be more defensive.
       | 
       | In this day and age of vastly distributed systems where
       | distribution and re-distribution is relatively cheap we can
       | afford to be a little less obsessive. Many exceptions still
       | exist, of course, I would think that the teams developing my
       | car's control system might warm up to TDD a bit more than someone
       | putting together a quickie web app.
        
         | buscoquadnary wrote:
         | I think you make an important point. It used to be I'd have to
         | worry about the OS layer, and that was it. Now I have half a
         | dozen layers running between my code and the actual die
         | executing instructions and as consequence of that I've lost a
         | considerable amount of control.
         | 
         | The funny thing is I end up spending just as much time trying
         | to debug or figure out the other layers (looking at you, AWS
         | IAM) that I don't feel I am that much more productive. I've
         | just taken what my code needed to do and scattered it to the
         | four winds. Now instead of dealing with an OS and the code, I'm
         | fighting with Docker, and a cloud service, and permissions, and
         | networking, and a dozen other things.
         | 
         | Honestly this feels like the OOP hype era of Object Databases
         | and Java EE all over again, just with tooling substituted for
         | OOP this time.
        
       | agentultra wrote:
       | I've been in software development for over twenty years.
       | 
       | I have similar feelings about maximalism in a lot of areas.
       | 
       | Many organizations producing software today don't share many
       | values with me as an engineer. Startups aren't going to value
       | correctness, reliability, and performance nearly as much as an
       | established hardware company. A startup is stumbling around
       | trying to find a niche in a market to exploit. They will value
       | time to market above most anything else: quick, fast solutions
       | with minimal effort. Almost all code written in this context is
       | going to be sloppy balls of mud. The goal of the organization is
       | to cash out as fast as possible; the code only has to be
       | sufficient to find product-market fit and everyone riding the
       | coat-tails of this effort will tolerate a huge number of software
       | errors, performance issues, etc.
       | 
       | In my experience practicing TDD in the context of a startup is a
       | coping mechanism to keep the ball of mud going long enough that
       | we don't drown in errors and defects. It's the least amount of
       | effort to maintain machine-checked specifications that our
       | software does what we think it does. It's not great. In other
       | contexts it's not even sufficient. But it's often the only form
       | of verification you can get away with.
       | 
       | Often startups will combine testing strategies and that's
       | usually, "good enough." This tends to result in the testing
       | pyramid some might be familiar with: many unit tests at the
       | bottom, a good amount of integration tests in the middle, and
       | some end-to-end tests and acceptance tests at the top.
       | 
       | However, the problem with TDD is that I often find it
       | insufficient. As the article alludes to, there are plenty of
       | cases where property-based testing gives stronger guarantees of
       | correctness and can prevent a great deal more errors by stating
       | properties about our software that must be true in order for our
       | system to be correct: queued items must always be ordered
       | appropriately, state must be fully re-entrant, algorithms must be
       | lock-free. These things are extremely hard to prove with
       | examples; you need property tests at a minimum.
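       | 
       | For instance, the "queued items must always be ordered" property
       | might be stated like this (a sketch using the Hypothesis library
       | and a heap-backed queue):
       | 
       |   import heapq
       |   from hypothesis import given, strategies as st
       | 
       |   # Property: whatever we push, items come out ordered.
       |   @given(st.lists(st.integers()))
       |   def test_items_drain_in_order(items):
       |       q = []
       |       for item in items:
       |           heapq.heappush(q, item)
       |       drained = [heapq.heappop(q) for _ in range(len(items))]
       |       assert drained == sorted(items)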
       | 
       | The difficulty with this is that the skills used to think about
       | _correctness_, _reliability_, and _performance_ require a certain
       | level of mathematical sophistication that is not introduced into
       | most commercial/industrial programming pedagogy.
       | Teaching programmers how to think about what it would mean to
       | specify that a program is correct is a broad and deep topic that
       | isn't very popular. Most people are satisfied with "works for
       | me."
       | 
       | In the end I tend to agree that it takes a portfolio of
       | techniques and the wisdom to see the context you're working in to
       | choose the appropriate techniques that are sufficient for your
       | goals. If you're working at a startup where the consequences are
       | pretty low it's unlikely you're going to be using proof repair
       | techniques. However if you're working at a security company and
       | are providing a verified computing base: this will be your bread-
       | and-butter. Unit tests alone would be insufficient.
        
       | bonestamp2 wrote:
       | We follow what we call TID (Test Informed Development).
       | 
       | Basically, we know that we're going to have to write tests when
       | we're done, so we are sure to develop it in a way that is going
       | to be (relatively) easy to write accurate and comprehensive tests
       | for.
        
       | woeirua wrote:
       | TDD is great for some types of code, where the code is mostly
       | self-contained with few external dependencies and the expected
       | inputs and outputs are well defined and known ahead of time.
       | 
       | TDD is miserable for code that is dependent on data or external
       | resources (especially stateful resources). In most cases, writing
       | "integration" tests feels like it's not worth the effort, given
       | all the code that goes into managing those external resources.
       | Yes, I know about mocking. But mocking frameworks are: 1 - not
       | trivial to use correctly, and 2 - often don't implement all the
       | functionality you may need to mock.
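       | 
       | Even a "simple" mock of a stateful resource shows the problem:
       | the mock ends up re-implementing the state machine you were
       | trying to avoid (a sketch with Python's unittest.mock):
       | 
       |   from unittest.mock import Mock
       | 
       |   db = Mock()
       |   db.get.return_value = None    # fine for one canned read...
       | 
       |   state = {}                    # ...but stateful behaviour
       |   db.put.side_effect = state.__setitem__  # needs hand-rolled
       |   db.get.side_effect = state.get          # fakes anyway
       | 
       |   db.put("k", "v")
       |   assert db.get("k") == "v"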
        
         | evouga wrote:
         | I completely agree. I'll use TDD when implementing a function
         | "where the code is mostly self-contained with few external
         | dependencies and the expected inputs and outputs are well
         | defined and known ahead of time" _and_ where the function is
         | complex enough that I'm uncertain about its correctness.
         | Though I find I usually do property testing, or comparison to a
         | baseline on random inputs, similar to the quicksort example in
         | the blog post (against a slow, naive implementation of the
         | function; or an older version of the function, if I'm
         | refactoring) rather than straight TDD.
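         | 
         | Concretely, the baseline comparison might look like this
         | (a sketch; fast_sort stands in for the implementation under
         | test):
         | 
         |   import random
         | 
         |   fast_sort = sorted  # placeholder for the code under test
         | 
         |   def naive_sort(xs):
         |       """Slow, obviously-correct baseline: selection sort."""
         |       xs, out = list(xs), []
         |       while xs:
         |           m = min(xs)
         |           xs.remove(m)
         |           out.append(m)
         |       return out
         | 
         |   def test_fast_sort_matches_baseline():
         |       for _ in range(1000):
         |           n = random.randint(0, 30)
         |           xs = [random.randint(-50, 50) for _ in range(n)]
         |           assert fast_sort(xs) == naive_sort(xs)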
         | 
         | When debugging, I'll also turn failure cases into unit tests
         | and add them to the CI. The cost to write the test has already
         | been paid in this case, so using them to catch regressions is
         | all-upside.
         | 
         | System tests are harder to do (since they require reasoning
         | about the entire program rather than single functions) but in
         | my experience are the most productive, in terms of catching the
         | most bugs in least time. Certainly every minute spent writing a
         | framework for mocking inputs into unit tests should probably
         | have been spent on system testing instead.
        
         | zoomablemind wrote:
         | >...TDD is great for some types of code, where the code is
         | mostly self-contained with few external dependencies and the
         | expected inputs and outputs are well defined and known ahead of
         | time.
         | 
         | I find that TDD is very well suited to fixing the expectations
         | placed on external dependencies.
         | 
         | Of course, when such a dependency is extensive, like an API
         | wrapper, writing equally extensive tests would be redundant.
         | Even then, the core aspects of the external dependencies should
         | be pinned down testably.
         | 
         | Testing is a balance game, even with TDD. The goal is to
         | increase certainty under dynamic changes and increasing
         | complexity.
        
       | ChrisMarshallNY wrote:
       | I find some of the _techniques_ espoused by TDD proponents to be
       | quite useful.
       | 
       | In _some_ of my projects.
       | 
       | Like any technique, it's not dogma; just another tool.
       | 
       | One of my biggest issues with "pure" TDD is the requirement to
       | have a very well-developed upfront spec, which is actually a good
       | thing.
       | 
       |  _sometimes_.
       | 
       | I like to take an "evolutionary" approach to design and
       | implementation[0], and "pure" TDD isn't particularly helpful
       | here.
       | 
       | [0] https://littlegreenviper.com/miscellany/evolutionary-
       | design-...
       | 
       | Also, I do a lot of GUI and device interface stuff. Unit tests
       | tend to be a problem in these types of scenarios (no, "UI unit
       | testing" is not a solution I like). That's why I often prefer
       | test harnesses[1]. My testing code generally dwarfs my
       | implementation code.
       | 
       | [1] https://littlegreenviper.com/miscellany/testing-harness-
       | vs-u...
       | 
       | Here's a story on how I ran into an issue, early on[2].
       | 
       | [2] https://littlegreenviper.com/miscellany/concrete-
       | galoshes/#s...
        
         | twic wrote:
         | > One of my biggest issues with "pure" TDD, is the requirement
         | to have a very well-developed upfront spec; which is actually a
         | good thing.
         | 
         | Can you expand on what you mean by "a very well-developed
         | upfront spec"? Because that doesn't sound at all like TDD as i
         | know it.
         | 
         | I work on software that takes in prices for financial
          | instruments and does calculations with them. Initially there
         | was one input price for everything. A while ago, a requirement
         | came up to take in price quotes from multiple authorities,
         | create a consensus, and use that. I had a chat with some expert
         | colleagues about how we could do that, so i had a rough idea of
         | what we needed. Nothing written down.
         | 
         | I created an empty PriceQuoteCombiner class. Then an empty
         | PriceQuoteCombinerTest class. Then i thought, "well, what is
         | the first thing it needs to do?". And decided "if we get a
         | price from one authority, we should just use that". So i wrote
         | a test that expressed that. Then made it pass. Then thought
         | "well, what is the next thing it should do?". And so on and so
         | forth. And today, it has tests for one authority, multiple
         | authorities, no authorities, multiple authorities but then one
         | sends bad data, multiple authorities and one has a suspicious
         | jump in its price which might be correct, might not, and many
         | more cases.
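          | 
          | In Python terms, the first step might have looked something
          | like this (my paraphrase; the interface is invented):
          | 
          |   class PriceQuoteCombiner:
          |       def __init__(self):
          |           self.quotes = []
          | 
          |       def add_quote(self, authority, price):
          |           self.quotes.append((authority, price))
          | 
          |       def consensus(self):
          |           # Minimal code for the first test; later tests
          |           # (multiple authorities, bad data, suspicious
          |           # jumps) grow this into the real logic.
          |           return self.quotes[0][1]
          | 
          |   # First test: one authority, just use its price.
          |   def test_single_authority_price_is_used():
          |       combiner = PriceQuoteCombiner()
          |       combiner.add_quote(authority="A", price=101.5)
          |       assert combiner.consensus() == 101.5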
         | 
         | The only point at which i had anything resembling a well-
         | developed upfront spec was when i had written a test, and that
         | was only an upfront spec for the 1-100 lines of implementation
         | code i was about to write.
         | 
         | So your mention of "a very well-developed upfront spec" makes
         | me wonder if you weren't actually doing TDD.
         | 
         | No argument about testing user interfaces, though. There is no
         | really good solution to that, as far as i know.
        
           | ChrisMarshallNY wrote:
           | You still had an "upfront" spec. You just didn't write it
           | down, and applied it in a linear fashion. I assume that you
           | are quite experienced and good, so it was a good way to do
           | things.
           | 
           | I tend to be _highly_ iterative at the design, and even
           | _requirements_ level, in my work. Not for the faint of heart.
           | I often toss out _tons_ of code. I literally have no idea how
           | something will work, until I've beat on implementation (not
           | prototype) code, rewritten it a couple of times, and maybe
           | refactored it in-place. Then, I refine it.
           | 
           | I seldom write down a thing (concrete galoshes). If I do,
           | it's a napkin sketch that can be binned after a short time,
           | because it no longer resembles reality.
           | 
           | I'm a big believer in integration testing, as early as
           | possible. The way I do things, makes that happen.
           | 
            | It's the way I work. But an important factor is that I tend
           | to work alone. My scope is, by necessity, constrained, but I
           | get more done, on my own, than many teams get done, and a lot
           | faster, higher-Quality, and better-documented. I also write
           | the kind of software that benefits from, and affords, my
           | development style. If I was writing engine code, my way would
            | be considered quite reckless (but maybe not _that_ reckless.
            | I used my "evolutionary design" process on my BAOBAB server,
            | which I outline in [0] above, and that server has been the
           | engine for the app I've been refining for the last couple of
           | years. The process works great. It's just a lot more pedantic
           | and disciplined).
           | 
           | If I work on a team, then the rules are different. I believe
           | in whitebox interface, and blackbox implementation. That's a
           | great place to have TDD.
           | 
            |  _> makes me wonder if you weren't actually doing TDD._
           | 
           | I get that kind of thing a lot. I fail most litmus tests.
           | It's a thing...
           | 
            | Lots of folks here would defecate masonry at the way I work.
           | 
           | I don't have to please anyone during the process. They just
           | see the end result; which is generally stellar.
           | 
            | Since one thing that geeks _love_ to do is sneer at other
            | geeks, I am sort of doing a public service. No need to thank
            | me. The work is its own reward.
        
       | _greim_ wrote:
       | I've chosen to interpret TDD as "test driven design", based on
       | the idea that systems designed to be easily unit-testable tend to
       | also be easier to understand, maintain, extend, compose,
       | repurpose, refactor, etc.
       | 
       | I deviate from some proponents in that I think this kind of TDD
       | can be done while writing zero unit tests, but in practice they
       | keep you sensitized to good design techniques. Plus the tests do
       | occasionally catch bugs, and are otherwise a good forum to
       | exercise your types and illustrate your code in action.
        
       | sirsinsalot wrote:
       | There's a lot of conflating unit-testing/TDD and QA here.
       | 
       | Yes, when you start writing code, it may not be well defined, or
       | you (the coder) may not understand the requirement as intended.
       | That's OK.
       | 
       | Write your test. Make clear your assumptions and write the code
       | against that. Now your code is easier to refactor and acts as
       | living documentation of how you understood the requirement. It
       | also acts to help other engineers not break your code when they
       | "improve" it.
       | 
       | If QA, the client or God himself decides the code needs to change
       | later, for whatever reason, well that's OK too.
        
       | 0xbadcafebee wrote:
       | Why does TDD exist?
       | 
       | 1. We want a useful target for our software. You could design a
       | graphical mock-up of software and design your software to fit it.
       | Or you could create a diagram (or several). Or you could create a
       | piece of software (a test) which explains how the software is
       | supposed to work and demonstrates it.
       | 
       | 2. When we modify software over time, the software eventually has
       | regressions, bugs, design changes, etc. These problems are
       | natural and unavoidable. If we write tests before merging code,
       | we catch these problems quickly and early. Catching problems
       | early reduces cost and time and increases quality. (This concept
       | has been studied thoroughly, is at the root of practices such as
       | the Toyota Production System, and is now called _Shift Left_.)
       | 
       | 3. It's easy to over-design something, and hard to design it
       | "only as much as needed". By writing a simple test, and then
       | writing only enough code to pass the test, we can force ourselves
       | to write simpler code in smaller deliverable units. This helps
       | deliver value quicker by only providing what is needed and no
       | more.
       | 
       | 4. Other reasons that are "in the weeds" of software design, and
       | can be carefully avoided or left alone if desired. Depends on if
       | you're building a bicycle, a car, or a spaceship. :-)
       | 
       | But as in all things, the devil's in the details. It's easy to
       | run into problems following this method. It's also easy to run
       | into problems _not_ following this method. If you use it, you
       | will probably screw up for a while, until you find your own way
       | of making it work. You shouldn't use it for everything, and you
       | should use good judgement in how to do it.
       | 
       | This is an example of software being more craft than science. Not
       | every craftsperson develops the same object with the same
       | methods, and that's fine. Just because you use ceramic to make a
       | mug, and another person uses glass, doesn't mean one or the other
       | method is bad. And you can even make something _with both_. Try
       | to keep an open mind; even if you don't find them productive,
       | others do.
        
       | rodrigosetti wrote:
       | TDD doesn't work for the really interesting problems: you can't
       | achieve a deep creative solution through small mechanical
       | improvements.
        
       | yuan43 wrote:
       | > ... I practice "weak TDD", which just means "writing tests
       | before code, in short feedback cycles". This is sometimes
       | derogatively referred to as "test-first". Strong TDD follows a
       | much stricter "red-green-refactor" cycle:
       | 
       | > 1. Write a minimal failing test.
       | 
       | > 2. Write the minimum code possible to pass the test.
       | 
       | > 3. Refactor everything without introducing new behavior.
       | 
       | > The emphasis is on minimality. In its purest form we have Kent
       | Beck's test && commit || reset (TCR): if the minimal code doesn't
       | pass, erase all changes and start over.
       | 
       | An example would be helpful here. In fact, there's only a single
       | example in the entire article. That's part of the problem with
       | TDD and criticisms of it. General discussions leave too much to
       | the imagination and biases from past experience.
       | 
       | Give me an example (pick any language - it doesn't matter), and
       | now we can talk about something interesting. You have a much
       | better chance of changing my mind and I have a much better chance
       | of changing yours.
       | 
       | The example in the article (quick sort) is interesting, but it's
       | not clear how it would apply to different kinds of functions. The
       | author uses "property testing" to assert that a sorted list's
       | members are of ascending value. The author contrasts this with
       | the alleged TDD approach of picking specific lists with specific
       | features. It's not clear how this approach would translate to a
       | different kind of function (say, a boolean result). Nor is it
       | clear what the actual difference is because in both cases
       | specific lists are being chosen.
        
         | tikhonj wrote:
         | There was an example of what it means to write minimal code to
         | pass a test with QuickCheck, which illustrates pretty much
         | exactly the stuff you quoted.
        
       | 3pt14159 wrote:
       | This has been rehashed a million times.
       | 
       | My view is that TDD is great for non-explorative coding. So data
       | science -> way less TDD. Web APIs -> almost always TDD.
       | 
       | That said, one of the things I think the vast majority of the
       | anti-TDD-leaning crowd misses is that someone else on the team is
       | picking up the slack for you and you never really appreciated it.
       | I've joined too many teams, even great ones, where I needed to
       | make a change to an endpoint and there were no functional or
       | integration tests against it. So now _I'm_ the one that is
       | writing the tests _you_ should have written. _I'm_ the one that
       | has to figure out how the code should work, and _I'm_ the one
       | that puts it all together in a new test for all of your existing
       | functionality before I can even get started.
       | 
       | Had you written them in the first place I would have had a nice
       | integration test that documents the intended behaviour and guards
       | against regressions.
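       | 
       | A minimal sketch of the kind of endpoint test meant here
       | (assuming a hypothetical Flask-style app; the names are
       | invented):
       | 
       |   from myapp import create_app  # hypothetical app factory
       | 
       |   def test_get_user_returns_expected_shape():
       |       client = create_app(testing=True).test_client()
       |       resp = client.get("/api/users/42")
       |       assert resp.status_code == 200
       |       body = resp.get_json()
       |       assert body["id"] == 42
       |       assert "email" in body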
       | 
       | Basically I'm carrying water for you and the rest of the team
       | that has little to do with my feature.
       | 
       | Now there are some devs out there that don't need TDD to remember
       | to write tests, but I don't know many of them and they're usually
       | writing really weird stuff (high performance or video or
       | whatever).
       | 
       | But I have stopped concerning myself with changing other people's
       | minds on this. Some people just have naturally reactive minds,
       | and TDD isn't what they like, so they don't do it.
        
       | jmconfuzeus wrote:
       | I noticed that proponents of TDD are mostly consultants who sell
       | TDD courses or seminars.
       | 
       | You rarely see someone who writes production code preach TDD.
       | 
       | Something fishy there...
        
         | righttoolforjob wrote:
         | This is true. There are engineers who practice TDD as well,
         | although quite few in my experience. The code I've seen come
         | out from hardcore TDD is utter crap, because what truly matters
         | is a good design. In fact writing tests first by design
         | produces crappily designed and messy code and by extension
         | crappily designed tests. Hence you end up with crap all over.
        
         | ajkjk wrote:
         | No way, this is false. Tons of engineers preach TDD.
        
         | salawat wrote:
         | People who write production code generally have QA teams they
         | yeet the testing burden to.
         | 
         | Said teams absorb a lot of the pain of Soft Devs who don't
         | bother even running their own code.
        
           | joshstrange wrote:
            | That's fair, but let's not pretend QA is even in the same
            | ballpark as tests; QA is about a billion times more useful.
            | First and foremost because they didn't write the code.
           | 
            | I believe strongly that developers make horrible testers,
            | because testing requires a completely different mindset that
            | is not easy to switch into, and near impossible if you
            | yourself wrote the code being tested. We tend to lean into
            | the "happy path" and don't even consider "outlandish" things
            | a user might do. I have immense respect for a good QA person
           | and I've had the privilege of working with a number of them,
           | some I'd hire in a heartbeat if it were up to me.
           | 
           | I'm not 100% anti-testing but the only testing I've seen
           | produce real results was automated browser testing, and that
           | was only after 2 very large attempts by the development team
           | to do it. Finally we brought in someone whose sole job was
           | the automated browser testing suite. In a very short amount
           | of time he had something working that was producing useful
           | results, something our 2 previous attempts never did. I
           | believe it was in part since he didn't work with or write any
           | of the code, he had no preconceptions, he didn't "think" the
           | way we did since we knew the inner workings. From this and
           | other experiences I think QA and Dev should be 2 seperate
           | groups/teams without overlap (don't have devs do QA, they
           | don't like doing it and they just aren't good at it).
        
           | xtracto wrote:
           | The "throw shit at the wall and see what sticks" programming
           | technique.
           | 
           | Several years ago I was part of a team of developers in an
           | org that did not have a formal QA process. Code was "just OK"
           | but we shipped working products. At some point we (the
           | management) decided to add a QA step with formal QA Engineers
            | (doing part QA automation and manual QA). As a result,
            | engineers realized they could get "extra time" if they
            | delivered half-assed code with bugs for QA to catch, and
            | they became sloppy. That was painful.
        
       | MattPalmer1086 wrote:
       | I once tried to write something using a pure TDD approach. It was
       | enlightening.
       | 
       | Pluses: refactoring was easy, and I had confidence that the
       | system would work well at the end.
       | 
       | Minuses: it took a _lot_ longer to write, and I had to throw
       | away a lot of code and tests as my understanding increased. It
       | slowed down exploration immensely. Also, factoring the code to be
       | completely testable led to some dubious design decisions that I
       | wouldn't have made if I wasn't following a pure TDD approach.
       | 
       | On balance I decided it wasn't a generally good way to write
       | code, although I guess there may be some circumstances it works
       | well for.
        
       | daviding wrote:
       | The 'T' in TDD stands for design. :) The name of it has always
       | hurt the concept I think.
       | 
       | In my experience TDD uptake and understanding suffers because a
       | lot of developers are in a context of using an existing
       | framework, and that framework sort of fights against the TDD
       | concepts sometimes. Getting around that with things like
       | dependency injection, inversion of control, etc. then gets into
       | the weeds and all sorts of 'Why am I doing this?' pain.
       | 
       | Put another way, a lot of commercial development isn't the nice
       | green-field coding katas freedom, it's spelunking through 'Why
       | did ActiveRecord give me that?' or 'Why isn't the DOM refreshing
       | now?'. Any friction then gets interpreted as something wrong
       | about TDD and the flow gets stopped.
        
       | pkrumins wrote:
       | My advice is to follow the famous quote "given enough eyeballs
       | all bugs are shallow". Add a "send feedback" link in your
       | application and let your users quickly and easily notify you when
       | something goes wrong. My product has several million users and
       | has zero tests and when bugs get pushed to production, users tell
       | me in seconds. Sometimes pushing bugs to production is part of my
       | workflow and then quickly fixing them allows me to iterate at
       | record speeds.
        
       | SkyMarshal wrote:
       | TDD is just a huge ugly kluge to compensate for languages
       | designed with inadequate internal correctness guarantees. So
       | instead we have to tack on a huge infrastructure of external
       | correctness guarantees instead. TDD is an ad-hoc, informally-
       | specified, bug-ridden, slow implementation of half of a strong
       | type system.
        
         | Jtsummers wrote:
         | Unless your language includes a proper proof system for the
         | entire program logic (see Idris or SPARK/Ada for something
         | close to this, though in the latter it can only work with a
         | subset of the overall Ada language), you _will_ need tests.
         | Even in languages like Haskell, Rust, and Ada which have very
         | good and expressive type systems tests are helpful for
         | validating the actual logic of the system.
        
           | AnthonBerg wrote:
            | Agreed! Personally, I am of the opinion that Idris, for one,
            | is mature enough that there is no need to forego tools that
            | have a proper proof system for the entire program. It's
            | feasible today.
           | 
           | Idris is carefully and purposefully described by its creators
           | as _not_ production ready. Nonetheless, because of what Idris
           | _is_ , it's arguably _more_ production-ready than languages
           | which don't even attempt formal soundness to anywhere near
           | the same degree. In other words: Idris is not a complete
           | Idris. But! All the other languages are _even less complete
           | Idrises!_
           | 
           | Big old "personal opinion" disclaimer here though. -Let's
           | prove it's not possible to use Idris by doing it! Shall we?
        
           | SkyMarshal wrote:
           | Yes that's true, and imho the objective should be to move as
           | much of TDD as possible into the type system. Despite my OP
           | maybe implying it's binary, it's not, and getting closer to
           | that objective is just as worthy as getting all the way
           | there. It's still a hard problem and getting all the way
           | there will take years or decades more experience,
           | experimentation, research, and learning.
        
           | haspok wrote:
           | Yes, you will need tests, but do you need 1. TDD? 2. Unit
           | tests?
           | 
           | I agree with Jim Coplien when he argues that most unit
           | testing is waste. And TDD is even worse, because it is
           | equivalent to designing a complex system purely from the
           | bottom up, in miniature steps.
        
             | Jtsummers wrote:
             | > And TDD is even worse, because it is equivalent to
             | _designing_ a complex system purely from the bottom up, in
             | miniature steps. [emphasis added]
             | 
             | What fool uses TDD to _design_? The second  "D" is
             | "Development". If people want to act foolishly, let them.
             | Then come in later and make money cleaning up their mess.
        
               | MattPalmer1086 wrote:
               | Better design is one of the supposed benefits of TDD. The
               | article nicely demolishes that view, and I agree fully
               | with what it says.
               | 
               | There _is_ a small scale design benefit to writing tests,
               | and that is simply that you always have a  "user" of your
               | code, even if it's only focused on tiny bits of it.
               | 
               | But having said that, I get essentially the same design
               | benefit from writing tests afterwards, or writing a test
               | client, or writing user documentation. I usually discover
               | code needs some design improvement once I have to explain
               | or use it.
        
               | Jtsummers wrote:
               | It leads to better design (is the theory), but it is not
               | itself a design process. It's a development process. TDD
               | doesn't replace the need to stop, think, and consider
               | your design.
        
           | SomeCallMeTim wrote:
           | Needing tests != TDD.
           | 
           | Needing tests != Unit Tests.
           | 
           | Adding larger system tests after the fact is perfectly
           | reasonable. TDD wants you to write tiny tests for every
           | square millimeter of functionality. It's just not worth it,
           | and 99% of the value is to make up for shortcomings in
           | dynamic languages.
        
       | moomoo11 wrote:
       | I like TDD. It's just a tool on our tech belt. If done right
       | (takes practice and open mind tbh) the major benefit is you have
       | code that is single responsibility and easy to understand,
       | isolate, or modify.
       | 
       | We have so many things on our tech belt, like clean architecture
       | or x pattern. This is just another tool, and I think it helps
       | especially in building complex software.
       | 
       | Just be practical and don't try to be "the 100%er" who is super
       | rigid about things. Go into everything with a 80/20 mindset. If
       | this is something mission critical and needs to be as dependable
       | as possible, then use the tools best suited for it. If you're
       | literally putting buttons on the screen which Product is going to
       | scrap in two weeks, maybe use TDD for the code responsible for
       | dynamically switching code based on Product mindset that week.
        
       | bndr wrote:
       | There are three things in my opinion that speak against going
       | with TDD:
       | 
       | 1. Many companies are agile, and the requirements constantly
       | change, which makes implementing TDD even harder.
       | 
       | 2. TDD does not bring enough value to justify the investment of
       | time (for writing & maintaining the test suites), the benefits
       | are negligible, and the changes are often.
       | 
       | 3. Everything is subjective [1], and there's no reason to have
       | such strongly held opinions about the "only right way to write
       | code" when people write software in a way that is efficient for
       | their companies.
       | 
       | [1] https://vadimkravcenko.com/shorts/software-development-
       | subje...
        
       | Joker_vD wrote:
       | The fact that some people really argue that TDD produces _better_
       | designs... sigh. Here, look at this [0] implementation of
       | Dijkstra's algorithm, written by Uncle Bob himself. If you think
       | _that_ is well-designed (have you ever seen weighted graphs
       | represented like this?) then, well, I guess nothing will ever
       | sway your opinion on TDD. And mind you, this is a task that does
       | have what a top comment in this very thread calls a  "well
       | defined spec".
       | 
       | [0] https://blog.cleancoder.com/uncle-
       | bob/2016/10/26/DijkstrasAl...
        
         | jonstewart wrote:
         | In my personal experience, TDD helps me produce better designs.
         | But _thinking_ helps me produce better designs, too. There's a
         | lot of documentation showing that Creepy Uncle Bob isn't the
         | most thoughtful person, and I think this blog post says much
         | more about him than about TDD.
         | 
         | The code is definitely a horror show.
        
         | rmetzler wrote:
         | Can you link to an implementation you would consider great?
         | 
          | I would just like to compare them. I too find Uncle Bob's
          | "clean code" book very much overrated.
          | 
          | My understanding of the "design" aspect of TDD is that you
          | start from client code and create the code that conforms to
          | your tests. Too often I've worked on a team where I wanted to
          | use what other developers wrote; they had coded what was part
          | of the spec, but it was unusable from my code. Only because I
          | was able to change their code (most often the public API)
          | could I use it at all.
        
           | whimsicalism wrote:
           | It stores it as a collection of edges? Why not use adjacency
           | list representation?
           | 
            | You iterate through all of the edges every time to find a
            | node's neighbors?
           | 
           | idk, this code just looks terrible to me.
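            | 
            | Something like this, sketched in Python (a made-up toy
            | graph, not the code from the post):
            | 
            |   from collections import defaultdict
            | 
            |   # Adjacency list: node -> list of (neighbor, weight).
            |   # Finding a node's neighbors is one dict lookup,
            |   # O(degree), instead of a scan over every edge.
            |   graph = defaultdict(list)
            | 
            |   def add_edge(u, v, weight):
            |       graph[u].append((v, weight))
            | 
            |   add_edge("A", "B", 3)
            |   add_edge("B", "Z", 4)
            | 
            |   assert graph["A"] == [("B", 3)]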
        
             | sdevonoes wrote:
             | But TDD (the main topic being discussed here) has nothing
             | to do with that, right? I mean, how on earth is TDD going
             | to help you decide between a) using a simple data structure
             | like a collection and b) a more sophisticated data
              | structure like an adjacency list, if you have no idea what
             | an adjacency list is?
        
               | whimsicalism wrote:
               | Yeah I was only commenting on what was being discussed in
               | this particular subthread about whether this was good
               | code/design.
        
         | codeflo wrote:
         | What the actual fuck... I only got two pages down and already
         | found several red flags that I would _never_ accept in any code
         | review. Not the least of which is that when querying an
         | edgeless graph for the shortest path from node A to node Z,
         | "the empty path of length 0" is the exact opposite of a correct
         | answer.
         | 
         | So thanks for the link, I guess. I'll keep this as ammunition
         | for the next time someone quotes Uncle Bob.
        
           | sushisource wrote:
           | Damn, indeed. The Uncle Bob people (or, really, any "this
           | book/blog post/whatever says to do technique x" people) are
           | my absolute least favorite. This is a good riposte. Or,
           | alternatively, if they don't understand why it's bad then you
           | know they're a shit coder.
        
       | joshstrange wrote:
        | I'm not anti-TDD necessarily, but I've yet to see tests yield
        | useful results at almost every company I've worked at. It could
        | be that I've just never worked with someone who was actually
        | good at tests.
       | 
        | Tests in general aren't something I regularly use, and a lot of
        | TDD feels somewhat insane to me. You can write all the tests you
        | want ahead of time, but until the rubber meets the road it's a
        | lot of wishful thinking in my experience. It also makes
        | refactoring hell, since you often have to rewrite all the tests
        | except the ones at the top level, and sometimes even those if
        | you change enough.
       | 
        | I believe tests can work, I've just never really seen them work
        | well except for very well defined sets of functionality that are
       | core to a product. For example I worked at a company that had
       | tests around their geofencing code. Due to backfilling data,
       | zones being turned on/off by time, exception zones within zones,
       | and locations not always being super accurate, the test suite was
       | impressive. Something like 16 different use cases it tested for
       | (to determine if a person was in violation for a given set of
       | locations, for a given time). However, at the same company, there
       | was a huge push to get 80%+ code coverage. So many of our tests
       | were brittle that we ended up shipping code regularly with broken
       | tests because we knew they couldn't be trusted. The tests that
       | were less brittle often had complicated code to generate the test
       | data and the test expectations (who tests the tests?). In my
       | entire time at that company we very rarely (I want to say "never"
       | but my memory could be wrong) had a test break that actually was
       | pointing at a real issue, instead the test was just brittle or
       | the function changed and someone forgot to update the test. If
       | you have to update the test every time you touch the code it's
       | testing... well I don't find that super useful, especially
       | coupled with it never catching real bugs.
       | 
        | A lot of TDD (and testing in general) tutorials I've seen make
        | it seem all roses and sunshine, but their examples are simple
        | and look nothing like code I've seen in the wild. I'd be
        | interested in some real-world code and its tests as they evolved
        | over time.
       | 
        | All that said, I continue to be at least interested in tests/TDD
        | in the hopes that one day it will "click" for me and not seem
        | like just a huge waste of time.
        
       | lynndotpy wrote:
       | IMO TDD should be by opportunity and not policy. That solves
       | basically all the problems I have with it.
       | 
       | TDD is great because it forces you to concretize and challenge
       | assumptions, and provides a library of examples for new devs to a
       | codebase.
        
         | righttoolforjob wrote:
         | You are arguing for having tests and good coverage, not for
         | doing TDD.
        
           | lynndotpy wrote:
           | No, I am arguing for TDD, specifically, writing tests before
           | code. It feels like a superpower when it works. Maybe that's
           | for a whole program or only small parts of it.
        
       | sedachv wrote:
       | TDD use would be a lot different if people actually bothered to
       | read the entirety of Kent Beck's _Test Driven Development: By
       | Example_. It's a lot to ask, because it is such a terribly
       | written book, but there is one particular sentence where Beck
       | gives it away:
       | 
       | > This has happened to me several times while writing this book.
       | I would get the code a bit twisted. "But I have to finish the
       | book. The children are starving, and the bill collectors are
       | pounding on the door."
       | 
       | Instead of realizing that Kent Beck stretched out an article-
       | sized idea into an entire book, because he makes his money
       | writing vague books on vague "methodology" that are really
       | advertising brochures for his corporate training seminars, people
       | actually took the thing seriously and legitimately believed that
       | you (yes, you) should write all code that way.
       | 
       | So a technique that is sometimes useful for refactoring and
       | sometimes useful for writing new code got cargo-culted into a no-
       | exceptions-this-is-how-you-must-do-all-your-work Law by people
       | that don't really understand what they are doing anymore or why.
       | Don't let the TDD zealots ruin TDD.
        
         | loevborg wrote:
          | You have got to be kidding. Beck's books - both TDD: By Example
          | and Extreme Programming - are very well written and have about
          | the highest signal-to-noise ratio of any programming books.
        
         | evouga wrote:
         | This seems to be the case with a lot of "methodologies" like
         | TDD, Agile, XP, etc. as well as "XXX considered harmful"-style
         | proscriptions.
         | 
         | A simple idea ("hey, I was facing a tricky problem and this new
         | way of approaching it worked for me. Maybe it will help you
         | too?") mutates into a blanket law ("this is the _only_ way to
         | solve _all_ the problems ") and then pointy-haired folks notice
         | the trend and enshrine it into corporate policy.
         | 
         | But Fred Brooks was right: there are no silver bullets. Do what
         | works best for you/your team.
        
       | AtNightWeCode wrote:
        | The productivity rate went through the roof when we ditched TDD.
        | TDD has a bit of the same problem as strict DDD: you spend a lot
        | of time making upfront decisions about things that don't really
        | matter or that you don't know about yet.
       | 
        | I see unit tests as a tool to be used where it makes sense, and
        | I use them a lot. It is true that testable code is better.
        | Testability should be a factor when selecting tech.
        
         | righttoolforjob wrote:
         | I agree with your first sentence, but TDD and unit tests are
         | completely diametrical concerns.
         | 
          | Unit tests serve multiple purposes. Number 1 is to have a way
          | for you to play around with your design. They can also
          | document requirements. Lastly, they serve as a vehicle for you
          | to prove something about your design, typically the
          | fulfillment of a requirement, or the handling of some edge
          | case, etc. This last part is what people unfortunately mostly
          | refer to as a test.
         | 
         | TDD says that you should write your tests before you even have
         | a design, one-by-one typically, adding in more functionality
         | and design as you go. You will end up with crap code. If you do
         | not throw the first iteration away then you will commit crappy
         | code.
         | 
         | Most people naturally find that an iterative cycle of design
         | and test code works the best and trying to sell TDD to them is
         | a harmful activity, because it yields no benefits and might
         | actually be a big step backwards.
        
           | AtNightWeCode wrote:
            | "Unit test" has at least three different meanings, so I
            | think the term should be scrapped. Here I basically meant
            | automated tests.
           | 
            | I worked with TDD, and you basically write twice as much
            | code that is four times as complicated, and then you stick
            | with poor design choices because you have to update all the
            | tests as well.
        
       | cannam wrote:
       | This is a good article, with (for me anyway) quite a twist at the
       | end.
       | 
       | The author quotes a tweet expressing amazement that any company
       | might not use TDD, 20 years after it was first popularised - and
       | then writes
       | 
       | "I'd equate it to shell scripting. I spent a lot of time this
       | spring learning shell scripting"
       | 
        | Wow! I feel like the person in the tweet. It's amazing to me
        | that someone with such a solid development background could
        | write an article like this without having had shell scripting in
        | their everyday toolbox.
       | 
       | (I use TDD some of the time - I was slow to pick it up and a lot
       | of my older code would have been much better if I had appreciated
       | it back then. I like it very much when I don't really know how
       | the algorithm is going to work yet, or what a good API looks
       | like.)
        
         | NohatCoder wrote:
         | You can use a "real" programming language for anything more
         | complicated than running a program with some parameters.
          | Really, the only thing the various shell variants have going
          | for them is that you can type it directly into the console. For
          | any even lightly complicated programming task they are abysmal
          | languages.
        
           | cannam wrote:
           | Quite right! But approximate experiments and lightweight
           | automation are really useful in deciding where to go and then
           | making sure you stay there. I'm all for test-first, but I'd
           | find it very hard to argue that it's a more important tool
           | than, well, _scripting things_.
        
       | andersonvom wrote:
       | I think people sometimes forget that tests are made of code too.
       | If it's possible to write bad code, it's certainly possible to
       | write bad tests. And writing bad tests first (as in `test-
       | driven`) won't make them any better. At some point, people see
       | bad tests _and_ bad code together and instead of blaming it on
       | the "bad" part, they blame it either on the tests, or on the fact
       | that the tests were written first.
        
       | holoduke wrote:
        | I believe a good way of programming is to always program in
        | reverse: you start with the output and work back to where the
        | algorithm or program starts. That way you can easily extract a
        | unit test after you've finished your task.
        
       | [deleted]
        
       | fleddr wrote:
       | My feelings are far less complicated: TDD is a high-discipline
       | approach to software development, and that's why it doesn't work
       | or doesn't get done.
       | 
        | High-discipline meaning it entirely depends on highly competent
        | developers (able to produce clean code, with a deep
        | understanding of programming), rigorously disciplined out of
        | pure intrinsic motivation, and able to keep this up even under
        | peak pressure.
       | 
        | Which is not at all how most software is built today. Specs are
        | shit, so you gradually find out what it needs to do. Most coders
        | are bread programmers, and I don't mean that in any insulting
        | way. They barely get by getting anything to work. Most projects
        | are under very high time pressure; shit needs to get delivered
        | as fast as possible. Code gets written in such a way that it's
        | not really testable. We think in 2-week sprints, which means
        | anything long term is pretty much ignored.
       | 
       | In such an environment, the shortest path is taken. And since
       | updating your tests is also something you can skip, coverage will
       | sink. Bugs escape the test suite and the belief in the point of
       | TDD crumbles. Like a broken window effect.
       | 
       | My point is not against TDD. It's against ivory tower thinking
       | that does not take into account a typical messy real world
       | situation.
       | 
       | I've noticed a major shift in the last decade. We used to think
       | like this, in TDD, in documenting things with UML, in reasoning
       | about design patterns. It feels like we lost it all, as if it's
       | all totally irrelevant now. The paradigm is now hyper speed.
       | Deliver. Fast. In any way you can.
       | 
       | This short-sighted approach leading to long term catastrophe? Not
       | even that seems to matter anymore, as the thing you're working on
       | has the shelf life of fish. It seems to be business as usual to
       | replace everything in about 3-5 years.
       | 
       | The world is really, really fast now.
        
       | Lapsa wrote:
       | damn it. hoped it's about that train game
        
       | varispeed wrote:
        | What I do is not really pure TDD. I usually don't have a very
        | clear specification of what the system needs to do (as it is an
        | iterative process). So I write the code and then write tests to
        | see if it gives the required outputs for given inputs. Then I
        | write tests to see if it behaves correctly under edge cases. I
        | also pretty much stopped using debuggers because of that; there
        | is simply no need. I can reproduce an error using a test and
        | then fix the code until it passes.
        
       | marginalia_nu wrote:
        | I've always sort of thought of TDD as a bit of a software
        | development methodology cryptid. At best you get shaky camcorder
        | footage (although on closer investigation it sure looks like
        | Uncle Bob in a gorilla suit).
       | 
       | Lots of shops claim to do TDD, but in practice what they mean is
       | that they sometimes write unit tests. I've literally never
       | encountered it outside of toy examples and small academic
       | exercises.
       | 
       | Where is the software successfully developed according to TDD
       | principles? Surely a superior method of software development
       | should produce abundant examples of superior software? TDD has
       | been around for a pretty long time.
        
         | fsdghrth3 wrote:
          | I use TDD as a tool. I find it quite heavy-handed for
          | maintenance of legacy code, where I basically know the
          | solution to the task up front. I can either just rely on
          | having enough existing coverage or create one test for my
          | change and fix it all in one step.
         | 
          | The times I actually use TDD are basically limited to really
          | tricky problems I don't know how to solve or break down, or
          | when I have a problem with some rough ideas for domain
          | boundaries but I don't quite know where I should draw the
          | lines around things. TDD pulls these out of thin air like
          | magic, and consistently reaches them in less time than if I
          | just sat there for a week thinking about it and trying
          | different approaches out.
        
         | gnulinux wrote:
         | In my current company, I'm practicing TDD (not religiously, in
         | a reasonable way). What this means for us (for me, my coworkers
         | and my manager):
         | 
         | 1. No bug is ever fixed before we have at least one failing
         | test. Test needs to fail, and then turn green after bugfix. [1]
         | 
         | 2. No new code ever committed without a test specifically
         | testing the behavior expected from the new code. Test needs to
         | fail, and then turn green after the new code.
         | 
          | 3. If we're writing a brand new service/product/program etc.,
          | we first create a spec in human language, then turn the spec
          | into tests. This doesn't mean, formally speaking, "write tests
          | first, code later", because we write tests and code at the
          | same time. It's just that everything in the spec has to have
          | an accompanying test, and every behavior in the code needs to
          | have a test. This is checked informally.
         | 
          | As they say, unit tests are also code, and all code has bugs;
          | in particular, tests have bugs too. So this framework is not
          | bullet-proof either, but I've personally been enjoying working
          | in this flow.
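          | 
          | To make point 1 concrete, a minimal made-up example in pytest-
          | style Python - the test fails (red) against the buggy code and
          | turns green after the fix:
          | 
          |   # Buggy version returned unit_price, ignoring quantity.
          |   def total_price(unit_price, quantity):
          |       return unit_price * quantity  # the fix
          | 
          |   # Written first, so it fails against the buggy version.
          |   def test_total_price_multiplies_by_quantity():
          |       assert total_price(unit_price=5, quantity=3) == 15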
         | 
         | [1] The only exception is if there is a serious prod incident.
         | Then we fix the bug first. When this happens, I, personally,
         | remove the fix, make sure a test fails, then add the fix back.
        
         | fiddlerwoaroof wrote:
          | I've worked at a place where we did TDD quite a bit. What I
          | discovered was that the important part was knowing what makes
          | code easy to test, not the actual TDD methodology.
        
         | twic wrote:
         | I've worked at three companies that did TDD rigorously. It
         | absolutely does exist.
        
       | choeger wrote:
       | The observation about the focus on unit tests is well-made. I
       | think it's a crucial problem that stems from very good tools for
       | unit testing and developers that are very familiar with these
       | tools. It's then very simple to discard anything that isn't
       | covered by these great tools.
       | 
       | But here's an anecdote that explains why you'd always want
       | integration tests (other anecdotes for other test paradigms
       | probably also exist): imagine a modern subway train. That train
       | is highly automated but, for safety reasons, still requires a
       | driver. The train has two important safety features:
       | 
        | 1. The train won't leave a station unless the driver gives
        | their OK.
        | 
        | 2. The train won't leave the station unless all doors are
        | closed.
       | 
       | The following happened during testing: The driver gives the OK to
       | leave the station. The train doesn't start because a door is
       | still open. The driver leaves the train and finds one door
       | blocked. After the driver removes the blockage the door closes
       | and the train departs. Now driverless.
       | 
       | I think it's crucial to view integration tests as unit tests on a
       | different level: You need to test services, programs, and
       | subsystems as well as your classes, methods, or modules.
        
       | lifeisstillgood wrote:
       | "The code is the design" conflicts with "TDD".
       | 
        | Write code first. If that code is the v0.1 of the protocol
        | between two blog systems, great! You can do that on a whiteboard
        | and it looks like design, when actually it's writing code on a
        | whiteboard.
        | 
        | Now you know what to test, so write the test, after writing the
        | code.
       | 
       | Now write the next piece of code.
       | 
       | Do not at any time let a project manager in the room
        
       | sdevonoes wrote:
        | The first time I encountered TDD, I was a bit surprised because
        | it encourages you to write tests (client code) first. Well, I
        | started my professional career without knowing about TDD, but I
        | did know that it's usually best to start writing the client
        | code first. E.g., you write your main.c/Main.java/main.go
        | first, referencing classes/code that do not exist yet, and
        | wiring everything together. Then you move on to the next layer,
        | writing code that should exist but still relying on future code
        | that doesn't exist yet. Eventually you end up writing the whole
        | thing. Sometimes the other approach works equally well (e.g.,
        | starting from the small blocks and going up).
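        | 
        | A tiny sketch of that top-down flow in Python (the names are
        | invented):
        | 
        |   # Pass 1: write the entry point against code that doesn't
        |   # exist yet; it won't run, but it fixes the shape of the API.
        |   def main():
        |       records = load_records("input.csv")
        |       print(summarize(records))
        | 
        |   # Pass 2: the next layer, still stubbing what's below it.
        |   def load_records(path):
        |       raise NotImplementedError
        | 
        |   def summarize(records):
        |       raise NotImplementedError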
        
       | ahurmazda wrote:
        | My beef with TDD is that almost every resource merely parrots
        | the steps (red, green, ...). No one teaches it well, from what I
        | have found, nor am I convinced it's easy to teach. I have picked
        | up what I can by watching (what I believe are) good TDD
        | practitioners.
       | 
       | I have a feeling this is where TDD loses out the most
        
         | twic wrote:
         | Absolutely. Actually doing TDD is nontrivial, and it has to be
         | learned.
         | 
         | Most of the early learning was by pairing with people who
         | already knew how to do it, working on a codebase using it.
         | People learn it easily and fast that way.
         | 
         | But that doesn't scale, and at some point people started trying
         | to do it having only read about it. It doesn't surprise me at
         | all that that has often been unsuccessful.
        
         | Jtsummers wrote:
         | I mean, there's the actual TDD book by Kent Beck. It's pretty
         | good, and only 240 pages. It was an easy one-week read for me,
         | spread out in the evenings.
        
           | pramodbiligiri wrote:
           | There's TDD by Example, and there's also "Growing Object-
           | Oriented Software, Guided by Tests".
        
         | gnulinux wrote:
          | I practice TDD for the most part, and I agree that it's not
          | easy. There are a lot of unanswered questions. E.g., what if
          | you write the test first (red), then write the code, but it's
          | still red? Your code could be wrong, or your test could be.
          | If your test is wrong, do you go back and see the red? (I do.)
          | Do you test your tests? (I don't.) Treating TDD like a formal
          | system doesn't make any sense, since it's meant to be a tool
          | an engineer can use as a heuristic to make judgements about
          | the state of the development.
        
       | sandreas wrote:
        | In my opinion TDD is a good thing, but too demanding and too
        | strict. In real life, there are very different knowledge and
        | experience levels in a development team, and if TDD is not
        | applied professionally, it may not help. It just needs a lot of
        | practice and experience.
       | 
        | What it helps with a lot is improving your individual
        | programming skills. So I recommend TDD to everyone who has never
        | done it in practice (ideally on a legacy code base) - if not to
        | improve the code itself, then just to LEARN how it could improve
        | your code.
        | 
        | It helped me understand why IoC and Dependency Injection are a
        | thing and when to use them. Writing "testable" code is
        | important, while writing the actual tests may not be, as long as
        | you do not plan to have a long-running project or do a major
        | refactoring. If you ARE planning a major refactoring, you should
        | first write the tests to ensure you don't break anything, though
        | ;)
       | 
        | What I would also recommend is having a CI/build environment
        | supporting TDD, SonarQube, and code coverage - rather than
        | trying to establish that afterwards. Being able to switch to TDD
        | is also a very neat way to get a nice CI setup.
        | 
        | My feeling is that my programming and deployment skills improved
        | most when I did one of my personal pet projects strictly test-
        | driven with automated CI, and found out which things in TDD and
        | CI I really need to care about.
        
       | pjmlp wrote:
        | My feelings are quite clear: it just doesn't work beyond some
        | simple cases without any kind of GUI (including native ones), or
        | distributed computing algorithms.
        | 
        | It does make for nice conference talks, though.
        
         | gnulinux wrote:
          | When you say TDD doesn't work, do you mean it doesn't work if
          | it's religiously practiced? I've worked for many companies
          | that do TDD, and I personally enjoy it very much; we ship
          | code, we make money. So clearly _something_ is not not
          | working. I think the trick with TDD is making sure you don't
          | use it religiously and understand in which cases it'll help
          | you.
        
           | pjmlp wrote:
            | Prove me wrong by designing a good native Windows
            | application according to a customer's UI/UX guidelines while
            | religiously following TDD.
            | 
            | Replace Windows with your favourite desktop, mobile or
            | console OS.
        
             | gnulinux wrote:
              | I don't do Windows work, I don't do UI/UX, and I do not
              | _religiously_ follow TDD. TDD is a tool; just like with
              | other tools, I know when it applies to a scenario, and
              | when it applies I know what it's helping me with.
        
               | pjmlp wrote:
               | That isn't how it is sold.
               | 
               | It only works for basic use cases and conference talks.
        
               | gnulinux wrote:
                | I'm not trying to sell anything. I'm reporting an
                | anecdote: I'm a practicing software engineer, I've been
                | professionally writing code for almost a decade, and I
                | do use TDD when I write code. I don't care if you do or
                | do not.
        
               | pjmlp wrote:
                | Great for you; for me it is snake oil that quickly shows
                | its weakness when I ask someone to write a full-blown
                | application end to end with TDD, every single aspect of
                | it.
                | 
                | Designing a GUI test-first, doing a game engine test-
                | first, handling distributed computing algorithms test-
                | first, ...
                | 
                | Including the best data structures for handling the set
                | of application requirements.
                | 
                | Yeah, not really.
        
       | gregors wrote:
       | Write the code you wish you had. Define your expectation. Does
       | your coding ability keep up with your expectations? Does that
       | continue to hold for you or anyone else on your team on your
       | worst day?
       | 
       | Don't have any expectations and are exploring? Don't do any of
       | this.
        
       | reggieband wrote:
       | I could write an entire blog post on my opinions on this topic. I
        | continue to be extremely skeptical of TDD. There is the sort-of-
        | infamous incident where a TDD proponent tried to develop a
        | sudoku solver and kept failing at it [1].
       | 
       | This kind of situation matches my experience. It was cemented
       | when I worked with a guy who was a zealot about TDD and the whole
       | Clean Code cabal around Uncle Bob. He was also one of the worst
       | programmers I have worked with.
       | 
       | I don't mean to say that whole mindset is necessarily bad. I just
       | found that becoming obsessed with it isn't _sufficient_. I 've
       | worked with guys who have never written a single test yet ship
       | code that does the job, meets performance specs, and runs in
       | production environments with no issues. And I've worked with guys
       | who get on their high horse about TDD but can't ship code on
       | time, or it is too slow, and it has constant issues in
       | production.
       | 
       | No amount of rationalizing about the theoretical benefits can
       | match my experience. I do not believe you can take a bad
       | programmer and make them good by forcing them to adhere to TDD.
       | 
       | 1. https://news.ycombinator.com/item?id=3033446
        
         | commandlinefan wrote:
         | > tries and fails to develop a sudoku solver and keeps failing
         | at it
         | 
         | But that's because he deliberately does it in a stupid way to
         | make TDD look bad, just like the linked article does with his
         | "quicksort test". But that's beside the point - of course a
         | stupid person would write a stupid test, but that same stupid
         | person would write a stupid implementation, too... but at least
         | there would be a test for it.
        
           | evouga wrote:
           | Huh? Ron Jeffries is a champion of TDD (see for instance
           | https://ronjeffries.com/articles/019-01ff/tdd-one-word/). He
            | most certainly _wasn't_ deliberately implementing Sudoku in
            | a stupid way to make TDD look bad!
        
         | laserlight wrote:
          | The top comment on the link you provided pretty much explains
          | the situation. TDD is a software development method, not a
          | generic problem-solving method. If one doesn't know how a
          | Sudoku solver works, applying TDD or any other software
          | development method won't help.
        
         | mikkergp wrote:
         | >I've worked with guys who have never written a single test yet
         | ship code that does the job, meets performance specs, and runs
         | in production environments with no issues.
         | 
          | I'm curious to unpack this a bit. I'm curious what tools
          | people use other than programmatic testing; programmatic
          | testing seems to be the most efficient, especially for a
          | programmer. I'm also maybe a bit stuck on the binary nature of
          | your statement. You know developers who've never let a bug or
          | performance issue enter production (with or without testing)?
        
           | reggieband wrote:
            | When I started out in the gaming industry in the early
            | 2000s, there were close to zero code tests written by
            | developers at the studios I worked for. However,
           | there were large departments of QA, probably in the ratio of
           | 3 testers per developer. There was also an experimental Test
           | Engineer group at one of the companies that did automated
           | testing, but it was closer to automating QA (e.g. test rigs
           | to simulate user input for fuzzing).
           | 
           | The most careful programmers I worked with were obsessive
           | about running their code step by step. One guy I recall put a
           | breakpoint after every single curly brace (C++ code) and
           | ensured he tested every single path in his debugger line by
           | line for a range of expected inputs. At each step he examined
           | the relevant contents of memory and often the generated
           | assembly. It is a slow and methodical approach that I could
           | never keep the patience for. When I asked him about
           | automating this (unit testing I suppose) he told me that
           | understanding the code by manually inspecting it was the
           | benefit to him. Rather than assuming what the code would (or
           | should) do, he manually verified all of his assumptions.
           | 
            | One apocryphal story was from the PS1 days, before technical
            | documentation for the device was available. Legend had it
            | that an intrepid young man brought in an oscilloscope to
            | debug and fix an issue.
           | 
           | I did not say that I know any developers who've never let a
           | bug or performance issue enter production. I'm contrasting
           | two extremes among the developers I have worked with for
           | effect. Well written programs and well unit tested programs
           | are orthogonal concepts. You can have one, the other, both or
           | neither. Some people, often in my experience TDD zealots,
           | confuse well unit tested programs with well written programs.
           | If I could have both, I would, but if I could only have one
           | then I'll take the well-written one.
           | 
           | Also, since it probably isn't clear, I am not against unit
           | testing. I am a huge proponent for them, advocating for their
           | introduction alongside code coverage metrics and appropriate
           | PR checks to ensure compliance. I also strongly push for
           | integration testing and load testing when appropriate. But I
           | do not recommend strict TDD, the kind where you do not write
           | a line of code until you first write a failing test. I do not
           | recommend use of this process to drive technical design
           | decisions.
        
           | Chris_Newton wrote:
            | _You know developers who've never let a bug or performance
            | issue enter production (with or without testing)?_
           | 
           | One of the first jobs I ever had was working in the
           | engineering department of a mobile radio company. They made
           | the kind of equipment you'd install in delivery trucks and
           | taxis, so fleet drivers could stay in touch with their base
           | in the days before modern mobile phone technology existed.
           | 
           | Before being deployed on the production network, every new
           | software release for each level in the hierarchy of Big
           | Equipment was tested in a lab environment with its own very
           | expensive installation of Big Equipment exactly like the
           | stations deployed across the country. Members of the
           | engineering team would make literally every type of call
           | possible using literally every combination of sending and
           | receiving radio authorised for use on the network and if
           | necessary manually examine all kinds of diagnostics and logs
           | at each stage in the hardware chain to verify that the call
           | was proceeding as expected.
           | 
           | It took _months_ to approve a single software release. If any
           | critical faults were found during testing, game over, and
           | round we go again after those faults were fixed.
           | 
           | Failures in that software were, as you can imagine, rather
           | rare. Nothing endears you to a whole engineering team like
           | telling them they need to repeat the last three weeks of
           | tedious manual testing because you screwed up and let a bug
           | through. Nothing endears you to customers like deploying a
           | software update to their local base station that renders
           | every radio within an N mile radius useless. And nothing
           | endears you to an operations team like paging many of them at
           | 2am to come into the office, collect the new software, and go
           | drive halfway across the country in a 1990s era 4x4 in the
           | middle of the night to install that software by hand on every
           | base station in a county.
           | 
           | Automated software testing of the kind we often use today was
           | unheard of in those days, but even if it had been widely
           | used, it still wouldn't have been an acceptable substitute
           | for the comprehensive manual testing prior to going into
           | production. As for how the developers managed to have so few
           | bugs that even reached the comprehensive testing phase, the
           | answer I was given at the time was very simple: the code was
           | extremely systematic in design, extremely heavily
           | instrumented, and subject to frequent peer reviews and
           | walkthroughs/simulations throughout development so that any
           | deviations were caught quickly. Development was of course
           | much slower than it would be with today's methods, but it was
           | so much more reliable in my experience that the two
           | alternatives are barely on the same scale.
        
       | m463 wrote:
       | TDD - test driven development
        
       | GnarfGnarf wrote:
       | I keep wanting to be converted to TDD, but I can't shake the
       | feeling that I'd be writing half the code in twice the time.
        
         | twic wrote:
         | Writing half the code sounds pretty good.
        
         | dbrueck wrote:
         | That's accurate. Worse, as the complexity of the scenario
         | you're trying to test goes up, not only does the cost of
         | creating the test go up, the cost of _maintaining_ it almost
         | always goes up too.
        
         | joshstrange wrote:
          | Yep, I'm pretty anti-test because I've yet to see it pay off
          | at any company I've worked at. That said, I keep hoping I'll
          | catch the bug, be converted, have it click in my head. On the
          | surface it seems quite nice, but it's always with trivial
          | examples. When dealing with real code, I've had testing fall
          | apart very quickly and/or make refactors extremely painful.
          | And on top of it all, you are writing twice as much code in a
          | world that doesn't care that you wrote tests, meaning it takes
          | you longer to do the same work.
        
       | worik wrote:
       | Testing is very important. Ok.
       | 
        | The problem I have with TDD is the concept of writing tests
        | first. Tests are not specifications (in the TDD world the line
        | is blurred); tests are confirmation.
       | 
        | I develop my code (I currently write back-end plumbing code for
        | iOS) from a test framework.
       | 
       | My flow:
       | 
        | * Specify. A weak and short specification. Putting too much work
        | into the specification is a waste. "The Gizmo record must be
        | imported and decoded from the WHIZZBAZ encoding into a Gizmo
        | object" is plenty of specification.
        | 
        | * Write code for the basic function.
        | 
        | * Write a test for the validity of the code (the validity of the
        | record once loaded, in the Gizmo/WHIZZBAZ case)
       | 
       | But the most important tests are small micro tests (usually
       | asserts) before and after every major section (a tight loop, a
       | network operation, system calls etcetera). More than half my code
       | is that sort of test.
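        | 
        | A rough Python sketch of those micro tests (the record format
        | here is invented):
        | 
        |   def import_gizmo(raw):
        |       # Micro test before the main section: the record must be
        |       # non-empty.
        |       assert raw, "empty WHIZZBAZ record"
        | 
        |       fields = raw.decode("utf-8").split("|")
        | 
        |       # Micro test after: a decoded Gizmo has exactly 3 fields.
        |       assert len(fields) == 3, f"malformed record: {fields!r}"
        |       return {"id": fields[0], "name": fields[1],
        |               "size": fields[2]}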
        
       | peteradio wrote:
       | I write tests in order to have something to run and hit
       | breakpoints on while I develop code. Is that TDD? The tests don't
       | even necessarily check anything at the earliest stages, obviously
       | they are red if the code barfs but that's about it. Once the code
       | solidifies I may take some output and persist it to make sure it
       | doesn't change, but "does not crash" is technically a testable
       | endpoint!
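        | 
        | For the "persist some output" stage, something like this golden-
        | file check in Python (pytest assumed, names invented):
        | 
        |   import json, pathlib
        | 
        |   def run_pipeline(data):  # stand-in for the real code
        |       return {"total": sum(data)}
        | 
        |   def test_output_unchanged():
        |       result = run_pipeline([1, 2, 3])
        |       golden = pathlib.Path("golden.json")
        |       if not golden.exists():
        |           # First run persists the output; later runs must match.
        |           golden.write_text(json.dumps(result))
        |       assert json.loads(golden.read_text()) == result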
        
       | benreesman wrote:
       | I had no idea that people were quite so religious about this sort
       | of thing.
       | 
       | It's pretty clear at this point that testing is one of the most
       | valuable tools in the box for getting sufficiently "correct"
       | software in most domains.
       | 
       | But it's only one tool. Some people would call property checkers
       | like Hypothesis or QuickCheck "testing", some people wouldn't.
       | Either way they are awesome.
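        | 
        | For instance, a property-based check with Hypothesis might look
        | like this (a sketch; my_sort stands in for the code under test):
        | 
        |   from collections import Counter
        |   from hypothesis import given, strategies as st
        | 
        |   def my_sort(xs):
        |       return sorted(xs)
        | 
        |   @given(st.lists(st.integers()))
        |   def test_sort_properties(xs):
        |       result = my_sort(xs)
        |       # Assert properties, not hand-picked examples: output is
        |       # ordered and is a permutation of the input.
        |       assert all(a <= b for a, b in zip(result, result[1:]))
        |       assert Counter(result) == Counter(xs)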
       | 
       | Formal methods are also known to be critical in extreme low-
        | defect settings, and seem to be gaining ground more generally,
        | which is a good thing. Richer and richer type systems are going
        | mainstream with Rust and other languages heavily influenced by
        | Haskell and Idris et al.
       | 
        | And then there's good old "shipping is a feature, and sometimes
        | a more important feature than a low defect count". This is also
        | true in some settings. jwz talks very compellingly about this.
       | 
       | I think it's fine to be religious about certain kinds of
       | correctness-preserving, defect-preventing processes in _domains
       | that call for an extreme posture on defects_. Maybe you work on
       | avionics software or something.
       | 
       | But in general? This "Minimal test case! Red light! Green light!
       | Cast out the unbelievers!" is woo-woo stuff. I had no idea people
       | took this shit seriously.
        
       | rybosworld wrote:
       | Seems like the TLDR is: Well-intentioned patterns break down when
       | taken maximally.
        
         | twic wrote:
         | That's definitely part of it.
         | 
         | The article is really quite good. Much, much better than the
         | discussion here prepared me for!
        
       | JonChesterfield wrote:
        | There's some absolute nonsense in the TDD style. Exposing
        | internal details for testing is recommended, yet bad for non-
        | test users of the interface. Only testing through the interface
        | (kind of the same issue as above) means tests contort to hit the
        | edge cases or miss them entirely.
       | 
       | The whole interface hazard evaporates if you write the tests in
       | the same scope as the implementation, so the tests can access
       | internals directly without changing the interface. E.g. put them
       | in the same translation unit for C++. Have separate source files
       | only containing API tests as well if you like. Weird that's so
       | unpopular.
       | 
        | There's also a strong synergy with design by contract,
        | especially for data structures. Put (expensive) pre/post
        | conditions and invariants on the methods, then hit the edge
        | cases from unit tests, and fuzz the thing for good measure. You
        | get exactly the public API you want, plus great assurance that
        | the structure works, provided you don't change semantics when
        | disabling the contract checks.
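        | 
        | The contract half carries over to other languages too; a rough
        | Python sketch (the toggle and names are mine), where tests can
        | also poke at _items directly, much like the same-translation-
        | unit trick:
        | 
        |   CHECK_CONTRACTS = True  # disable in production; semantics
        |                           # must not change when off
        | 
        |   class SortedList:
        |       def __init__(self):
        |           self._items = []
        | 
        |       def _invariant(self):
        |           it = self._items
        |           assert all(a <= b for a, b in zip(it, it[1:]))
        | 
        |       def insert(self, x):
        |           if CHECK_CONTRACTS:
        |               self._invariant()      # precondition
        |           i = 0
        |           while i < len(self._items) and self._items[i] < x:
        |               i += 1
        |           self._items.insert(i, x)
        |           if CHECK_CONTRACTS:
        |               self._invariant()      # postcondition
        |               assert x in self._items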
        
         | rmetzler wrote:
          | It's similar in Java, where people often only know about
          | public and private and forget about package-scoped functions.
          | You can use these to test utility functions, etc.
         | 
          | The post is weird: I agree with almost everything in the first
          | half and disagree with most of the second half.
         | 
          | What makes TDD hard for integration testing is that there are
          | no simple ready-made tools similar to xUnit frameworks, so
          | people need to build their own tools and make them fast.
        
       | stuckinhell wrote:
        | TDD is a great example of where major differences between
        | businesses and departments have a direct impact on your software
        | engineering.
       | 
       | When business people don't know what they want, do not try TDD.
       | It will be a waste of time. When people do KNOW, or you have a
       | RELIABLE subject matter expert (at a big company you might have
       | one of these), TDD is a lot safer and easier to do.
        
       | mehagar wrote:
        | I think TDD is great in the ideal, but in reality I have only
        | worked on legacy systems where TDD was not practiced from the
        | start. Such systems are hard to fit TDD-style tests into,
        | because modifying existing code often requires large
        | refactorings to properly inject dependencies and create seams
        | for testing. The catch-22 is that refactoring itself is prone to
        | breaking things without sufficient testing.
       | 
        | As a result, I often try to fit my tests into these existing
        | systems rather than starting with the test and refactoring the
        | code under test to fit that shape. The only resource I've seen for
       | dealing with this issue is the advice in the book "Working
       | Effectively with Legacy Code", to write larger system tests first
       | so you can safely refactor the code at a lower level. Still,
       | that's a daunting amount of work when it's ultimately much easier
       | for me to just make the change and move on.
        
       ___________________________________________________________________
       (page generated 2022-08-18 23:00 UTC)