If you are a believer in Agile methods, but don't like Test-Driven Development (TDD), this site is for you!

Many people in the Agile community feel that TDD is a niche technique that works really well for some people but is awful for others.

Sometimes TDD fans go too far by insisting that everyone should use TDD - even saying that if you don't use TDD, you are not fully Agile, or have not given it a chance. We disagree. We explain why here.

Be Agile without TDD!

The Definition of Done


Most Agile teams have a "Definition of Done", aka "DOD".

But is that sufficient? A team's DOD applies to the team's stories. What about the end-to-end system?

Also, a typical DOD states that all of a story's tests must pass; but what is the criterion for how thorough the tests must be? For unit tests, perhaps a code coverage target is specified; but does that target apply to all kinds of unit tests - e.g., user interface level unit tests and microservice unit tests?
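
To make the coverage idea concrete, here is a minimal sketch of an automated coverage gate, written in Python against the coverage.py API; the 80% target is an assumption standing in for whatever your testing strategy actually specifies:

    # Hypothetical coverage gate: fail the build when unit test line coverage,
    # as recorded by coverage.py in a .coverage data file, is below the target
    # that the team's testing strategy specifies (80% is assumed here).
    import sys

    import coverage

    COVERAGE_TARGET = 80.0  # assumed target from the testing strategy

    cov = coverage.Coverage()
    cov.load()                           # read the .coverage data file
    total = cov.report(file=sys.stdout)  # print the report; returns total %

    if total < COVERAGE_TARGET:
        print(f"Coverage {total:.1f}% is below the {COVERAGE_TARGET}% target")
        sys.exit(1)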

And what about behavioral tests? How do you know you are doing enough? The same goes for integration tests, including end-to-end integration tests; for failure mode tests - aka "resiliency" tests; for performance tests; and for every other category of test: what is your criterion for "enough", or "done"?

The DOD needs to be concise, but it should reference a testing strategy. See this article series on how to define a comprehensive Agile testing strategy. Further, there needs to be a program-level testing strategy encompassing all products that interact; a product-level testing strategy that defines how integration is ensured and how each epic-level story or feature is tested in an integrated manner, across teams if necessary; and a testing strategy for each independently deployable component, whether the teams are feature teams or component teams.

Thus, the testing strategy is organized around the things being built - not around the teams. When working on parts of a system, the teams must adhere to and maintain the applicable testing strategies - those strategies are part of the delivery system.

The overall, comprehensive testing strategy for an end-to-end system is a collection of separate strategies, in a hierarchy. And each category of tests in a strategy should have a sufficiency criterion: How do you know you have enough tests? For unit tests, that is usually a code coverage target; but for behavioral tests, you will usually need other criteria, such as stating who must review the tests for completeness.

If you don't have that, then you don't have control of your testing, and you cannot rapidly deliver to production with confidence.

Each testing strategy should be lightweight - a simple table. Otherwise, it is not maintainable, not understandable, and not usable in an Agile context.
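
For example, a strategy table for a single deployable component might look something like this - the categories, reviewers, and targets shown are hypothetical placeholders for whatever your own strategy requires:

    Test category | Who writes / reviews       | Sufficiency criterion
    Unit          | Developers                 | 80% line coverage on changed code
    Behavioral    | Developers + product owner | Product owner reviews scenarios for completeness
    Integration   | Developers                 | Every external interface exercised by at least one test
    Resiliency    | Developers + operations    | Each known failure mode has a test
    Performance   | Developers                 | Response time targets verified at expected peak load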

Your DOD should state which categories of tests, from your testing strategy, must pass for a story to be considered "done". This includes any existing tests that break as a result of your code changes. That last part is really important, because it ensures that if your code changes break tests, they must be fixed for the story to be "done", which in turn ensures that the test suite is maintained and that all Jenkins jobs - including integration test jobs - stay "green".
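
One way (certainly not the only way) to tie the strategy's categories to the DOD is to tag each test with its category; here is a minimal sketch using pytest markers, where the marker names and test names are purely illustrative:

    # Hypothetical mapping of testing-strategy categories onto pytest markers,
    # so the DOD can list which categories must pass and CI can select each
    # category explicitly. Register the markers in pytest.ini to avoid warnings.
    import pytest

    @pytest.mark.unit
    def test_discount_is_applied_to_order_total():
        ...

    @pytest.mark.behavioral
    def test_checkout_story_acceptance_criteria():
        ...

    @pytest.mark.integration
    def test_order_service_calls_payment_service():
        ...

A Jenkins job for a given category can then select it with, for example, "pytest -m integration", and the DOD simply lists which categories must be green.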

If your DOD says that integration tests must pass (which it should), then you should integration test before you submit a pull request - not after. Otherwise, integration will be happening for the first time in the Jenkins integration test job, and so that job will be chronically "red" instead of "green". To enable the job to stay green, you must run the integration tests locally before your changes reach the shared integration test job. When they pass locally, then it is time to submit a pull request.
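
A lightweight way to make that routine is a small pre-pull-request check script. The sketch below assumes the pytest marker scheme illustrated above and is only an example:

    # Hypothetical pre-pull-request check: run the integration-marked tests
    # locally so the shared Jenkins integration test job stays green.
    import subprocess
    import sys

    result = subprocess.run(["pytest", "-m", "integration", "--maxfail=1"])
    if result.returncode != 0:
        print("Integration tests failed - fix them before submitting the pull request.")
    sys.exit(result.returncode)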
