If you are a believer in Agile methods, but don't like Test-Driven Development (TDD), this site is for you!

Many people in the Agile community feel that TDD is a niche technique that works really well for some people but is awful for others.

Sometimes TDD fans go too far by insisting that everyone should use TDD - even claiming that if you don't use TDD, you are not fully Agile, or that you simply have not given TDD a fair chance. We disagree, and we explain why here.

Be Agile without TDD!

Product Level PDD

It's About the Product - Not the Team

There is a cancer spreading today among organizations trying to "adopt DevOps": the wholesale embrace of "team autonomy".

Full team autonomy does not work, however. Spotify tried it, and that is what led to their novel structure of "guilds", "tribes", and "chapters". They discovered that if there is not a systematic way of ensuring that teams collaborate, then collaboration will not happen.

One of the ways that too much team autonomy manifests itself is inattention to component integration. If each team maintains one or a few components, how do those components integrate? Each component has a "pipeline"; but what about integration?

It is common for a pipeline to include "integration" tests, but all too often those are not actually integration tests, because either (1) they use mocks for the other components, or (2) the tests are run in a static test environment that is shared across all the teams.

Approach #1 does not tell you anything you don't already know from your component-level testing. It does not tell you whether the components actually integrate.

Approach #2 forces teams to delay integration until they have merged their changes into master for each component. That means that the first time they test for integration, they are doing it in a shared test environment, with the tests run by a server (e.g., Jenkins). Have you tried to diagnose problems in that setting? It is a setting in which things are always changing. You can't shell into it. It is awful. It is a return to batch programming: submit your test job and wait for the printout - that is, for the Jenkins job to finish. It means that your integration test job will be chronically red instead of green.

What you should do is enable teams to run functional integration tests locally, before they do their pull request and before they merge into master. Then, when integration has been tested, do the PR and the merge, and then Jenkins will kick off and the integration test job will come back green. That way, programmers can run and fix and re-run locally, using a red-green cycle and resolving issues ten times as fast.
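As a concrete illustration, here is a minimal sketch of such a locally runnable functional integration test, written in Python with the requests library. The two components, their ports, and their endpoints are hypothetical, and the sketch assumes the developer has already started the product's components locally (for example, with a compose file):

# Minimal sketch: a functional integration test a developer runs locally,
# before the pull request and the merge. It assumes two hypothetical
# components ("orders" and "inventory") are already running on localhost;
# the ports and endpoints are illustrative, not from the article.
import requests  # third-party HTTP client: pip install requests

ORDERS_URL = "http://localhost:8081"     # hypothetical "orders" component
INVENTORY_URL = "http://localhost:8082"  # hypothetical "inventory" component

def test_order_reserves_inventory():
    # Create an order through the orders component...
    order = requests.post(f"{ORDERS_URL}/orders",
                          json={"sku": "ABC-123", "quantity": 1},
                          timeout=5).json()

    # ...and verify that the inventory component actually saw the
    # reservation - something a mocked "integration" test can never prove.
    reservation = requests.get(
        f"{INVENTORY_URL}/reservations/{order['id']}", timeout=5).json()
    assert reservation["sku"] == "ABC-123"

A developer can run and re-run a test like this in seconds against locally running components, which is the red-green cycle described above.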

To set that up, you need to be thinking about the product - not the team.

Epic or Feature Level Stories

It is very important to define stories at a product level. If a story requires changes to many components, and no one team is able to work across all of those components, then you will inevitably have to decompose the product level stories into team level stories that are component-oriented. But at least those tie back to a product level story - a product feature or epic. It is important to define "done" for that level, as well as for the team level component stories. It is the product level stories that matter.

Product level stories should often be "experiments" - that is, trials of new features, with the end user's usage and response measured and reported on a product level dashboard. The Product Owner should define those experiments, and the metrics that will reveal whether the feature is well received and is performing well in a business sense. That gives you a customer feedback loop.
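As one hedged illustration of what such instrumentation might look like, the sketch below emits a single experiment observation that a product level dashboard could aggregate. The event fields, the feature name, and the delivery mechanism (printing instead of publishing to a real metrics pipeline) are all assumptions, not something prescribed here:

# Minimal sketch: record one observation for a feature "experiment" so that
# usage and outcome can be reported on a product level dashboard. The feature
# name, event fields, and transport (print) are hypothetical.
import json
import time

def record_experiment_event(feature: str, user_id: str, outcome: str) -> None:
    # A real system would publish this to a metrics pipeline or events topic.
    event = {
        "type": "experiment",
        "feature": feature,   # which product level experiment this belongs to
        "user": user_id,
        "outcome": outcome,   # e.g., "completed_checkout", "abandoned"
        "timestamp": time.time(),
    }
    print(json.dumps(event))

# Example: a user exposed to the hypothetical "one-click-reorder" trial
# completed the goal the Product Owner defined for it.
record_experiment_event("one-click-reorder", "user-42", "completed_checkout")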

The product level stories should have their own test cases. These are essentially use case level test cases, to use an old term. They test whether a feature works in actual usage situations, from user logon to completion of the user's goal. These tests are separate from the ATDD/BDD tests that are written for team level stories. They serve as an extra check on functionality. They can be written by a separate product level team, and run outside of the sprint cycle. Problems found can be added to the product backlog.
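To make the idea concrete, here is a minimal sketch of one such use case level test, written in Python against a hypothetical public entry point for the product. The URL, endpoints, and credentials are illustrative assumptions:

# Minimal sketch: a product level, use-case style test that walks one user
# journey from logon to completion of the user's goal, through the product's
# public entry point rather than any single component. The base URL,
# endpoints, and credentials are hypothetical.
import requests

PRODUCT_URL = "http://localhost:8080"  # assumed public entry point of the product

def test_user_can_log_on_and_place_an_order():
    session = requests.Session()

    # Step 1: the user logs on.
    resp = session.post(f"{PRODUCT_URL}/login",
                        json={"user": "demo", "password": "demo"}, timeout=5)
    assert resp.status_code == 200

    # Step 2: the user works toward their goal - here, placing an order.
    resp = session.post(f"{PRODUCT_URL}/orders",
                        json={"sku": "ABC-123", "quantity": 1}, timeout=5)
    assert resp.status_code == 201

    # Step 3: the goal is complete - the order is visible to the user.
    orders = session.get(f"{PRODUCT_URL}/orders", timeout=5).json()
    assert any(o["sku"] == "ABC-123" for o in orders)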

Go End-to-End From the Beginning

When using an ATDD process such as PDD, it is important to set up the end-to-end development/deploy-for-test/test process from the beginning. This is the set of "pipelines" for building and testing components, integration testing them, and checking the product in every other way, such as for security, component failure response (resiliency), and performance.

I like to call this "go end-to-end from the beginning". The basic idea is to set up the simplest case of each step, as an orchestrated set of processes, and run a very simple code change through it, with a trivial test case for each type of test. Once you have that, you are ready to start adding code, and you will be able to claim that you are producing "potentially deployable" builds.
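The sketch below illustrates the idea in Python: a trivial orchestration script that runs the simplest version of each step in order and stops at the first failure. The make targets are placeholders for whatever build, deploy, and test commands your pipeline actually uses:

# Minimal sketch of "go end-to-end from the beginning": wire up the simplest
# possible version of every pipeline step and run one trivial change through
# it. The make targets below are placeholders - substitute your real build,
# deploy-for-test, and test tooling (Jenkins jobs, compose files, etc.).
import subprocess
import sys

STEPS = [
    ("build components",   ["make", "build"]),
    ("deploy for test",    ["make", "deploy-test-env"]),
    ("unit tests",         ["make", "test-unit"]),
    ("integration tests",  ["make", "test-integration"]),
    ("security checks",    ["make", "test-security"]),
    ("resiliency checks",  ["make", "test-resiliency"]),
    ("performance checks", ["make", "test-performance"]),
]

def run_pipeline() -> int:
    for name, cmd in STEPS:
        print(f"--- {name}: {' '.join(cmd)}")
        result = subprocess.run(cmd)
        if result.returncode != 0:
            print(f"step failed: {name}")
            return result.returncode
    print("end-to-end skeleton is green - ready to start adding real code")
    return 0

if __name__ == "__main__":
    sys.exit(run_pipeline())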

See this article for more on this concept.

Assessing Test Sufficiency

It is essential, when you define a testing strategy, to decide what "enough" is for each category of tests. For unit level tests, one usually specifies a code coverage target for each type of component. What about behavioral tests? It is hard to measure coverage for those, so an alternative is to have someone review the behavioral test spec: if the reviewer has the required expertise, that is an independent check on the completeness of the test spec.
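For the unit level part, one simple way to enforce an agreed coverage target is to gate the pipeline on the coverage report. The sketch below assumes a Cobertura-style coverage.xml (as produced by tools such as coverage.py) and an illustrative 80% threshold; both are assumptions, not prescriptions:

# Minimal sketch: gate a build on a per-component coverage target.
# Assumes a Cobertura-style coverage.xml; the 80% threshold and the file
# path are illustrative and would be set per component type.
import sys
import xml.etree.ElementTree as ET

THRESHOLD = 0.80  # assumed target for this type of component

def check_coverage(report_path: str) -> bool:
    root = ET.parse(report_path).getroot()
    line_rate = float(root.attrib["line-rate"])
    print(f"line coverage: {line_rate:.1%} (target {THRESHOLD:.0%})")
    return line_rate >= THRESHOLD

if __name__ == "__main__":
    sys.exit(0 if check_coverage("coverage.xml") else 1)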

See this article series for more on this approach.

Where to Put Product Level Tests

I have seen teams stuff their product level tests into the user interface component, assuming there is one. There is usually a client of some kind - it might be a Lambda service, for example. But what if a set of microservices is considered to be the product?

As a general pattern (not a rule), I advocate creating a separate repo for product-level tests. A product is a collection of components. One can define a different collection that overlaps some of the same components and call that another product. Each product needs its own set of tests. Thus, it makes sense to define a product level repo and put the product's tests there.
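As a small sketch of what such a repo might contain, the Python fragment below defines a hypothetical product "manifest" mapping each product to the components it is composed of; note that two products can overlap on components. The product and component names are invented for illustration:

# Minimal sketch: a product level test repo can declare which components make
# up each product, so its test harness knows what to deploy and exercise.
# The product and component names are hypothetical.
PRODUCTS = {
    "storefront": ["orders", "inventory", "payments", "web-ui"],
    "partner-api": ["orders", "inventory", "partner-gateway"],
}

def components_for(product: str) -> list[str]:
    # Return the components this product's end-to-end tests must cover.
    return PRODUCTS[product]

# Example: the harness in this repo would deploy exactly these components
# into a local environment and then run the storefront's test suite.
print(components_for("storefront"))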
