If you are a believer in Agile methods, but don't like Test-Driven Development (TDD), this site is for you!

Many people in the Agile community feel that TDD is a niche technique that works really well for some people but is awful for others.

Sometimes TDD fans go too far, insisting that everyone should use TDD and even claiming that if you don't use TDD, you are not fully Agile, or that you have not given TDD a fair chance. We disagree, and we explain why here.

Be Agile without TDD!

An Example

The sample project is a product called pdd-example. It is a Java command-line program that is triggered by an AWS Lambda function. It reads two files from AWS S3, each containing a list of customer IDs, combines the two sets of IDs, removing duplicates, and writes the combined list to a third file.
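
At its core, the product's job is a set union of two ID lists. A minimal sketch of that core step in plain Java - leaving out the Lambda and S3 plumbing, and using illustrative class and method names that are not taken from the actual repos - might look like this:

import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

public class IdMerger {

    /** Combine two lists of customer IDs, removing duplicates
     *  while preserving the order in which IDs first appear. */
    public static List<String> merge(List<String> first, List<String> second) {
        Set<String> combined = new LinkedHashSet<>(first);
        combined.addAll(second);
        return List.copyOf(combined);
    }
}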

To build the product, a backlog of four stories has been defined, here.

Initially a single source code repo is created,
https://gitlab.com/cliffbdf/pdd-example
In the course of working through the backlog, subcomponents are defined, resulting in the creation of additional repos - one for each subcomponent:
pdd-example-serverles
pdd-example-compa
pdd-example-compb
The main product-level repo lists these subcomponent repos. It contains the overview of the product, as well as the product-level integration tests and the product-level failure mode tests.

When a story is undertaken, the first step is to understand the story’s intent and its acceptance criteria. The second step is to define the algorithms needed to implement the story. These algorithms are maintained in a product-level design, which - ideally - is a page in a wiki that supports diagramming, for ease of update and ready access. For this example, I use a Google doc, since that is available on the Internet.

For sprint 1, there are two stories. (All four stories are listed with their acceptance criteria here.) These are,
Story 1.1: As a user, I can provide a file containing user IDs, so that the file can be read.

Story 1.2: As a user, I want to provide a second file that gets read.

To complete story 1.1, the algorithm might be (in pseudocode),
Algorithm 1:

(Overview: Read a list of account IDs from a file, verifying the syntax of each ID.)

Given command line arguments fileName;

def file : File ← open fileName for reading sequentially;
register file closer to flush and close on program exit.

readFile(file).

function readFile(file : File) {
    for ever, // each line of file
        try to read a line from file;
            if no-more-lines, break from for.
            if any other error, exit with error.
        try,
            def id : ID ← parse a valid ID from line;
        if error, log the error and continue with for.
}
 The algorithm should be defined in the evolving design wiki page.
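
To make Algorithm 1 concrete, here is a minimal Java sketch of how it might be coded. The class name, the ID syntax check, and the choice of java.nio and java.util.logging are illustrative assumptions; the actual implementation in the repos may differ.

import java.io.BufferedReader;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.logging.Logger;

public class IdFileReader {

    private static final Logger LOG = Logger.getLogger(IdFileReader.class.getName());

    public static void main(String[] args) throws IOException {
        Path file = Path.of(args[0]);  // fileName from the command line
        // try-with-resources plays the role of "register file closer ... on program exit"
        try (BufferedReader reader = Files.newBufferedReader(file)) {
            readFile(reader);
        }
    }

    static void readFile(BufferedReader reader) throws IOException {
        String line;
        while ((line = reader.readLine()) != null) {  // loop until no-more-lines
            try {
                parseId(line);  // def id : ID <- parse a valid ID from line
            } catch (IllegalArgumentException e) {
                LOG.warning("Invalid ID '" + line + "': " + e.getMessage());  // log and continue
            }
        }
    }

    // Illustrative syntax check; the real ID format is defined by the stories.
    static String parseId(String line) {
        String trimmed = line.trim();
        if (!trimmed.matches("\\d+")) {
            throw new IllegalArgumentException("IDs are assumed to be numeric");
        }
        return trimmed;
    }
}

Any I/O error other than end-of-file propagates as an IOException out of main, which terminates the program with an error, matching the "exit with error" step of the pseudocode.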

Note that the AWS Lambda and AWS S3 aspects are not mentioned in the algorithm. That is because those aspects are largely implementation details, and so it was not felt that algorithmic specifications were warranted. Nevertheless, they consume a good amount of code, as can be seen in the source code repos. Deciding what merits an algorithmic specification is a matter of judgment.
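
For readers who want a sense of what that plumbing looks like, here is a hedged sketch of a Lambda handler that fetches one of the ID files from S3 and hands it to the algorithm above. It assumes the AWS SDK for Java v2 and the aws-lambda-java-core interface; the event shape, bucket, and key names are placeholders, and the actual repos may structure this quite differently.

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import software.amazon.awssdk.core.ResponseInputStream;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.GetObjectRequest;
import software.amazon.awssdk.services.s3.model.GetObjectResponse;

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.UncheckedIOException;
import java.nio.charset.StandardCharsets;
import java.util.Map;

/** Illustrative Lambda entry point; the real handler and event type may differ. */
public class IdFileHandler implements RequestHandler<Map<String, String>, String> {

    private final S3Client s3 = S3Client.create();

    @Override
    public String handleRequest(Map<String, String> event, Context context) {
        GetObjectRequest request = GetObjectRequest.builder()
                .bucket(event.get("bucket"))   // placeholder bucket name
                .key(event.get("key"))         // placeholder object key
                .build();
        try (ResponseInputStream<GetObjectResponse> object = s3.getObject(request);
             BufferedReader reader = new BufferedReader(
                     new InputStreamReader(object, StandardCharsets.UTF_8))) {
            IdFileReader.readFile(reader);     // reuse the Algorithm 1 sketch
            return "ok";
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}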

To complete story 1.2, the algorithm would be enhanced to perform the same action, but for two different files instead of only one.

In sprint 2, two more stories are added, and these require the algorithm to be enhanced, and some additional supporting algorithms to be defined. The final algorithms are shown in the evolving design document, here, as it would look after sprint 2.

Testing


The design defines the complete product testing approach. This consists of a product-level testing strategy, and a subordinate testing strategy for each subcomponent.

Each testing strategy is merely a table. The table lists the kinds of tests to be run, how those tests will be implemented (e.g., which tool, such as JUnit, Cucumber, or Karate, and whether other components are to be mocked), where the tests must be able to be run, and how the project teams will ensure that the testing is thorough enough.

Each component has a “test pipeline”, and there is also a product-level integration pipeline.

It is assumed that tests will all be automated unless otherwise specified. It is also highly desirable that a test type not be confined to a single kind of test environment: we need to be able to “shift left” the tests so that developers can run them locally before committing their changes. Thus, while a test pipeline often runs a high-coverage regression suite, it is anticipated that programmers will run enough integration tests locally that the pipeline runs almost always pass. In other words, programmers do not use the pipelines to find out whether their code changes work: they find that out locally, and the pipeline is merely an official and thorough check.
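
As an illustration of a test that runs the same way on a developer's workstation and in the pipeline, here is a hedged JUnit 5 sketch of a test for the merge step. The class under test, IdMerger, comes from the earlier sketch rather than the actual repos, and a real product-level integration test would also exercise the S3 boundary, typically against a mocked or local S3 endpoint.

import org.junit.jupiter.api.Test;

import java.util.List;

import static org.junit.jupiter.api.Assertions.assertEquals;

class IdMergerTest {

    @Test
    void mergeRemovesDuplicatesAndPreservesOrder() {
        List<String> first = List.of("100", "200", "300");
        List<String> second = List.of("200", "400");

        List<String> combined = IdMerger.merge(first, second);

        assertEquals(List.of("100", "200", "300", "400"), combined);
    }
}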
