To build the product, a backlog of four stories has been defined, here.
Initially a single source code repo is created, https://gitlab.com/cliffbdf/pdd-example
In the course of working through the backlog, subcomponents are defined, resulting in the creation of additional repos - one for each subcomponent:
pdd-example-serverless
pdd-example-compa
pdd-example-compb
The main product-level repo lists these subcomponent repos. The main repo is used to contain the overview of the product, as well as the product-level integration tests and product-level failure mode tests.
When a story is undertaken, the first step is to understand the story’s intent and its acceptance criteria. The second step is to define the algorithms needed to implement the story. These algorithms are maintained in a product level design, which - ideally - is a page in a wiki that supports diagramming, for ease of update and ready access. For this example, I use a Google doc, since that is available on the Internet.
For sprint 1, there are two stories. (All four stories are listed with their acceptance criteria here.) These are,
Story 1.1: As a user, I can provide a file containing user IDs, so that the file can be read.
Story 1.2: As a user, I want to provide a second file that gets read.
To complete story 1.1, the algorithm might be as follows (in pseudocode). The algorithm should be defined in the evolving design wiki page.
Algorithm 1:
(Overview: Read a list of account IDs from a file, verifying the syntax of each ID.)
Given command line arguments fileName;
def file : File ← open fileName for reading sequentially;
register file closer to flush and close on program exit.
readFile(file).
function readFile(file : File) {
    for ever,  // each line of file
        try to read a line from file;
        if no-more-lines, break from for.
        if any other error, exit with error.
        try,
            def id : ID ← parse a valid ID from line;
        if error, log the error and continue with for.
}
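To make the pseudocode concrete, here is a minimal sketch of what an implementation might look like, in Java. The class name, the ID type, and its syntax rule are hypothetical stand-ins of my own, not taken from the example repos.

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

public class ReadIdsFile {

    public static void main(String[] args) throws IOException {
        String fileName = args[0];
        // try-with-resources plays the role of "register file closer to flush and close on program exit"
        try (BufferedReader file = new BufferedReader(new FileReader(fileName))) {
            readFile(file);
        }
    }

    static void readFile(BufferedReader file) throws IOException {
        for (;;) {                                  // each line of file
            String line = file.readLine();          // any other I/O error propagates and ends the program
            if (line == null) break;                // no-more-lines
            try {
                ID id = ID.parse(line.trim());      // parse a valid ID from line
            } catch (IllegalArgumentException e) {
                System.err.println("Invalid ID: " + line);  // log the error and continue with for
            }
        }
    }

    // Minimal stand-in for the product's ID type (hypothetical syntax rule).
    static class ID {
        final String value;
        ID(String value) { this.value = value; }
        static ID parse(String s) {
            if (!s.matches("[A-Za-z0-9]+")) throw new IllegalArgumentException("bad ID: " + s);
            return new ID(s);
        }
    }
}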
Note that the AWS Lambda and AWS S3 aspects are not mentioned in the algorithm. That is because those aspects are largely implementation details, and so it was not felt that algorithmic specifications were warranted. Nevertheless, they consume a good amount of code, as can be seen in the source code repos. Deciding what merits an algorithmic specification is a matter of judgment.
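For a sense of what that glue code involves, a hypothetical AWS Lambda handler (using the AWS SDK for Java and the Lambda events library) might wrap the same readFile logic roughly as follows; the class name and event wiring here are illustrative only, not the actual repo code.

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.events.S3Event;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;

// Hypothetical: invoked when a user uploads an ID file to an S3 bucket.
public class ReadIdsHandler implements RequestHandler<S3Event, String> {

    private final AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();

    @Override
    public String handleRequest(S3Event event, Context context) {
        // One record per uploaded object in the triggering event.
        event.getRecords().forEach(record -> {
            String bucket = record.getS3().getBucket().getName();
            String key = record.getS3().getObject().getKey();
            try (BufferedReader reader = new BufferedReader(
                    new InputStreamReader(s3.getObject(bucket, key).getObjectContent()))) {
                ReadIdsFile.readFile(reader);   // the algorithmic part stays unchanged
            } catch (IOException e) {
                throw new RuntimeException(e);  // fail the invocation on I/O error
            }
        });
        return "done";
    }
}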
To complete story 1.2, the algorithm would be enhanced to perform the same action, but for two different files instead of only one, as sketched below.
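Continuing the illustrative Java sketch above, the story 1.2 enhancement could be as small as accepting a second file name and reading both files the same way:

    public static void main(String[] args) throws IOException {
        // Story 1.2: the user provides a second file; both files are read identically.
        for (String fileName : new String[] { args[0], args[1] }) {
            try (BufferedReader file = new BufferedReader(new FileReader(fileName))) {
                readFile(file);
            }
        }
    }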
In sprint 2, two more stories are added; these require the algorithm to be enhanced and some additional supporting algorithms to be defined. The final algorithms are shown in the evolving design document, here, as it would look after sprint 2.
Testing
The design defines the complete product testing approach. This consists of a product-level testing strategy, and a subordinate testing strategy for each subcomponent.
Each testing strategy is merely a table. The table lists the kinds of tests to be run, how those tests will be implemented (e.g., with which tool - JUnit, Cucumber, Karate, and so on - and whether other components are to be mocked), where the tests must be able to be run, and how the project teams will ensure that the testing is thorough enough.
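For illustration, one row of a subcomponent's testing strategy table might look like this (the specifics are hypothetical, not taken from the example repos):

Test type: component-level integration tests
How implemented: JUnit, with the other subcomponent mocked
Where runnable: developer laptop, and the component's test pipeline
How thoroughness is ensured: test cases reviewed against each story's acceptance criteria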
Each component has a “test pipeline”, and there is also a product level integration pipeline.
It is assumed that tests will all be automated unless otherwise specified. It is also highly desirable that a test type not be confined to a single kind of test environment: we need to be able to "shift left" the tests, so that developers can run them locally before committing their changes. Thus, while a test pipeline typically runs a high-coverage regression suite, it is anticipated that programmers will run enough integration tests locally that the pipeline runs almost always pass. In other words, programmers do not use the pipelines to find out whether their code changes work: they do that locally, and the pipeline is merely an official and thorough check.