
Pipeline Anti-Patterns

Don't Stop at Component Pipelines

Organizations that try to "adopt DevOps" are quick to set up "pipelines" for their components. The concept of a continuous deployment (CD) "pipeline" is central to DevOps. However, merely implementing pipelines is not unlike what has been described as "cargo cult" Agile, whereby teams copy Agile practices without actually understanding them.

The idea of a pipeline is that it is an automated and repeatable sequence of scripted processes. However, a true DevOps process is usually not a pipeline, but a set of pipelines, some of which converge. I explain this in Product Level PDD.
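
To make that concrete, here is a minimal sketch (the component names, artifact repository, and script paths are invented for illustration): each component pipeline publishes a versioned artifact, and a product-level pipeline is the point where those component pipelines converge.

    #!/usr/bin/env bash
    # product-pipeline.sh -- the point where the component pipelines converge.
    # Assumes each component pipeline has already published its artifact to a
    # shared repository. All names and paths below are illustrative.
    set -euo pipefail

    ARTIFACT_REPO="${ARTIFACT_REPO:-https://artifacts.example.com}"
    mkdir -p build

    # Pull the latest artifact produced by each component's own pipeline.
    for component in order-service payment-service web-ui; do
        curl -fsSL -o "build/${component}.tar.gz" \
            "${ARTIFACT_REPO}/${component}/latest.tar.gz"
    done

    # Assemble and test the product as a whole -- the step that no single
    # component pipeline can perform by itself.
    ./pipeline/assemble-product.sh
    ./pipeline/run-product-tests.sh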

Don't Have a Mandated Pipeline

Another thing that organizations that try to "adopt DevOps" do is create "standard" pipelines. That is, they create a standard Jenkins "pipeline script", maintained by a "DevOps" group, which all teams are expected to use without change.

That is not unlike the practice of having a separate group create servers for you, or create middleware instances for you, or create Git branches for you. If you have dismantled enterprise practices of doing those things for teams, then why repeat the same approach by having an enterprise silo team create everyone's pipeline for them?

One of the arguments for doing that is that pipelines implement steps that are mandated by "controls", such as security scanning. Another argument is that teams don't know how to create a pipeline on their own. Still another argument is that "we created a pipeline so that the team doesn't have to worry about all that, and can focus on the code". That's a step backwards! We want teams to understand "all that"! Today's dev teams need to understand the entire continuous delivery process - that is what DevOps is all about!

These arguments for a standard unmodifiable pipeline simply do not hold up to scrutiny. Controls are a poor technique for managing the risks of writing and deploying one's own software. It is far more effective - more flexible and more robust - to manage risk directly. Controls are too rigid for today's dynamic situations in which new tools are appearing all the time.

Regardless, even if certain steps are required in a pipeline, that requirement can be met by simply checking that those steps were performed - and one can even automate the check. It is not uncommon for organizations to have a deployment process that verifies that the required steps ran, as evidenced by the artifacts that they produce.
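
For example, here is a minimal sketch of such an automated check (the artifact file names are invented for illustration): a deployment gate that simply verifies that each required step left its expected evidence behind.

    #!/usr/bin/env bash
    # check-controls.sh -- verify that required steps ran, as evidenced by
    # the artifacts they produce. All file names below are illustrative.
    set -euo pipefail

    required_artifacts=(
        "reports/security-scan.json"   # evidence of the security scan
        "reports/unit-tests.xml"       # evidence that the unit tests ran
        "reports/license-audit.txt"    # evidence of the license audit
    )

    for artifact in "${required_artifacts[@]}"; do
        if [ ! -s "${artifact}" ]; then
            echo "Missing or empty ${artifact}: a required step did not run" >&2
            exit 1
        fi
    done
    echo "All required steps produced their evidence; OK to deploy."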

Having a rigid pre-defined pipeline that is maintained by an enterprise group removes the team from the decisions of what that pipeline needs to do and how it does it. It prevents them from learning, and it prevents them from tailoring the pipeline to their situation, which is almost always very beneficial.

For example, it is common that pre-defined pipelines contain a step for performance testing, whereby the team merely sets a number for the peak load to generate and identifies the curl requests to send. The result of that is that teams will say, "The performance tests got run - check". But do they understand what actually happened? Have they done exploratory performance testing? Have they done failure mode testing to see how things fail? By pre-defining the performance test script, the team stops thinking about it, and it becomes a check-the-box action that is not very effective.

Also, by understanding how their code is integrated, tested, and deployed, teams can shift those activities left, so that they can run the same actions locally, outside of Jenkins - thereby improving their productivity and speed. If you take the process out of their hands, they can’t improve it: it becomes a black box, maintained by an “enterprise team”, just like in the old days: the “pipeline engineering team” becomes the new “Operations”.

Having teams create self-service tools, such as a pipeline for dev teams to use, is useful; but those tools should be configurable - not locked-down black boxes. In-house tools also need to be transparent (easy to see what the tool is doing and how) and reliable: it is counter-productive to have brittle in-house tools that cause more problems than they solve. If you can't make a tool robust and "hardened", then don't make it at all.

Make Sure Pipeline Steps Can Run Outside the Pipeline

Another big mistake that enterprises trying to "adopt DevOps" make is that they assume that everyone will be using the pipeline for testing.

A pipeline is not for the testing that programmers do. A pipeline serves two purposes: (1) verify that programmers actually did test their code changes, and (2) run tests that programmers simply cannot run locally.

When a programmer merges code into a repo branch that others are pulling from, the code should either be disabled by a feature flag, or it should already have been tested, so that when the pipeline runs tests against it, the tests will pass.

Jenkins - the pipeline - is not for testing one's code changes! It is for full regression testing, which is not practical to do locally. It is for running tests in a production-like environment - again, something that might be hard to do locally, although with cloud accounts one can often come close to that. Jenkins is also for running performance tests, and doing production deployments that require privilege.

It is not for testing your code changes. To do that, you should deploy what you need locally - all of the upstream and downstream components that you need - and run your tests there. On your laptop, or in a cloud test account that you alone have access to. That's "shift-left" integration testing.
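
As an illustrative sketch of that (the service names and scripts are hypothetical, and the example assumes the components are available as container images in a docker-compose.yml), one might stand up the needed neighbors locally and run the team's integration tests against them:

    #!/usr/bin/env bash
    # local-integration-test.sh -- shift-left integration testing on a laptop.
    # Service and script names are illustrative; assumes a docker-compose.yml
    # that defines this component plus its upstream and downstream neighbors.
    set -euo pipefail

    # Stand up the component under test along with the neighbors it talks to.
    docker compose up -d order-service payment-service database

    # Run the same test stage script that the pipeline runs (see the sketch
    # further down), pointed at the local deployment instead of a shared one.
    TARGET_ENV=local APP_URL=http://localhost:8080 ./pipeline/run-integration-tests.sh

    # Tear it all down when done.
    docker compose down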

The problem is, when enterprise groups define "standard" pipelines for teams, they often don't know about shift-left testing, and they code the pipeline in a way that prevents any of its stages from being run locally. That prevents teams from doing shift-left testing. Teams then have to fall back to running only component-level tests locally - an obsolete approach that today is an anti-pattern.

Each stage of a Jenkins pipeline script should merely call a shell script (not a Jenkins Groovy script) that runs that stage. That shell script should be runnable in any environment, including locally on one's laptop. It might be necessary to mock some enterprise services to achieve that - things like authentication and authorization - but as little as possible should be mocked. The scripts should be parameterized so that the scripts themselves are environment agnostic, and not tied to Jenkins.
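
Here is a minimal sketch of what that can look like (the script name, parameters, and test command are invented for illustration). The Jenkins stage does nothing but call the script - e.g., sh './pipeline/run-integration-tests.sh' - and the script itself takes everything environment-specific as parameters:

    #!/usr/bin/env bash
    # pipeline/run-integration-tests.sh -- one pipeline stage as a standalone script.
    # From Jenkins:  sh './pipeline/run-integration-tests.sh'
    # From a laptop: TARGET_ENV=local APP_URL=http://localhost:8080 \
    #                ./pipeline/run-integration-tests.sh
    set -euo pipefail

    # Everything environment-specific comes in as a parameter, so the script
    # is environment agnostic and has no dependency on Jenkins.
    TARGET_ENV="${TARGET_ENV:?set TARGET_ENV, e.g. local, test, or staging}"
    APP_URL="${APP_URL:?set APP_URL, e.g. http://localhost:8080}"

    # Mock as little as possible: here, only an enterprise auth service is
    # substituted when running locally (the mock endpoint is illustrative).
    if [ "${TARGET_ENV}" = "local" ]; then
        export AUTH_URL="http://localhost:9090/mock-auth"
    fi

    echo "Running integration tests against ${APP_URL} (${TARGET_ENV})"
    ./run-tests.sh --suite integration --base-url "${APP_URL}"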

Factor 5 of the Twelve-Factor App ("Build, release, run") stresses keeping the stages of delivery strictly separate. Merely stringing steps together into a Jenkins pipeline does not achieve that separability: the steps remain entangled with Jenkins and cannot be run outside of it.

Debugging "in Jenkins" is really difficult. It is a return to the batch processing of the 1980s. Make it possible for people to run the pieces of their pipeline in real time. Then they can have a red-green cycle, which is the highest-productivity mode for development.
