Scaling Humans With the SDLC

It’s well understood that the vast majority of people turn up to work every day wanting to do a great job and to be recognised for their efforts in whatever way grants the dopamine hit we all so desire. If this holds true, we should also expect every workplace, large or small, to operate like a well-oiled machine, yet we know this isn’t the case. So what’s the happs?

If we accept that teams follow leadership’s steer, then teams that go off course are surely being led off course. Leadership exists to direct and steer teams towards the outcomes they need to see realised, often using tools such as incentives and metrics to measure progress. Our trusty Key Performance Indicators have entered the building! In and of themselves, metrics are just tools that specify a method for measuring progress. They only become nefarious when used in nefarious ways.

The problem I have often seen is that, once we start measuring progress, we lose sight of the outcome we set out to achieve and hyper-focus on the metrics as the signal of success. Maybe a couple of examples will set the scene …

Tools such as SonarQube are often used to gate pull requests on metrics such as code coverage (commonly 80%). These are core metrics that measure progress towards resilient, decoupled, high-quality software. It often turns out, however, that engineers quickly learn to game the system by testing low-risk logic or writing the bare minimum of test cases to get past the tool, and the outcome is often the opposite of what we intended.
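To make the gaming concrete, here is a hypothetical sketch (the function and test names are invented for this post, not taken from any real codebase): both tests below lift line coverage past a gate, but only one of them would catch a broken calculation.

```python
# Hypothetical illustration: two tests that both raise coverage,
# but only one actually verifies behaviour.

def apply_discount(price: float, percent: float) -> float:
    """Apply a percentage discount, never going below zero."""
    if percent < 0 or percent > 100:
        raise ValueError("percent must be between 0 and 100")
    return max(price * (1 - percent / 100), 0.0)

def test_gamed_coverage():
    # Executes the happy path and satisfies an 80% coverage gate,
    # yet would still pass if the calculation were completely wrong.
    apply_discount(100.0, 10.0)

def test_meaningful():
    # Verifies the outcome we actually care about.
    assert apply_discount(100.0, 10.0) == 90.0
    assert apply_discount(100.0, 100.0) == 0.0
```

Both tests turn the coverage dial the same amount; only the second moves us towards the outcome the metric was supposed to represent.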

Another common example occurs when hiring targets are set to scale a team to meet growth forecasts. All too often, the focus shifts to the number of people needed rather than the skill levels or composition of the teams we need to build. The outcome we desired was increased engineering capacity, yet we asked for increased headcount.

If we look at the incentives set above, we can see that the people given them have duly gone forth and tried to achieve them. You could argue that they lacked the context behind the incentive, but did they? Engineers should know the value of unit tests beyond coverage metrics, and recruitment staff absolutely understand the importance of hiring the right people. But these were not the metrics by which they were measured, not the metrics raised in reviews, and not the metrics that determined their bonuses or promotions. Those numbers though, they mattered.

Consider the group of people from “leadership” to the “doers”, their channels of communication, their tools, the way in which they are organised, the method by which they take in new work, and so on, as a complex system. What we see in the examples above is that changes made to such a system with the best of intentions have produced unintended side effects and the wrong outcomes.

In complex software systems, we have your friend and mine, the Software Development Lifecycle (SDLC), to help us. It deals with this by applying small, incremental changes and verifying the behaviour of the system as we go, ensuring we remain on course. If we distil the SDLC down to its bare essentials, we can apply it to any system:



```mermaid
---
config:
  themeVariables:
    fontSize: "24px"
---
graph LR;
    A@{ shape: card, label: "compare current state to desired (requirements)" }
    B@{ shape: card, label: "hypothesise a path to the desired state (design)" }
    C@{ shape: card, label: "implement the required change (implementation)" }
    D@{ label: "measure (monitoring/alerting)" }
    A ==o B
    B ==o C
    C ==o D
    D ==o A
```
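That feedback loop can also be sketched in code. This is a minimal illustration, with invented names, of iterating small corrections and measuring after every change rather than making one big-bang move:

```python
# Minimal sketch of the distilled loop; all names here are invented
# for illustration, not taken from any framework or methodology.

def sdlc_step(current, desired, design, implement, measure):
    gap = desired - current                  # compare current state to desired
    change = design(gap)                     # hypothesise a path to the desired state
    new_state = implement(current, change)   # implement the required change
    return measure(new_state)                # measure (monitoring/alerting)

# Example: closing a code-coverage gap with small, verified increments.
state, target = 60.0, 80.0
for _ in range(4):
    state = sdlc_step(
        state,
        target,
        design=lambda gap: min(gap, 5.0),    # small, incremental change
        implement=lambda s, change: s + change,
        measure=lambda s: s,                 # stand-in for real monitoring
    )
print(state)  # 80.0
```

The point is the shape of the loop: measurement feeds the next comparison, so drift is caught after one small step instead of after one large misstep.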

When we get to that last step, it’s important to understand both what we measure and what we measure against.

What we measure - assess your trajectory from numerous angles to keep a balanced perspective on your state. This also helps avoid focusing on the metric rather than the outcome. For example, code coverage, defect, helpdesk call, and build failure metrics together give you a broader view of your software quality than any one metric in isolation.
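As a hypothetical sketch of that balanced view (the signal names, thresholds, and the simple all-signals verdict are made up to illustrate the idea, not an established formula):

```python
# Hypothetical sketch: combine several quality signals rather than
# gating on any single metric. Thresholds here are invented.

def quality_view(coverage_pct, open_defects, helpdesk_calls, failed_builds):
    """Combine several quality signals instead of gating on one metric."""
    signals = {
        "coverage_ok": coverage_pct >= 80,
        "defects_ok": open_defects <= 5,
        "helpdesk_ok": helpdesk_calls <= 10,
        "builds_ok": failed_builds <= 2,
    }
    # High coverage alone cannot make this pass; every angle must look healthy.
    signals["balanced_ok"] = all(signals.values())
    return signals

# 92% coverage, but a spike in helpdesk calls still flags a problem.
view = quality_view(coverage_pct=92, open_defects=4,
                    helpdesk_calls=25, failed_builds=1)
print(view["balanced_ok"])  # False
```

An engineer can game any one of these numbers; gaming all of them at once is much closer to actually doing the work.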

What we measure against - long-term outcomes are the ones that keep you on track, so stay razor-focused on these. Your progress towards them is what defines on versus off track. Decisions around short-term outcomes should also move you towards your long-term ones. In software, this might mean prioritising adherence to the architecture over short-term delivery goals.

If we iterate over our system in this manner, making small corrections as we go, we might avoid some of those unintended side effects or at least catch them early. By doing so we ensure that the systems our teams operate in create paths of least resistance towards the outcomes we want to realise.

Most of what I’ve said takes concepts from software development and reimagines them applied to a system that includes humans and the tools they use. If you are acquainted with the work of W. Edwards Deming, this will be a familiar, if reversed, approach. Dr. Deming is largely known for lean manufacturing, which was the inspiration for DevOps, so it stands to reason that if you broaden the SDLC to apply to a human system, you come more or less full circle to Dr. Deming’s work. If you’re unfamiliar with Dr. Deming, you can find more information about his work at the Deming Institute.

"A bad system will beat a good person, every time."

- Dr W Edwards Deming