Feedback is routinely requested and occasionally considered; actually acting on it is far less common, unfortunately. Perhaps this is due to the lack of a practical approach grounded in a focused understanding of feedback loops and how to leverage them. We’ll look at feedback loops: the purposeful design of a system or process to effectively gather feedback and enable data-driven decisions and behavior based on what is collected. We’ll also look at some potential issues and explore countermeasures for delayed feedback, noisy feedback, cascading feedback, and weak feedback. To do this, in this four-part series we follow newly onboarded associate Alice through her experience with an organization that needs to accelerate its value creation and delivery processes.
As you might remember from the previous articles, “Alice” joined a company working on a digital product and was tasked with accelerating delivery. The engineering organization was relatively small: about 50 engineers, with three cross-functional teams of six engineers each, plus shared services for data, infrastructure, and user acceptance testing (UAT).
Alice knows that code quality and maintainability are important attributes of fast digital delivery: a simple, clean code structure shortens the time needed to implement a new feature. She knew the ropes thanks to Robert Martin’s great books explaining the concept of clean code. So she asked the engineering teams whether they were addressing findings from Static Code Analysis (SCA) tools, which can detect code quality issues. The engineering teams assured Alice that SCA is an explicit part of the definition of done for every feature.
However, when Alice looked at the SCA report, she had a hard time finding a reasonable explanation for the large number of open issues. When she observed how engineers followed the definition of done, she found that some of them strictly followed what was prescribed and some did not. This is what we call a weak feedback loop: feedback that can be skipped, or whose results can be ignored.
The adverse effects of weak feedback are:
- accumulation of quality debt
- slower delivery, because the skipped work resurfaces later as unplanned work
There are several options to address such a situation. We need to shift the feedback collection left, running it as early as possible and making it a mandatory quality gate. In Alice’s case, it was possible to introduce SCA as part of pull request verification, making it impossible to approve the merge while issues remained unresolved; alternatively, such feedback can be enforced after the merge. Quality gate enforcement is a successful mitigation strategy; however, introducing it abruptly on top of accumulated debt may lead to pushback from the business side, since cleaning up that debt takes time and causes churn. We would recommend enforcing the quality gate incrementally, tightening it as capabilities improve.
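One common way to enforce a quality gate incrementally is a “ratchet”: the pull request check fails only if the issue count grows beyond a committed baseline, and the baseline is lowered whenever a change reduces the debt. The sketch below illustrates the idea; `sca_baseline.json` and the helper names are hypothetical, and a real setup would read the issue count from your SCA tool’s report export.

```python
"""Sketch of an incremental ("ratchet") quality gate for a PR check.

Assumptions: the baseline file name and helpers below are hypothetical;
in practice the current issue count would come from the SCA tool's report.
"""
import json
from pathlib import Path

# Hypothetical baseline committed alongside the code.
BASELINE_FILE = Path("sca_baseline.json")


def load_baseline() -> int:
    """Read the highest issue count we currently tolerate."""
    return json.loads(BASELINE_FILE.read_text())["max_issues"]


def check_gate(current_issues: int, baseline: int) -> bool:
    """Pass if the change does not add issues; debt may shrink, never grow."""
    return current_issues <= baseline


def tighten_baseline(current_issues: int, baseline: int) -> int:
    """When a change reduces debt, lock in the improvement (the ratchet)."""
    return min(current_issues, baseline)
```

Because the gate only forbids regressions, teams can pay down the accumulated debt at their own pace without blocking everyday delivery.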
Another aspect to take into account when introducing a quality gate at the pull request level, before the code even merges into the product, is infrastructure cost. The more engineers you have, the higher the frequency of pull requests, and the more robust and scalable the infrastructure needed to run all the required feedback activities. Fragile infrastructure leads to a noise problem and, in turn, to pushback from the team, undermining your effort to move beyond weak feedback. As part of a strategy to address weak feedback, make sure that your feedback noise is mitigated and the infrastructure is reliable.
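One pragmatic way to keep infrastructure flakiness from drowning out real signal is to rerun a failing check a bounded number of times while recording how many attempts were needed. A pass on a retry keeps the gate usable, but a retry count above one is itself feedback: that step is flaky and should be fixed, not silently tolerated. A minimal sketch, assuming `run_check` is any callable wrapping your SCA or test invocation:

```python
"""Sketch of a bounded-retry noise filter for a CI feedback step.

Assumption: `run_check` is a hypothetical callable returning True on
success; a real pipeline would wrap the SCA or test command with it.
"""
from typing import Callable, Tuple


def run_with_retries(run_check: Callable[[], bool],
                     max_attempts: int = 3) -> Tuple[bool, int]:
    """Rerun a check up to max_attempts times.

    Returns (passed, attempts_used). A pass on any attempt counts as a
    pass, but attempts_used > 1 signals flakiness worth investigating.
    """
    for attempt in range(1, max_attempts + 1):
        if run_check():
            return True, attempt
    return False, max_attempts
```

Retries trade extra runtime for less noise, so the attempt counts should feed a dashboard that points the team at the flakiest steps first.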
To conclude this four-part series, we would like to reiterate the importance of looking at the organization of digital product delivery work through the prism of feedback loops, especially:
- which quality attributes are important,
- how fast you can deliver quality feedback,
- how accurately it reflects reality, and
- how you manage impacts and dependencies in the case of cascaded feedback.