Reasons why some software delivery teams don’t give a damn about their customers
It feels like a century ago, but once upon a time, rather less than a century ago, I was leading a traditional test team in an organization where three separate teams of Business Analysts, Developers and Testers were delivering software in an incremental, iterative, death-march style. Each of the three teams had its own leads and managers, and each was measured by specifically tailored metrics. My team's efficiency was measured by the mighty DDI (Defect Detection Index), calculated as DDI = (Number of defects detected during testing / Total number of defects detected, including production defects) * 100. The DDI had to be greater than 90%, otherwise our team would be deemed inefficient, bonuses would be dropped, and the test team itself branded a bunch of losers.
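To make the formula concrete, here is a minimal sketch of the DDI calculation; the defect counts are illustrative numbers, not figures from the team described above:

```python
def ddi(defects_in_testing: int, defects_in_production: int) -> float:
    """Defect Detection Index: the percentage of all detected defects
    that were caught during testing rather than in production."""
    total_defects = defects_in_testing + defects_in_production
    return defects_in_testing / total_defects * 100

# Suppose the test team found 45 defects and 5 more escaped to production:
print(ddi(45, 5))  # 90.0 -- exactly on the threshold
```

Note how the incentive problem falls straight out of the arithmetic: the test team can push the numerator up by logging trivial defects, regardless of whether any of them matter to the customer.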
Yes, you guessed right: the other two teams were measured in a similar way. Their efficiency was also based on the number of defects, the lower the better.
God, I am glad this is all in the past. Even remembering it makes me sick to my stomach. Sick like every time a production defect was detected; sick like every time a defect our team detected was rejected; sick like every time I had to go to the triage meetings and inevitably argue with either the BA lead or the DEV lead, because the defect we had found was seen not as a possible improvement to the product but as a threat to some team's metric. I'm not even going to describe the awful discussions that followed the acceptance of a defect as valid, when a decision had to be made on whether the defect was due to bad requirements or bad code.
The funny thing was that no matter which team came out as efficient and which as inefficient, the software delivered was the same, no change whatsoever, and the customers remained consistently unhappy. The real value the metrics gave the department was the ability to point fingers based on numbers. They say numbers never lie; maybe numbers don't lie, but how many lies can we tell to fabricate numbers?
Since then many things have changed in my professional life, and today I don't have to fight stupid battles fabricating numbers to define efficiency, so I can, funnily enough, use my time more efficiently.
Why doesn't calculating confrontational metrics work? The problem is that we are human: attach prestige and monetary value to a metric, and the metric becomes the goal of the team, and the battle can begin. The test team doesn't care how useful the delivered product is; all they care about is opening as many defects as possible so that the mighty DDI doesn't drop below 90%, and if that means opening defects that are of absolutely no consequence to the customer but only to the development team and the schedule, it doesn't matter. The same logic applies to the development and BA teams, who will spend their time obfuscating their requirements and defending their code from the stupid defects opened by the test team. All this creates a climate of tension, distrust and hostility. Nobody really cares whether the customers are happy, as long as the individual teams' metrics solemnly declare their efficiency and fingers can be rightly pointed :-(.
The funny thing is that this problem is very easy to resolve, putting the focus back on the customer. Create a cross-functional, self-organising team able to analyse, develop, test and deliver a complex software project, and judge the team on how well it satisfies the customer's needs. The team lives as one, produces quality as one, delivers customer value as one, succeeds as one or fails as one. The goal of the team matches the goal of the company, and the failure or success of the team determines the failure or success of the company. It's called an agile team, try it out!