A lot of people say yes; my personal experience says no. It tells me that the most effective and efficient approach uses a test coach who gradually makes himself scarce, and it focuses on building the competencies needed to complete an activity (testing) in a collaborative context.
One aspect I find difficult to explain is the impact of a test coach approach on queues, so I will use a series of scenarios and pictures to illustrate my reasoning.
If you feel like substituting activity A = development and activity B = testing, feel free, but in my experience this approach is activity agnostic: I have applied it to analysis and to development, obtaining similar results.
Scenario 1 – Test specialist
With one specialist on activity B and many specialists on activity A, there is a problem: long queues form in “Ready for Activity B” (Case 1) or, worse, the specialist multitasks on many activities at the same time, slowing down the flow of work as per Little’s law (Case 2).
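For those who want to see Little’s law in numbers: on average, WIP = throughput × lead time, so lead time = WIP / throughput. A minimal sketch (the figures are invented for illustration; multitasking typically also lowers throughput through context switching, which makes things even worse):

```python
# Little's law: avg_wip = throughput * lead_time  =>  lead_time = avg_wip / throughput

def lead_time(avg_wip, throughput):
    """Average time an item spends in the system (days), given
    average work in progress (items) and throughput (items/day)."""
    return avg_wip / throughput

# Case 1: one specialist working one item at a time, 6 items queued up
print(lead_time(avg_wip=6, throughput=1.0))   # 6.0 days per item

# Case 2: same specialist multitasking on 12 items, with throughput
# halved by context switching
print(lead_time(avg_wip=12, throughput=0.5))  # 24.0 days per item
```

Doubling the work in progress while halving the throughput quadruples the lead time; that is the whole point of limiting WIP.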
Scenario 2 – Test coach
Stage 1: With one coach on activity B and many specialists on activity A, the coach initially pairs on activity B with the people who do activity A. This way we obtain several benefits:
The queue in “Waiting for Activity B” is reduced, as a person who normally performs activity A is busy pairing with the coach on an activity B task
By pairing on activity B, feedback loops are shortened
The activity A person acquires new skills to perform activity B by pairing with the coach
The quality of activity B increases, as it is a paired activity
Flow improves because of the first two points
Stage 2: When the cross-pollination of activity B skills starts to pay off, some activity A person will typically show a particular aptitude for activity B; this person can then pair with another, less skilled activity A person to perform activity B. This brings further benefits:
The queue in “Waiting for Activity B” is reduced, as more people with activity A skills are performing activity B
The flow of value improves and lead time decreases
More activity A people acquire skills in activity B
Stage 3: All activity A people are able to perform activity B. The coach can leave the team, returning only occasionally to check on progress. Benefits:
Activity A and activity B can be performed by every member of the team
The WIP limit can be changed to obtain maximum flow and eliminate the queue in “Ready for Activity B”.
The flow of value is maximised
The lead time is minimised
WARNING: I have applied this approach to help teams with testing for many years. It has worked in my context, giving massive improvements in throughput and reductions in lead time. This is not a recipe for every context, and it might not work in yours, but before you say it won’t work, please run an experiment and see if it is possible.
This is not the only activity a good test coach can help a team with; there are many shift-left and shift-right activities that will also reduce the dependency on activity B.
I have been told a million times “it will never work”. I never believed the people who told me and tried anyway; that’s why it worked.
Try it for yourself: if it doesn’t work, you will have learned something anyway.
The owner of a window shades company was asked by a business consultant, “What business are you in?” He initially answered, “We are in the window shade business.” Then he was asked, “When someone walks into your store, why do they want a window shade? What are you really selling?” He had to pause and think before he saw it:
“We’re in the light-control and privacy business – not the window shade business!”
By understanding his company’s real purpose, he was able to introduce new products that his customers loved.
This story reminds me of the testing community’s struggle to get buy-in from business owners. After years and years of hearing complaints that business owners don’t understand the importance of testing and don’t appreciate the hard work of testers, I have grown tired of it.
I believe it’s not that business owners don’t understand testers; I believe it is testers who don’t understand the business they are in.
The majority of the testers I know believe that they are either in
“The business of finding bugs”
“The business of sourcing information to help make decisions”
The testers I want to work with are in “the business of delighting their customers”, that is what we all need to be in.
If we make our business “delighting our customers” believe me, we will find a lot of innovative ways to do it as testers.
Forget about defects, forget about information, focus on delighting your customers and
you will find innovative ideas to improve the quality of your product
the business owners will love your work
A defect is anything that threatens the value of the product.
Before we start, let’s agree that:
1. we don’t want defects that threaten the value of our product
2. we want to give our customers as much value as possible at all times.
If you don’t agree with 1 and 2, then don’t waste your time and stop reading now.
Testers are normally associated with finding defects. Some testers get very protective of the defects they find, and some developers can be very defensive about the defects they wrote. Customers don’t like defects, developers don’t like defects, product managers don’t like defects; let’s be honest, nobody likes defects besides some testers.
Why would that be? The reason is that the focus of a lot of testers is on detecting defects, and that’s what they get paid for in a lot of organisations. If you are a tester and love your defects, you might find this article disturbing; if you decide to proceed, do so at your own peril.
Defects are waste
Let’s be clear from the start: defects are waste. Waste of time in designing defective products, waste of time in coding defective routines, waste of time in detecting them, waste of time in fixing them, waste of time in re-checking them. Even writing this sentence took a good while; now think how much time it takes you to produce, detect, fix and recheck defects.
Our industry has developed a defect coping mechanism that we call defect management. It is based on a workflow of detecting => fixing => retesting. Throughout the years it has become best practice (sic) to have defect management tools and to log and track defects. Defect management approaches are generally cumbersome, slow and costly, and they tend to annoy people, no matter whether you are a tester whose defect gets rejected, a developer whose by-design feature gets flagged as a defect, or a product manager who needs to spend time prioritising, charting and trending waste.
Another dangerous characteristic of defects is that they can be easily counted, and you will always find a pointy-haired manager who decides he is going to shed light on the health of his product and the efficiency of his team by counting and drawing colourful waste charts.
But if we agree that defects are waste, why are we logging and tracking waste? Creating waste charts seems even more ridiculous. Wouldn’t it be easier to try to prevent them?
Oh, if only we could write the right thing first and reduce the number of defects we produce! I say we can, be patient and read on.
Software development teams have found many ways of being creative playing with defects; see some examples below.
Example 1: Reward waste
Some years back I was working on a business-critical project in one of 5 scrum teams. Let me clarify first that our scrum implementation was at best poor: we didn’t release every sprint and our definition of done was questionable.
Close to an important release, we found ourselves in a situation where we needed to fix a lot of defects before going into production. We had 2 weeks and our teams had collectively around 100 defects to go through. Our CTO was very supportive of the defect-killing initiative and was eager to deliver with zero defects. He put in place a plan that included free food all day and night and some pampering for the developers, who needed to focus 100% on defect resolution. Then he decided to give a prize to the team that fixed the highest number of defects.
I remember feeling frightened of the possible future consequences of this reward. I spoke to the CTO and told him that I would have preferred a prize for the team that produced the fewest defects rather than the one that fixed the most. Our CTO was a smart guy and understood the value proposition of my objection; he changed his approach and spoke to the teams about how not introducing defects in the first place is much more efficient than fixing them after they have been coded. Soon after the release, we started applying an approach that focussed on preventing defects rather than fixating on detection. We never had the problem of fixing 100 bugs in 2 weeks again.
Example 2: Defect metrics
In my previous waterfall life, I remember when management introduced a performance metric directly linked to defects. Testers were to be judged on the Defect Detection Index calculated as (Number of Defects detected during testing / Total number of Defects detected including production)*100. An index lower than 90 would mean nobody in the test team would get a bonus. Developers were individually judged on the number of defects found in their code by the testers and business analysts were individually judged on the number of defects found by the testers in their requirements.
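If you want to see how easy it was to live or die by that index, here is the calculation spelled out (the defect counts below are invented for illustration):

```python
def defect_detection_index(found_in_testing, found_in_production):
    """DDI = defects caught during testing as a percentage of all
    defects detected, including those that escaped to production."""
    total = found_in_testing + found_in_production
    return found_in_testing / total * 100

# 85 defects caught in testing, 10 escaped to production
ddi = defect_detection_index(85, 10)
print(round(ddi, 1))   # 89.5 -- just below the 90 threshold: no bonus for anyone
print(ddi >= 90)       # False
```

Note how ten escaped defects out of nearly a hundred are enough to wipe out the whole team’s bonus – exactly the kind of cliff edge that turns prioritisation meetings into battles.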
Welcome to the battlefield!
The bug prioritisation meetings were battles where development managers argued that every bug was a missed requirement, product managers argued that every bug was a coding error or a tester misunderstanding, and the test lead (me) was simply shouted at and criticised for allowing his testers to go beyond the requirements and make use of their intellectual functions outside a scripted validation routine.
Going to that meeting was a nightmare; people completely forgot about our customers and simply wanted to get their metrics right. The amount of time we wasted arguing and defending our bonuses was astonishing. Instead of focusing on value delivery, we focussed on playing with defects – what a bunch of losers we were!
Our customers were very unhappy.
Example 3: Defects as non-conformance to requirements
In the same environment as Example 2, testers, in order to keep their Defect Detection Index high, used to raise large numbers of minor or insignificant “defects” that were in reality non-conformances to requirements. Funnily enough, such non-conformances were generally improvements.
Testers didn’t care whether they were requirement issues, code defects or even improvements; to them they were money, so they opened them. Improvements were filed as defects because they did not conform to the requirements. In most cases these were considered low-severity, and hence low-priority, defects to keep the testers happy, and they still had to be filed, reviewed, prioritised and used in trends, metrics and other useless calculations.
This activity could easily take 30% of a tester’s time. Such defects would not only take testers’ time, but would also affect developers, product managers and business analysts, and eventually clutter the defect management tool.
Waste that creates waste, exponentially, how wonderful.
Example 4: Defect charts, trends and other utter nonsense
Every week I had to prepare defect charts for management. These were extracted from our monstrous defect management tool and presented in brightly coloured useless charts. My manager got so excited at the prospect of producing useless information that she started a pet project to create charts that were more colourful than the ones I presented. She used 2 developers for 6 weeks to create this thing that was meant to wow the senior executives.
In the process of defining the requirements for wowing the big guys, she introduced a few new, even more useless charts and consolidated everything into an aggregating dashboard. She called it the product quality health dashboard; I secretly called it the dump.
Nobody gave a damn about the dashboard, nobody used the numbers for any reason, nobody cared that they could configure it, but my boss was extremely proud of it. A legend says that she got a big raise because of it. If you play with rubbish, then you will start measuring rubbish and eventually you will end up doing data analysis and showing a consolidated view of the rubbish you store in your code.
How can we avoid this?
1. Focus on defect prevention
Many development teams focus on delivering features fast, with little consideration for defect prevention. The theory is that testers (whose time is sometimes less expensive than developers’) will find the defects, which will be fixed later. This approach is a false economy: rework disrupts developers’ activities and harms the flow of value being delivered. There are many approaches available to development teams to reduce the amount of rework needed.
Do you want to prevent defects? You can try any combination of the below:
With BDD/ATDD/Specification by Example or another test-first approach, delivery teams test product owners’ assumptions through conversations and are more likely to produce the right feature the first time.
Fast feedback loops also allow for early removal of defects: automated unit and integration tests can help developers quickly identify potential issues and remove them before they get embedded into a feature.
Tight collaboration between business and delivery teams helps teams stay aligned with their real business goal and reduces the number of unnecessary features. This means less code and, as a consequence, fewer defects. Because your best piece of code is the one you won’t have to write.
Reducing complexity is very powerful in preventing defects: if we are able to break down a complex problem into many simple problems, we are likely to reduce the number of defects we introduce. Simple problems have simple solutions, and simple solutions have fewer defects than complex ones.
Good coding standards – for example, limiting the length of a method to a small number of lines, setting limits on cyclomatic complexity, and applying good naming conventions to help readability – also have a positive impact on the number of defects produced
Code reviews and pair programming greatly help reduce defects
Refactoring at all times also reduces defects in the long run
Moral of the story: If you don’t write defects, you will not have to fix them.
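To make the test-first and fast-feedback points above concrete, here is the kind of thing I mean, as a tiny sketch. The discount function and its business rule are invented for illustration; the point is that writing the test first forces the boundary question (“100 or more, or more than 100?”) into a conversation before any defective code exists:

```python
# Rule from the product owner: "orders of 100 or more get a 10% discount,
# smaller orders get none". The test is written first, straight from the
# conversation, so the boundary is agreed before the code is written.

def discounted_total(amount):
    """Apply a 10% discount to orders of 100 or more."""
    return amount * 0.9 if amount >= 100 else amount

def test_discount_boundary():
    assert discounted_total(99) == 99        # just below the boundary: no discount
    assert discounted_total(100) == 90.0     # on the boundary: discount applies
    assert discounted_total(200) == 180.0    # well above: discount applies

test_discount_boundary()
print("all checks passed")
```

Seconds of feedback instead of a defect report weeks later – that is the economy the whole section is about.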
2. Fix defects immediately and burn defect management tools
If, like me years back, you are getting tired of filing, categorising, discussing, reporting and ordering defects, I have a very quick solution: fix defects as soon as you find them.
It is normal for a developer to fix a defect he finds in the code he is writing, as soon as he finds it, without having to log it; but as soon as the defect is found by a different individual (a tester, for example), apparently we need to start a strict logging process. Why? No idea, really. People sometimes say, “if you don’t do root cause analysis you don’t know what you are doing, hence you need to file the defects”, but in reality nothing stops you from doing root cause analysis when you find the defect, if you really want to.
What I am suggesting is that whoever finds a bug walks over to a developer responsible for the code and has a conversation. The outcome of that conversation (which in some cases can also involve a product owner) should be either “let’s fix it now” or “let’s forget about it forever”.
Fixing it now normally means that the developer is still fresh on the specific code that needs to be fixed – surely fresher than in 4 weeks, when he won’t even remember he ever wrote that code. Fixing it now means that the issue is gone and we don’t have to worry about it any longer; our customer will be thankful.
Forgetting about it forever means that it is not an issue worth fixing: it probably doesn’t threaten the value of the product, and the customer won’t care if we don’t fix it. Forgetting about it forever also means that we won’t carry a stinky dead fish around in a defect management tool, we won’t have to waste time re-discussing the same dead fish forever, and our customers will be happy that we are not wasting time but working on new features. If you decide to fix it, I’d also recommend you write an automatic test for it; this will make sure that if the issue happens again you’ll know straight away.
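That automatic test is just a test named after the issue that pins the fixed behaviour down. A sketch, with an invented bug (an off-by-one in a pagination helper) standing in for whatever you actually fixed:

```python
import math

def page_count(total_items, page_size):
    """Number of pages needed to show total_items, page_size per page.
    The original (buggy) version used total_items // page_size, which
    reported 2 pages for 21 items of 10 -- the fix rounds up instead."""
    return math.ceil(total_items / page_size)

def test_page_count_rounds_up():
    # Regression test for the off-by-one: 21 items over pages of 10
    # must give 3 pages, not 2. If the bug ever comes back, this fails
    # straight away, with no defect report needed.
    assert page_count(21, 10) == 3
    assert page_count(20, 10) == 2
    assert page_count(0, 10) == 0

test_page_count_rounds_up()
print("regression test passed")
```

The test replaces the defect record: instead of a ticket describing the bug, you keep an executable check that proves it stays fixed.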
I have encountered huge scepticism when suggesting that teams burn their defect management tools and fix just in time. Only very few seem to think this is possible. As a matter of fact, all my teams have been able to do this for the last 6 years, and nobody ever said, “I miss Jira and the beautiful bug charts”.
Obviously this approach is better suited to co-located development teams. I haven’t tried it yet with a geographically distributed team; I suggest you give it a try and let me know how it goes.
Playing with defects waste index:
Epidemic: 90% – The only places that don’t file and manage defects I have ever encountered are the places where I have worked and have changed the process. In the last couple of years I have heard of two other places that do something similar, but that’s just about it. The world seems to have a great time wasting money on filing, categorising, reporting and trending waste.
Damaging: 100% – Using defects for people’s appraisals is one of the worst practices I have ever experienced in my long career; the damage can be immense. The customer becomes irrelevant and people focus on gaming the system to their benefit. Logging and managing defects is extremely wasteful as well: it requires time and energy and can, among other things, endanger relationships between testers and developers. Trending and deducing release dates from defect density is plain idiotic when, with a little attention to defect prevention, defects would be so rare that trends would not exist.
Resistant: 90% – I had to leave one company because I dared doubt the defect management gospel and, like a heretic, I was virtually burned at the stake. In the second company where I tried to remove defect management tools, I succeeded after 2 years of trying – quite resistant. The third one is the one where people were happy to experiment, and as soon as they saw how much waste we were removing, it quickly became the new rule. I have had numerous discussions with people on the subject, and the general position is that defect management must be done through a tool and follow a rigid process.
A classic problem for testers in agile contexts is that they feel they are not listened to by developers. Testers, often rightly, warn developers against doing things because the consequences could be very bad, but in many cases developers don’t listen to them.
This is very upsetting, and testers find themselves lonely within an agile team because of it. They get frustrated, and if they keep on shouting about their needs they risk being alienated by their team members, becoming completely ineffective while growing dissatisfied with their job.
But, but, they are right in telling the developers what to do! WTF?
When working as a tester in an agile team, you’ve got to develop a skill that you didn’t really need that much before.
It is called influencing. Before, when you worked in your separate QA department or test team, you didn’t need it, because there was a test manager fighting the battles for you and deciding the test strategy to be applied.
Things have changed: you’ve got to become good at influencing.
This is what Gus did when he moved to an agile team as a tester many years ago.
First I tried barking orders, screaming and shouting, but it didn’t work at all, so I decided to adopt a different approach.
I started to really listen to what developers said, instead of listening only to find gaps in their thinking
I listened and listened and listened a bit more
Then I started asking questions, showing real interest in what they were doing and being mindful of their fears and feelings. I made sure they knew I was there to help and that we were all in the same boat
I started praising them when they did something good, for example thanking them for adding testability to the application. Things like “without Roberto’s design I would have spent weeks doing what I can do now in 10 minutes, thank you so much Roberto, you made my life better”
I started coaching them on how to test by testing with them. When they saw what testing really involved, they understood its importance and challenges and started asking interesting questions about it.
Who will developers listen to?
Now compare the developers’ reaction when faced with a suggestion raised by Gus to the one that Jack gets. Jack is a tester that uses the classic approach of “what the hell are you talking about? This is going to explode in production!”
Who do you think will be able to influence developers actions when something important for testing needs to be done?
Me? I always got the developers at least to listen to me, and a lot of the time we did it my way – unless somebody in the team had a better idea.
So, do you want to be Jack and keep on moaning about developers who don’t understand anything about testing?
If I were you, I’d take Gus’s approach and build your influence within your team. Start now: start listening.
I love golf. I am quite atrocious at it, but I still love the game.
According to legend, back in the 60’s, South African golfer Gary Player was playing a round with a friend. At the very start of the round, he sank two consecutive extremely long putts and got 2 birdies.
After his second long putt, his playing partner said: “Wow, you are lucky!”
Unfazed, Gary replied: “I am a great believer in luck; the harder I work, the luckier I get.”
What’s this got to do with the price of turnips in Termonfeckin, you might ask. Let me explain.
Just yesterday, I was discussing one of my previous blog posts with a tester I really admire and respect. The discussion was around whether developers can be taught testing and whether they really want or care to work with testers on improving their skills.
My experience says “Yes” and “Yes”, my interlocutor had a different opinion. Perfectly fine.
At a certain point he said, and I quote “you’ve been in a great position to have people with the right mindsets, eager to learn and conduct testing activities”
I heard that before, I actually heard it too many times, in different forms.
In the last 10 years I was told: “you are lucky to be in that situation” or “you were fortunate to have the right people around you” and also “you have had the fortune of not having my situation where developers don’t care about testing, blah, blah blah…” and just about any other way of saying that I WAS LUCKY.
Honestly, I don’t think I have been lucky. Could I be lucky every time? Isn’t luck about chance? I understand maths, and getting lucky every time sounds quite, let me think, unlikely.
I made my own luck.
I learned to treat developers with respect, show them my appreciation for their work, and empathise with their fears and expectations. In this context developers – or, for that matter, any category of people – become more interested in your needs, more willing to help, and more open to learning whatever you might be teaching.
If we keep on saying that developers can’t test, and that without testers the world is going to end because developers are not able to do their job properly, how do we expect them to listen to us and help us? First we shut them out of our little testing world, and then we expect them to want to learn what we do and help us; this is just not reasonable.
Funnily enough, we testers are the biggest moaners of them all, always complaining that the industry doesn’t value us like it values developers, yet we still don’t understand that empathy is important to get people on our side.
Don’t take my word on this, I am just a lucky man.
In my last stint as a tester, from October 2012 to January 2014, I helped my organisation at the time move from delivering once a month to delivering multiple times a day.
Let me first clarify that we didn’t move to multiple deliveries per day just for the fun of it, but because we needed it.
Your organisation might not yet know it needs this level of agility but more than likely it will at some stage in the future.
How did this transform the role of the testers within the organisation?
When I joined, I found scrum teams that delivered either once a month or once every 2 months. The teams had 3 different defect management databases, full of old and new defects. Testers were doing the following activities:
exploratory testing (~50-70%)
The batches were big, the exploratory sessions were long and found a lot of defects. The automation was not effective: it was slow and unpredictable, and its value was negative.
When I left
When I left, we were using kanban, delivering multiple times a day, defects were more or less a myth of the past, no defect management tool existed. Testers were doing the following activities:
Three amigos BDD sessions with customers and developers
Exploratory testing (~1-5%) – never longer than 10 minutes per card, more often than not reporting no defects
Pairing with developers
Coaching developers on testing
Writing automation (0%)
Talking to the customer and the team
Improving the system
Designing the product with the team and the customer
Helping define what to monitor in production
Any other valuable activity the team needed them to do
As you can see, the activities that before occupied 100% of testers’ time now occupy 1 to 5% of it.
Were testers busy before? Yes, absolutely
Were testers busy after? Yes, absolutely
Were testers complaining because they weren’t doing automation or enough exploratory testing? No, believe me. Most testers I worked with saw the new activities in the role as a learning opportunity, a chance to broaden their skills and become more valuable to any company.
If a tester didn’t want to adapt to the new reality and embrace the new ways of doing things, he would have been busy for 10 minutes a day (~2%) and would not have been useful to the team.
Did we get there with the touch of a magic wand? No, the end stage was the result of many experiments. It was, back then, a good recipe for that context at that time (it is continuously changing)
So, tester, what’s your strategy for working in a company that releases multiple times a day?
Let’s see what kind of tester you are: answer the 3 questions below and find your profile description at the bottom.
You start testing and the product has obvious problems even with the “happy path”, both cosmetic and very obvious bugs are everywhere, what do you do?
I send the product build back and don’t touch it because it is not testable, and it’s back to Facebook for the day!
I log 124 bugs in extreme detail, one for every problem I encounter. If somebody after 3 days asks me why I am not finished yet, I respond that I am logging bugs and testing is not complete.
I go talk to the developer who made the last check-in and ask him why he didn’t test his product. If he says, “because you are supposed to do that”, I talk to him about the incredible amount of waste he is creating by delaying feedback on the product and not testing it himself. I sit down with him, ask him to test the product, and we fix the issues we find together in a continuous loop until we are both quite happy with the result.
I then explain to him that if the feedback for each issue has to come from a different party (me), the delays in the detect-fix-retest loop will create a massive amount of waste. I encourage him to test his application before sending it to me and, while we are at it, I suggest that next time we test it and fix it together.
I tweet to the world how idiotic my developers are and include screenshots of the errors. I spend 2 hours defending my point on social media and catching my interlocutors on inappropriate use of terms to prove my point.
You have been sitting idle on your chair for over an hour because there is a delay in the build you want to test, what do you do?
Yippee I have time to browse the web and watch some videos on youtube! I hope their problem persists for a while, the new Game of Thrones season is awesome!
I look at my test plan, change it a bit, add some graphics, get a coffee and start reading that testing book I always wanted to read.
I have been sitting idle at my desk for almost an hour now, and even 10 minutes of idle time is waste: our customers are getting our product later than they should. They say there is a problem with the build; let me see if I can help out. I go and offer my help to my team’s developers. They might need me to rebuild a machine to check something, or to try something quickly to debug the problem. I get stuck in and, wow, I even learn something! I might even resolve the problem altogether, using my strong critical thinking skills and looking at the problem from a different perspective. I help, we solve the problem and, yes, now I can test it! Chances are that while troubleshooting the problem I have already gathered a lot of information about the product that will support and supplement my testing.
If developers are idiots, there is nothing I can do. I will spend this time blogging about how to become a better tester and how testing is the most important thing in the universe – much more complex than development, by the way.
Developers have nothing to do because the analyst has been sick and there are no refined user stories in the backlog, you just finished testing, what do you do?
BINGO! I might as well re-start House of Cards
I start complaining and moaning about how this company sucks and how much better it would be if I were the decision maker here. We need to hire more analysts and testers, of course! I then go back to writing more test cases, even if there is nothing to test. I might also create some fancy SQL query to extract lovely bug lists to show to management, with graphs and all.
Great opportunity to learn a bit more about how business analysis works. In the previous weeks I have been helping the analyst, and I think I can get the job done. I organise a meeting with the team and we start breaking down user stories; as soon as we have a couple ready, we can start working. I will replenish the backlog until the flow is re-established; using WIP limits will help me understand how much work I need to do on this.
There is nothing to do. If development activities are foreign to me, imagine how I feel about doing something even further away from my testing world. I am here to find bugs and provide information to stakeholders, not to write user stories; hire more analysts if you don’t want this to happen.
If most of your answers are 1, you are a slacker who happened to end up in testing. Why don’t you find a job you enjoy instead of trying to avoid the one you have?
If most of your answers are 2, you are an old-time tester who has spent the last 15 years in a bubble, far away from the evolving world. Nothing outside your little testing world is worth your consideration; after all, you know better than anybody else, so why should you improve?
If most of your answers are 3, keep on doing what you’re doing; you and your company will be OK.
If most of your answers are 4, I know who you are 🙂