I took my first steps in technology over 20 years ago. Excluding a year off I took in 2006, I have been continuously employed by many different companies, and I can say with certainty that I have worked on a “shitload of products”. I use that specific term with purpose; read on to find out why.
During all these years, being somebody that loves learning, I ended up working as a system analyst, developer, tester, business analyst, manager, leader, coach and change agent, plus wearing some other short-term hats.
If I exclude the last 6-7 years, when I had the skills and the ability to influence decisions, I can look back and say for sure that the products as initially envisioned, before any customer feedback was used, amounted to a shitload of useless stuff plus one or two good ideas that resolved real customer problems.
Very often the product envisioned was very similar to the product that we delivered.
Am I saying that in my first 13 years of work I mainly produced waste?
Pretty much YES.
Another thing I remember from those early years is that I was never in a team where we could say, “Thank god we are busy, but it’s not too bad: we can do our work, go home and have a balanced life.” Invariably there was pressure. We need all this by that date, come on! Work faster!
To me, it always felt like we were being told by Dilbert’s boss that if we shove some more paper in the printer it will print faster. Also, when we were not busy, managers would fill our capacity with a new shitload of useless products created for the purpose of making people sweat, very often with no thought for the customer whatsoever.
Am I saying that trying to deliver fixed-scope, fixed-date shitloads of products creates big problems for the workers who build them?
YES, and this time not even the “pretty much” is needed.
Another thing that I have noticed through the years is that without exception, companies older than 3 years have already built a shitload of products that are now impeding their ability to respond to change and survive. We will call this “shitload of legacy systems”.
This specific shitload is used as the excuse for not being able to change, as if saying “yes, sure, we can’t compete with the market and we will die soon, but it’s not our fault, it’s the fault of the legacy system (that, by the way, we built)”.
Am I saying that the shitload of products is also causing the slow death of the companies that created it in the first place?
YES, I am.
Next time you start a product, think twice before rewarding people for the delivery of all the scope, in time.
I have been helping organisations deliver products that matter to their customers as soon as possible; I am not in the business of delivering projects or shitloads of products.
I am researching new ways of demonstrating THE VALUE in MONEY of the “non-built shitload of products and features”. If you are interested, let’s do this together.
A defect is anything that threatens the value of the product.
Before we start, let’s agree that:
1. we don’t want defects that threaten the value of our product;
2. we want to give our customers as much value as possible at all times.
If you don’t agree with 1 and 2 then, don’t waste your time and stop reading now.
Testers are normally associated with finding defects. Some testers get very protective of the defects they find, and some developers can be very defensive about the defects they wrote. Customers don’t like defects, developers don’t like defects, product managers don’t like defects; let’s be honest, nobody likes defects besides some testers.
Why would that be? The reason is that the focus of a lot of testers is on detecting defects, and that’s what they get paid for in a lot of organisations. If you are a tester and love your defects, you might find this article disturbing, if you decide to proceed, do so at your own peril.
Defects are waste
Let’s be clear from the start: defects are waste. Waste of time in designing defective products, waste of time in coding defective routines, waste of time in detecting them, waste of time in fixing them, waste of time in re-checking them. Even writing this sentence took a good while, now think how much time it takes you to produce, detect, fix, recheck defects.
Our industry has developed a defect coping mechanism that we call defect management. It is based on a workflow of detecting => fixing => retesting. Throughout the years it has become best practice (sic) to have defect management tools and to log and track defects. Defect management approaches are generally cumbersome, slow and costly, and they tend to annoy people, whether you are a tester whose defect gets rejected, a developer who gets a by-design feature flagged as a defect, or a product manager who needs to spend time prioritising, charting and trending waste.
Another dangerous characteristic of defects is that they can be easily counted and you will always find a pointy haired manager that decides he is going to shed light on the health of his product and on the efficiency of his team by counting and drawing colorful waste charts.
But if we agree that defects are waste, why are we logging and tracking waste? Creating waste charts seems even more ridiculous. Wouldn’t it be easier to try to prevent them?
Oh, if only we could write the right thing first and reduce the number of defects we produce! I say we can, be patient and read on.
Software development teams have found many ways of being creative playing with defects, see some examples below.
Example 1: Reward waste
Some years back I was working on a business-critical project in one of 5 scrum teams. Let me clarify first that our scrum implementation was at best poor: we didn’t release every sprint and our definition of done was questionable.
Close to an important release, we found ourselves in a situation where we needed to fix a lot of defects before going into production. We had 2 weeks and our teams had collectively around 100 defects to go through. Our CTO was very supportive of the defect-killing initiative and was eager to deliver with zero defects. He put in place a plan that included free food all day and night and some pampering for the developers, who needed to focus 100% on defect resolution. Then he decided to give a prize to the team that would fix the highest number of defects.
I remember feeling frightened of the possible future consequences of this reward. I spoke to the CTO and told him that I would have preferred a prize for the team that produced the lowest number of defects rather than the one that fixed the most. Our CTO was a smart guy and understood the value proposition of my objection; he changed his approach and spoke to the teams about how not introducing defects in the first place is much more efficient than fixing them after they have been coded. Soon after the release, we started applying an approach that focussed on preventing defects rather than fixating on detection. We never had the problem of fixing 100 bugs in 2 weeks again.
Example 2: Defect metrics
In my previous waterfall life, I remember when management introduced a performance metric directly linked to defects. Testers were to be judged on the Defect Detection Index calculated as (Number of Defects detected during testing / Total number of Defects detected including production)*100. An index lower than 90 would mean nobody in the test team would get a bonus. Developers were individually judged on the number of defects found in their code by the testers and business analysts were individually judged on the number of defects found by the testers in their requirements.
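To make the perversity of the metric concrete, here is a minimal sketch of how it is computed; the function name is mine, the formula is the one described above.

```python
def defect_detection_index(found_in_testing: int, found_in_production: int) -> float:
    """Defect Detection Index: share of all defects caught before production.
    (defects found in testing / total defects, including production) * 100."""
    total = found_in_testing + found_in_production
    if total == 0:
        return 100.0  # no defects found anywhere: nothing escaped
    return found_in_testing * 100 / total

# 90 defects found in testing, 10 escaped to production: an index of
# exactly 90.0, right on the bonus threshold described above.
print(defect_detection_index(90, 10))  # 90.0
```

Notice how easy the number is to game: raising trivial “defects” during testing pushes the index up, which is exactly what happened in Example 3 below.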
Welcome to the battlefield!
The bug prioritisation meetings were battles where development managers argued any bug was a missed requirement, product managers argued that every bug was a coding error or a tester misunderstanding and the test lead (me) was simply shouted at and criticised for allowing his testers to go beyond the requirements and make use of their intellectual functions outside a scripted validation routine.
Going to that meeting was a nightmare, people completely forgot about our customers and simply wanted to get their metrics right. The amount of time we wasted arguing and defending our bonuses was astonishing. Our customers were normally unhappy because instead of focusing on value delivery we focussed on playing with defects, what a bunch of losers we were!
Our customers were very unhappy.
Example 3: Defects as non-conformance to requirements
In the same environment as Example 2, testers, in order to keep their Defect Detection Index high, used to raise large numbers of minor or insignificant “defects” that were in reality non-conformances to requirements. Funnily enough, such non-conformances were generally improvements.
Testers didn’t care whether they were requirement defects, code defects or even improvements; to them they were money, so they opened them. Improvements were filed as defects because they were in non-conformance to requirements. In most cases these were classified as low-severity and hence low-priority defects to keep the testers happy, and they still had to be filed, reviewed, prioritised and used in trends, metrics and other useless calculations.
This activity could easily take 30% of a tester’s time. Such defects would not only take testers’ time, but would also affect developers, product managers and business analysts, and eventually clutter the defect management tool.
Waste that creates waste, exponentially, how wonderful.
Example 4: Defect charts, trends and other utter nonsense
Every week I had to prepare defect charts for management. These were extracted from our monstrous defect management tool and presented in brightly coloured useless charts. My manager got so excited at the prospect of producing useless information that she started a pet project to create charts that were more colourful than the ones I presented. She used 2 developers for 6 weeks to create this thing that was meant to wow the senior executives.
In the process of defining the requirements for wowing the big guys, she introduced a few new even more useless charts and consolidated it into an aggregating dashboard. She called it the product quality health dashboard, I secretly called it the dump.
Nobody gave a damn about the dashboard, nobody used the numbers for any reason, nobody cared that they could configure it, but my boss was extremely proud of it. A legend says that she got a big raise because of it. If you play with rubbish, then you will start measuring rubbish and eventually you will end up doing data analysis and showing a consolidated view of the rubbish you store in your code.
How can we avoid this?
1. Focus on defect prevention
Many development teams focus on delivering features fast with little consideration for defect prevention. The theory is that testers (whose time is sometimes less expensive than developers’) will find the defects, which will be fixed later. This approach is a false economy; rework disrupts developers’ activities and harms the flow of value being delivered. There are many approaches available to development teams to reduce the amount of rework needed.
Do you want to prevent defects? You can try any combination of the below:
With BDD/ATDD/Specification by Example or another test-first approach, delivery teams test product owners’ assumptions through conversations and are more likely to produce the right feature the first time.
Fast feedback loops also allow for early removal of defects: automated unit and integration tests can help developers quickly identify potential issues and remove them before they get embedded into a feature.
Tight collaboration between business and delivery teams helps teams stay aligned with their real business goal and reduces the number of unnecessary features. This means less code and, as a consequence, fewer defects. Because your best piece of code is the one you won’t have to write.
Reducing complexity is very powerful in preventing defects: if we are able to break down a complex problem into many simple problems, we are likely to reduce the number of defects we introduce. Simple problems have simple solutions, and simple solutions have fewer defects than complex ones.
Good coding standards, like limiting the length of a method to a low number of lines, setting limits on cyclomatic complexity and applying good naming conventions to help readability, also have a positive impact on the number of defects produced.
Code reviews and pair programming greatly help reduce defects.
Refactoring at all times also reduces defects in the long run.
Moral of the story: If you don’t write defects, you will not have to fix them.
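As a minimal illustration of the test-first idea in the list above, here is a sketch in Python; the feature, the rule and all the names are invented. The point is that the expectation is written down from a conversation with the product owner before the code exists, so the disagreement happens up front instead of in a defect report.

```python
# A hypothetical rule agreed in conversation with the product owner:
# orders over 100 get a 10% discount, smaller orders get none.

def discounted_total(total: float) -> float:
    """Apply the discount rule the team agreed on before writing the code."""
    return round(total * 0.9, 2) if total > 100 else total

# These examples were written first, straight from the conversation,
# and fail until the rule above is implemented correctly.
assert discounted_total(200) == 180.0
assert discounted_total(100) == 100   # boundary agreed with the PO: no discount at exactly 100
print("all examples pass")
```

The boundary case (exactly 100) is precisely the kind of assumption that a conversation surfaces and a defect report only reveals weeks later.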
2. Fix defects immediately and burn defect management tools
If, like me years back, you are getting tired of filing, categorising, discussing, reporting and ordering defects, I have a very quick solution: fix defects as soon as you find them.
It is normal for a developer to fix a defect he finds in the code he is writing as soon as he finds it, without having to log it; but as soon as the defect is found by a different individual (a tester, for example), then apparently we need to start a strict logging process. Why? No idea, really. People sometimes say, “if you don’t do root cause analysis you don’t know what you are doing, hence you need to file the defects”, but in reality nobody stops you from doing root cause analysis when you find the defect, if you really want to.
What I am suggesting is that whoever finds a bug walks over to a developer responsible for the code and has a conversation. The outcome of that conversation (which in some cases can also involve a product owner) should be either “let’s fix it now” or “let’s forget about it forever”.
Fixing it now, means normally that the developer is fresh on the specific code that needs to be fixed, surely fresher than in 4 weeks, when he won’t even remember he ever wrote that code. Fixing it now means that the issue is gone and we don’t have to worry about it any longer, our customer will be thankful.
Forgetting about it forever means that it is not an issue worth fixing: it probably doesn’t threaten the value of the product and the customer won’t care if we don’t fix it. Forgetting about it forever also means that we won’t carry a stinky dead fish around in a defect management tool; we won’t have to waste time re-discussing the same dead fish forever, and our customers will be happy we are not wasting time but working on new features. If you decide to fix it, I’d also recommend you write an automated test for it; this will make sure that if the issue happens again you’ll know straight away.
I have encountered huge scepticism when suggesting to burn defect management tools and fix just in time. Only very few seem to think this is possible. As a matter of fact, all my teams have been able to do this for the last 6 years, and nobody ever said, “I miss Jira and the beautiful bug charts”.
Obviously this approach is better suited to co-located development teams; I haven’t tried it yet with a geographically distributed team, so I suggest you give it a try and let me know how it goes.
Playing with defects waste index:
Epidemic: 90% – The only places that don’t file and manage defects I have ever encountered are the places where I have worked and have changed the process. In the last couple of years, I have heard of two other places where they do something similar but that’s just about it. The world seems to have a great time in wasting money filing, categorising, reporting, trending waste.
Damaging: 100% – Using defects for people appraisal is one of the worst practices I have ever experienced in my long career; the damage can be immense. The customer becomes irrelevant and people focus on gaming the system to their benefit. Logging and managing defects is extremely wasteful as well: it requires time and energy and can, among other things, endanger relationships between testers and developers. Deducing release dates from defect density trends is plain idiotic, when with a little attention to defect prevention defects would be so rare that trends would not exist.
Resistant: 90% – I had to leave one company because I dared doubt the defect management gospel and, like a heretic, I was virtually burned at the stake. In the second company where I tried to remove defect management tools I was successful after 2 years of trying: quite resistant. The third one is the one where people were happy to experiment, and as soon as they saw how much waste we were removing it quickly became the new rule. I have had numerous discussions with people on the subject, and the general position is that defect management must be done through a tool and following a rigid process.
I’ve been practicing #NoEstimates with my teams for the last 2-3 years; if you want to know how it worked for us, read below.
First of all, an answer to all the people who over these years have been telling me, “Yes, but if you are breaking user stories down, then you are estimating”.
Not at all #1: There is a fundamental difference in the way we think when we are estimating a story and when we are trying to break it down into simpler ones. In the first case we focus on “how big is this?”; in the second case we focus on “let me understand this well so that I can identify simpler sub-entities”. The second exercise always results in improved knowledge; in the first case this is not necessarily true.
Secondly, an answer to the people who over these years have been telling me, “Breaking down user stories is dangerous, you will lose track of the big picture”.
Not at all #2: We haven’t lost the big picture in 2 and a half years. I am not saying that it is not possible, but I would argue that my factual experience in the field is more valid than your hypothetical worry. On top of that, there are 2 very underrated but positive effects that come from breaking down user stories into their smallest possible pieces. Number 1 is the fact that we end up with much simpler user stories; less complexity implies fewer errors, hence less rework. Number 2 is that smaller stories mean smaller size/complexity variability, hence higher predictability.
Now don’t get me wrong, breaking down user stories is not easy at first. It takes a lot of patience and perseverance, but once you get good at it, you will see that the benefits strongly outweigh the effort.
Finally, an answer to the people who kept telling me, “Predictability is important, #NoEstimates doesn’t make any sense”.
Not at all #3: Believe it or not, if you get good at #NoEstimates, thanks to the practices described in “Not at all #1 and #2” above, your forecasts become much more accurate. In fact, points 1 and 2 make your delivery more predictable.
NOTE for scrummers: I can understand the frustration of people trying to do #NoEstimates with points 1 and 2 while doing scrum. If you try to break down a large number of stories, the further into the future you go, the more you will stumble upon unknowns. I practice lean software development and break down user stories Just In Time. This allows me to work on the next most important thing only (I don’t need to fill up a sprint). We use the learnings from the stories we have just completed, reducing speculation on unknowns that will only be discovered later.
So I suggest:
1) Delay the breakdown of user stories until the last responsible moment
2) Stop predicting, be predictable
3) Have fun!
Developers and testers often complain that they are not recognised as real professional experts, I wonder why.
One category of respected professional experts is for example heart surgeons.
Now let’s imagine I go to talk to heart surgeon Mike:
<me> I need heart surgery
<Mike> Sure we will schedule it
<me> No no no, I need it now because I have a dinner appointment tonight
<Mike> OK, that’s fine we’ll do it quickly, I won’t even wash my hands
Would you trust Mike as a real professional expert? I wouldn’t.
Now let’s think about a common scenario where product owner Jeff comes to me (a developer) and says:
<Jeff> I need the new payment feature
<me> Sure we will schedule it
<Jeff> No no no, I need it by tomorrow, we need to comply with regulation
<me> OK that’s fine, I’ll hack it together for you and I won’t even write tests
How can I expect to be treated like a professional?
In the morning, Tim’s mum, Tina, spends an hour looking for rubbish in the house. When she finds some, she writes a note on a piece of paper describing the steps she followed when she found it, and sticks the note in one of 5 different drawers. The drawers are labelled “Severity 1”, “Severity 2” and so on down to “Severity 5”.
Tina and Tim’s uncle Bob meet every evening to discuss the daily findings, and after arguing for a good while they agree on how to file the notes written during the day into 5 folders labelled “Priority 1”, “Priority 2” and so on up to “Priority 5”.
Tim’s father, Oleg, picks up the folder labelled “Priority 1” every morning, reads the notes Tina wrote, follows the steps, finds the rubbish and throws it in the bin. He then adds an extra note to the piece of paper saying that he has thrown the rubbish in the bin. If the Priority 1 folder is empty, Oleg picks up the Priority 2 folder and follows the same process. Sometimes Oleg cannot find Tina’s rubbish even when following her written steps; in this case he adds a note saying “there is no rubbish there!”. Sometimes Tina takes it personally and Oleg sleeps in the spare room. Oleg barely ever opens the folders with Priority 3 to 5. Those folders are bursting with new and old notes from many years back.
Tina spends an hour a day rechecking the priority folders to see if her husband has added his notes. When she finds one, she follows her own steps to make sure that Oleg has removed the rubbish as he said he did. If he did, she shreds the original note; if the rubbish is still there she adds a note at the bottom saying, “the rubbish is still there, please go and pick it up!”, and spends some more time adding extra information on how to find the piece of rubbish. Sometimes, while she is tracking some old rubbish, she finds some new; in this case she creates another note and adds it to a drawer.
From time to time, uncle Bob calls around asking for rubbish reports and rubbish removal trends. On these occasions Tina and Oleg spend the night up counting and recounting, moving, sorting and drawing before they send a detailed rubbish status report.
Strangely enough, no matter how hard Tina and Oleg work at identifying, filing, removing, reporting and trending rubbish, the house is always full of shit and uncle Bob is always angry. Tim’s parents are obsessed with finding new rubbish, but they don’t pay much attention to family members dropping chewing gum on the floor, fish and chips wrapping paper in the sock drawer, beer cans in the washing machine, and so on. After all, Tina will find the rubbish, and following their foolproof process they will remove it!
One day Tim calls his parents and uncle and sits them down for a chat. He suggests that they stop throwing rubbish on the floor and messing up the house, so that they can reduce the amount of time spent finding, removing, filing and trending the rubbish. He also suggests getting rid of the folders labelled Priority 3, 4 and 5, as nobody has done any work on them, and after all the existence of a minuscule speck of dust on the bathroom floor is not going to make their life uncomfortable. Finally, he suggests that Tina call Oleg as soon as she finds some rubbish, so that he can remove it straight away without the need for notes.
Uncle Bob tells Tim that what he says is nonsense, because the family is following a best practice approach to rubbish management, and in agreement with Tina and Oleg he locks Tim up in a mental facility.
Everybody lived unhappy ever after.
Have I eventually gone bonkers and started talking nonsense?
No, I haven’t suddenly gone crazy. I am Tim and I want to change the world.
1. The conversations
The discovery helps reduce the unknowns and deliver software that matters.
2. Documenting the behaviour of an application
BDD scenarios (or tests) help document the behaviour of the application we are building, as they record the outcome of the conversations.
3. The tools
There are a number of tools that allow automating the execution of the scenarios.
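As an illustration (not an endorsement of any specific tool), a scenario that came out of a conversation can be wired to code in plain Python; tools like Cucumber map Gherkin steps to step definitions in much the same way. The scenario and every name in it are invented.

```python
# Scenario (agreed in a 3 Amigos conversation, names invented):
#   Given a customer with an empty basket
#   When she adds a product priced 25
#   Then her basket total is 25

class Basket:
    def __init__(self):
        self.items = []

    def add(self, price: float) -> None:
        self.items.append(price)

    @property
    def total(self) -> float:
        return sum(self.items)

# Each step below mirrors one line of the scenario above.
basket = Basket()          # Given: a customer with an empty basket
basket.add(25)             # When: she adds a product priced 25
assert basket.total == 25  # Then: her basket total is 25
print("scenario passes")
```

The automation is the least important part; the scenario’s value was created earlier, in the conversation that produced it.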
BDD tests assumptions through conversations, no other relationship exists.
“BDD is about conversations and collaboration to generate software that matters”
that means: the conversations generate the software that matters
“Wherever it makes sense, describing the behaviours in business language through scenarios and automating them will help you produce fast feedback and maintain the application as it grows.”
that means: using scenarios and good engineering practices you can be more effective
If you don’t do point 1 (the conversations) you can produce as many scenarios as you want, automate and run them continuously in a server farm bigger than Google’s but you are not getting much value and in my humble opinion you are not doing BDD.
Liz Keogh, whose contributions have strongly influenced the evolution of BDD, puts it very simply:
What BDD is NOT (for me)
A recurring problem I have encountered with teams starting to use BDD is the emergence of fallacies where teams conflate the problem that BDD is trying to resolve with other concerns, in particular, tools, artefacts and testing.
I am going to come clean straight away: I have been guilty of this for a good while and learned the hard way how mixing up concepts can be extremely dangerous. See some of my lessons learned below.
#Fallacy #1: BDD is Testing and automation
This is a very common problem. Often it originates when somebody (usually a tester) hears or reads something about BDD and starts using Gherkin and the BDD scenario style to write their tests. This usually has a honeymoon period in which benefits are reaped, because tests are now written in business language and hence readable and understandable by everybody. This seems to help communication, but in the long term it actually makes it worse, because testers and developers start communicating through scenarios written in Gherkin and stop talking. I personally did this many years ago. Hands up, I screwed up my team’s communication badly!
In some cases a decision is made to use BDD scenarios to replace automated tests. This creates confusion around which scenarios should be written for developing a behaviour (BDD) and which should be written for testing the application. Using all the boundary, invalid and special-case scenarios for development is not optimal: we are not looking for bugs, we are building an application based on its behaviours.
Very often, testers will push for the scenarios to be automated through the UI and run in an end-to-end, fully integrated environment. This generally creates a large, slow and unpredictable automation suite that is suited neither to BDD’s fast feedback loops and discovery nor to end-to-end integration testing.
When we conflate BDD with testing we create unnecessary confusion and end up with things that are neither regression tests nor BDD scenarios, and are unusable for their original purpose (the development of software that matters).
Do you want to avoid all these problems? Separate BDD from testing. They are 2 solutions to 2 completely different problems. Use the appropriate tools for each domain. Live happy.
#Fallacy #2: BDD scenarios are a communication tool
In some shops I have seen business analysts and product owners who had heard or read of BDD decide that they were going to formalise the requirements into BDD scenarios. I have even seen this approach suggested by a few people who do BDD training, but it is a recipe for disaster. The most important part of BDD is completely ignored, and the PO elegantly formalises his assumptions into BDD scenarios. Once a developer gets a BDD scenario, she delivers the PO’s assumptions into elegant code.
Tests by their nature cannot be ambiguous; there is no need for questions or conversations if requirements are defined in the form of tests. Some people brandish this as a great advantage; instead it is the death of deliberate discovery, and the birth of Assumption Driven Development(*).
Do you want to avoid this? Use 3 Amigos conversations as your communication tool. After you’re done, you will have all the information you need to formalise the findings of the conversations into BDD scenarios.
#Fallacy #3: We use Cucumber we are doing BDD (Feel free to replace Cucumber with Jbehave, SpecFlow, et cetera)
Some developers get very excited about neat tools, and the ones I mention above are quite cool. But using a tool and writing software through BDD scenarios in the absence of conversations is different from doing BDD. Again, in the absence of conversations, we inevitably end up doing Assumption Driven Development(*).
A similar non sequitur fallacy: “We use Jira, we are agile!”.
I am looking forward to the day in which I will feel ashamed of what I wrote above, because that day I will have learned something.
(*)Assumption Driven Development (ADD) = a person single-handedly builds a set of unquestionable assumptions about the behaviour of an application in the form of tests. This approach normally fails late, when, during exploratory testing, somebody questions the PO’s assumptions now baked into code.
I recently had a conversation with two colleagues where I was trying to explain the advantages of frequently delivering thin vertical slices of value, versus working on separated components and integrating at the end.
I have been familiar with this concept for so long that I generally take it for granted, but while explaining it I found myself at a loss, and only after a good while was I able to make my point. That conversation prompted me to write this article, so that I, or anybody else, can use it in the future for this purpose.
Let’s imagine a typical web application with a UI layer, a service layer and a database layer. Let’s consider the very common scenario where our developers are specialised and can do either UI, or services, or databases; no cross-skills exist.
There are generally two approaches to deliver this application:
1. Create component teams that define interfaces between the components and deliver each component in isolation.
2. Create a team that contains all 3 required skills and deliver small increments of value, i.e. small deliverables that include all 3 layers and are releasable.
The first option, believe it or not, is generally the one preferred by most developers and traditional software development managers. It gives the former the ability to use their main skill, and the latter the illusion of efficiency, as they think they are using the right skills for the right purpose.
Usually the dreams of component teams are shattered at integration time. Let’s imagine they deliver monthly sprints. At the end of a sprint, each team will believe their sprint is done, because everything works in their context. What happens in reality is that at integration time, what seemed to be perfectly defined in the interfaces is generally proven incorrect. Things don’t work, teams defend their component design and implementation, blame other teams for the integration failures, and nobody takes responsibility for the most important thing of all: the customer can’t use the value.
By the second sprint, generally the teams decide that they need an “integration testing sprint” [sic] to fix the issues that the teams couldn’t prevent by using predefined interfaces.
This is generally the first sign of the slow death of the company’s ability to deliver customer value.
Time goes by and teams become more entrenched in defending their component and blaming the other components for any failure. Integration testing becomes lengthy and painful and the customer becomes less and less important than defending the status of our component team.
Have you been there? I have, and I bet you have too.
There is a solution.
The secret is to shift from delivering a component to identifying a thin slice of value that cuts through all components and that the customer can use.
It seems easy, and once you get used to it, it actually is. The problem is the mind shift from systems to value. People are used to delivering systems, not to resolving problems. If you want to identify thin slices of value to deliver, you need to focus on resolving a problem for your customer, abstracting yourself as much as you can from the underlying systems.
The first thing you need to do is to create teams that have all the skills required for the full technology stack, so that each individual team can deliver customer value independently.
Each user story will deliver a thin slice of value, developers will collaborate and pair instead of communicating through interfaces, and ownership will sit with the value being delivered instead of with software components.
Working like this, integration issues will appear immediately, as soon as we connect the layers with the first slice of value. This in turn will help us avoid building too much software on wrong assumptions. The first thin slice might be challenging and take some time, but it will contain a huge amount of learning that will reduce the time to build the subsequent slices.
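To make the idea concrete, here is a minimal sketch of what a single thin slice might look like in code. The story, the names and the tiny “orders” schema are all hypothetical, invented for illustration; the point is that one user story (“a customer can see the status of an order”) touches all three layers end to end, with nothing speculative built in any of them.

```python
import sqlite3

# --- Database layer: only the schema this one slice needs ---
def init_db():
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, status TEXT)")
    conn.execute("INSERT INTO orders (id, status) VALUES (1, 'shipped')")
    conn.commit()
    return conn

# --- Service layer: one query, no speculative generality ---
def get_order_status(conn, order_id):
    row = conn.execute(
        "SELECT status FROM orders WHERE id = ?", (order_id,)
    ).fetchone()
    return row[0] if row else None

# --- UI layer: renders the result for the customer ---
def render_order_status(conn, order_id):
    status = get_order_status(conn, order_id)
    if status is None:
        return f"Order {order_id} not found"
    return f"Order {order_id}: {status}"

if __name__ == "__main__":
    conn = init_db()
    print(render_order_status(conn, 1))  # Order 1: shipped
```

Because all three layers are wired together from day one, any mismatch between them surfaces the moment the first slice is connected, not months later in an integration testing sprint.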
We can expand this same concept to product development.
Let’s imagine our company is an international online wholesale store where clothing shop owners can buy their stock. We now have an opportunity to expand our market by introducing jewellery to our products. The business owners are very excited because they believe the new product could increase our revenue by 30%, and they have identified suppliers of cheap stock.
Project Jewellery is starting! Everybody is excited.
After a short period of research we find out that customers living in the Democratic Republic of Regulationstan face restrictions on the quantity of jewels an individual shop can buy in a given period of time. The amount varies depending on the type of jewel (gold, silver, platinum). Ten other countries don’t allow watches to be bought in bulk and, to make things worse, in the country we live in, Paranoidland, regulation requires jewellery sellers to obtain and store Know Your Customer information for up to 10 years for jewels that contain diamonds.
We could design a complex system able to comply with all the limitations and regulations, implement it, and deliver it to be consumed by the whole world in six months. Or we could take one category of articles that doesn’t need to comply with our own country’s regulation, target a rich market that imposes no regulation, and deliver in a week.
The choice will depend on so many context specific factors that I cannot possibly imagine, so I won’t make it.
What I know for sure is that, if we choose the second option, there will be a massive difference in the outcome: after one week, our product will be receiving feedback from its customers. With that customer information available, we discover unknowns, and deciding what to deliver next becomes much easier.
What has this got to do with component teams and vertical slice teams, you might ask.
By delivering a vertical slice of value in a short period of time, we are able to evaluate it meaningfully and use the feedback to decide what and how to build in the next slice. More than likely, the first slice of value will give us enough information to make some of the slices we were planning to build less important, or even completely irrelevant.
By doing so, we are deliberately discovering the unknowns and using learning as our guide.
The feedback we receive after each thin slice is our compass: every time we release, we get new directions and make sure we are never too far off the path to success.