Is Agile Alive? Dead? Misunderstood?

Last Sunday, after reading multiple “Agile is Dead” articles, I posted this short update on LinkedIn:

[Screenshot of the LinkedIn update and its stats]

As you can see from the stats, in less than 7 days it received a lot of attention (I am not an influencer and my updates generally do not get this much feedback).

My contacts on LinkedIn include a lot of Agile and Lean coaches and, as expected, the message initially got some positive comments. Soon after, some agile detractors joined the conversation and made it much more interesting, as feedback from different perspectives generally enriches a conversation, adding dimensions that sometimes cannot be expressed by a biased mind.

I noticed 3 interesting trends in the messages.

  1. When agile is not driven by technology, agile fails
  2. When agile is driven by technology and not the business stakeholders, agile fails
  3. Agile is only useful to deliver something nobody wants quickly

The first 2 are extremely interesting: they say exactly opposite things, yet they both come to the same conclusion, “agile is dead”. I read and reread those messages and then I saw it.

If agile is driven by one part of the organisation, whichever it is, and trust is not built within the whole organisation, it will fail. Do it like this and agile is dead before you even start.

If you try to own something that will change your organisation and run with it, you had better make sure you share your vision, your responsibilities and your success with the rest of the organisation. How can you expect people outside your little world to want to follow you in this difficult change if they don’t know about it, understand it, own it and help you drive it? Agile/lean transformations are not driven by a department, they are driven by the whole.

And the result might be that you even stop talking about departments and only talk about the whole.

Now on to objection #3.

Agile is only useful to deliver something nobody wants quickly

I have seen this very often and honestly it makes me sad. A lot of scrum implementations have a Product Owner who is seen as the heart of the product, the person who understands the vision of the product and takes responsibility for the important decisions about its future: strategy, prioritization and so on.

If you look at it this way, you might think that the PO is a single point of failure: what if he is not able to make good decisions? What about his biases? Is he a dictator?

As an agile coach I make sure that any product owner who works with me has the tools for making good decisions. He will know how to manage flow using WIP limits, he will become proficient in UX techniques, he will learn how to gather, monitor and use feedback from his customers, he will understand the importance of small experiments, and he will be aware of cost of delay; when prioritising his features and user stories he will have access to many advanced prioritization techniques.
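One of those prioritization techniques, shown here purely for illustration, is CD3 (Cost of Delay Divided by Duration). The sketch below shows the idea in a few lines of Python; the features, cost-of-delay figures and durations are entirely made up.

```python
# Minimal CD3 (Cost of Delay Divided by Duration) sketch.
# All features, cost-of-delay figures and durations are invented for illustration.

features = [
    # (name, cost of delay per week, estimated duration in weeks)
    ("One-click checkout", 8_000, 4),
    ("Regulatory consent banner", 12_000, 2),
    ("Dark mode", 1_000, 3),
]

def cd3(feature):
    """Cost of delay divided by duration: the higher the score, the sooner we deliver it."""
    _, cost_of_delay, duration = feature
    return cost_of_delay / duration

for name, cost_of_delay, duration in sorted(features, key=cd3, reverse=True):
    print(f"{name}: CD3 = {cost_of_delay / duration:,.0f} per week of work")
```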

Being agile does not automatically mean ignoring lean startup, lean UX or research. No, that is not being agile; that is being a scrum master after 2 days of training.

 


Stop building a Shitload of products, you are killing your company

[Image: “I dream of organisations where people are empowered to deliver products that matter. Joyful organisations.”]

I took my first steps in technology over 20 years ago. If we exclude the year off I took in 2006, I have always been employed, in many different companies, and I can say with certainty that I worked on a “shitload of products”. I use that specific term on purpose; read on to find out why.

During all these years, being somebody that loves learning, I ended up working as system analyst, developer, tester, business analyst, manager, leader, coach, change agent, plus some other short term hats.

If I exclude the last 6-7 years, in which I had the skills and the ability to influence decisions, I can go back and say for sure that the products that were initially envisioned, before any customer feedback was used, can be called a shitload of useless stuff plus one or two good ideas that solved real customer problems.

Very often the product envisioned was very similar to the product that we delivered.

Am I saying that in my first 13 years of work I mainly produced waste?

Pretty much YES.

Another thing I remember from those early years is that I was never in a team where we could say, “oh thank god, we are busy but it’s not too bad; we can do our work, go home and have a balanced life”. Invariably there was pressure: we need all this by that date, come on! Work faster!

To me, it always felt like we were being told by Dilbert’s boss that if we shove some more paper in the printer it will print faster. Also, when we were not busy, managers would fill our capacity with a new shitload of useless products created just to make people sweat, very often with no thought for the customer whatsoever.

Am I saying that trying to deliver fixed-scope, fixed-date shitloads of products creates big problems for the workers who build them?

YES, and this time not even the “pretty much” is needed.

Another thing that I have noticed through the years is that without exception, companies older than 3 years have already built a shitload of products that are now impeding their ability to respond to change and survive. We will call this “shitload of legacy systems”.

This specific shitload is used as the excuse for not being able to change, as if to say “yes sure, we can’t compete with the market and we will die soon, but it’s not our fault, it’s the fault of the legacy system (that, by the way, we built)”.

Am I saying that these shitloads of products are also causing the slow death of the companies that created them in the first place?

Yes

Next time you start a product, think twice before rewarding people for delivering all the scope on time.

I have been helping organisations deliver products that matter to their customers as soon as possible; I am not in the business of delivering projects or shitloads of products.

I am researching new ways of demonstrating THE VALUE in MONEY of the “shitload of products and features not built”. If you are interested, let’s do this together.

 

 

Ultimate Guide to Reducing the Amount of Defects and other Waste in your Product

What’s a defect? I like this definition.

A defect is anything that threatens the value of the product.

Before we start, let’s agree that:

  1. we don’t want defects that threaten the value of our product
  2. we want to give our customers as much value as possible at all times.

If you don’t agree with 1 and 2, then don’t waste your time and stop reading now.

Software defects aka Bugs

Testers are normally associated with finding defects. Some testers get very protective of the defects they find and some developers can be very defensive about the defects they wrote. Customers don’t like defects, developers don’t like defects, product managers don’t like defects; let’s be honest, nobody likes defects besides some testers.

Why would that be? The reason is that the focus of a lot of testers is on detecting defects, and that’s what they get paid for in a lot of organisations. If you are a tester and love your defects, you might find this article disturbing; if you decide to proceed, do so at your own peril.

 

Defects are waste

Let’s be clear from the start: defects are waste. Waste of time in designing defective products, waste of time in coding defective routines, waste of time in detecting them, waste of time in fixing them, waste of time in re-checking them. Even writing this sentence took a good while, now think how much time it takes you to produce, detect, fix, recheck defects.

Our industry has developed a defect coping mechanism that we call defect management. It is based on a workflow of detecting => fixing => retesting. Throughout the years it has become best practice (sic) to have defect management tools and to log and track defects. Defect management approaches are generally cumbersome, slow and costly, and they tend to annoy people, no matter whether you are a tester whose defect gets rejected, a developer whose by-design feature gets flagged as a defect, or a product manager who needs to spend time prioritising, charting and trending waste.

Another dangerous characteristic of defects is that they can be easily counted, and you will always find a pointy-haired manager who decides he is going to shed light on the health of his product and on the efficiency of his team by counting and drawing colourful waste charts.

But if we agree that defects are waste, why are we logging and tracking waste? Creating waste charts seems even more ridiculous. Wouldn’t it be easier to try to prevent defects in the first place?

Oh, if only we could write the right thing first and reduce the number of defects we produce! I say we can, be patient and read on.

Software development teams have found many creative ways of playing with defects; see some examples below.

Example 1: Reward waste

Reward for the wrong reason

Some years back I was working on a business-critical project in one of 5 scrum teams. Let me clarify first that our scrum implementation was at best poor: we didn’t release every sprint and our definition of done was questionable.

Close to an important release, we found ourselves in a situation where we needed to fix a lot of defects before going into production. We had 2 weeks and our teams had collectively around 100 defects to go through. Our CTO was very supportive of the defect-killing initiative and he was eager to deliver with zero defects. He put in place a plan that included free food all day and night and some pampering for the developers who needed to focus 100% on defect resolution. Then he decided to give a prize to the team that fixed the highest number of defects.

I remember feeling frightened of the possible future consequences of this reward. I spoke to the CTO and told him that I would have preferred a prize for the team that had introduced the fewest defects rather than the one that fixed the most. Our CTO was a smart guy and understood the value proposition of my objection; he changed his approach and spoke to the teams about how not introducing defects in the first place is much more efficient than fixing them after they have been coded. Soon after the release, we started applying an approach that focussed on preventing defects rather than fixating on detection. We never had the problem of fixing 100 bugs in 2 weeks again.

A typical defect prioritization meeting

Example 2: Defect metrics

In my previous waterfall life, I remember when management introduced a performance metric directly linked to defects. Testers were to be judged on the Defect Detection Index calculated as (Number of Defects detected during testing / Total number of Defects detected including production)*100. An index lower than 90 would mean nobody in the test team would get a bonus. Developers were individually judged on the number of defects found in their code by the testers and business analysts were individually judged on the number of defects found by the testers in their requirements.
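Just to make the metric concrete, here is that calculation as a tiny sketch; the numbers are invented.

```python
# Defect Detection Index as defined above: the share of all known defects
# that were caught during testing rather than in production. Numbers are invented.
found_in_testing = 85
found_in_production = 15

ddi = found_in_testing / (found_in_testing + found_in_production) * 100
print(f"DDI = {ddi:.0f}")  # 85, below the 90 threshold, so no bonus for the test team
```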

Welcome to the battlefield!

The bug prioritisation meetings were battles where development managers argued any bug was a missed requirement, product managers argued that every bug was a coding error or a tester misunderstanding and the test lead (me) was simply shouted at and criticised for allowing his testers to go beyond the requirements and make use of their intellectual functions outside a scripted validation routine.

Going to that meeting was a nightmare: people completely forgot about our customers and simply wanted to get their metrics right. The amount of time we wasted arguing and defending our bonuses was astonishing. Our customers were normally unhappy because instead of focusing on value delivery we focussed on playing with defects. What a bunch of losers we were!

Our customers were very unhappy.

Example 3: Defects as non-conformance to requirements

Let the nitpicking season start

In the same environment as Example 2, testers, in order to keep their Defect Detection Index high, used to raise large numbers of minor or insignificant “defects” that were in reality non-conformances to requirements. Funnily enough, such non-conformances were generally improvements.

 

Testers didn’t care whether they were requirement defects, code defects or even improvements; to them they were money, so they opened them. Improvements were filed as defects because they did not conform to the requirements. In most cases these were considered low severity and hence low priority defects, to keep the testers happy, and they still had to be filed, reviewed, prioritised and used in trends, metrics and other useless calculations.

This activity could easily take 30% of a tester’s time. Such defects would not only take testers’ time, but would also affect developers, product managers and business analysts, and eventually clutter the defect management tool.

Waste that creates waste, exponentially, how wonderful.

A colourful dump

Example 4: Defect charts, trends and other utter nonsense

Every week I had to prepare defect charts for management. These were extracted from our monstrous defect management tool and presented in brightly coloured useless charts. My manager got so excited at the prospect of producing useless information that she started a pet project to create charts that were more colourful than the ones I presented. She used 2 developers for 6 weeks to create this thing that was meant to wow the senior executives.

In the process of defining the requirements for wowing the big guys, she introduced a few new, even more useless charts and consolidated them into an aggregating dashboard. She called it the product quality health dashboard; I secretly called it the dump.

Nobody gave a damn about the dashboard, nobody used the numbers for any reason, nobody cared that they could configure it, but my boss was extremely proud of it. A legend says that she got a big raise because of it. If you play with rubbish, then you will start measuring rubbish and eventually you will end up doing data analysis and showing a consolidated view of the rubbish you store in your code.

How can we avoid this?

1. Focus on defect prevention

Many development teams focus on delivering features fast with little consideration for defect prevention. The theory is that testers (whose time is sometimes less expensive than developers’) will find the defects, which will be fixed later. This approach is a false economy; rework disrupts developers’ activities and harms the flow of value being delivered. There are many approaches available to development teams to reduce the amount of rework needed.

Do you want to prevent defects? You can try any combination of the below:

  1. With BDD/ATDD/Specification by Example or another test-first approach, delivery teams test product owners’ assumptions through conversations and are more likely to produce the right feature the first time.
  2. Fast feedback loops also allow for early removal of defects: automated unit and integration tests can help developers quickly identify potential issues and remove them before they get embedded into a feature (see the sketch after this list).
  3. Tight collaboration between business and delivery teams helps teams stay aligned with their real business goal and reduces the number of unnecessary features. This means less code and, as a consequence, fewer defects. Because your best piece of code is the one you won’t have to write.
  4. Reducing complexity is very powerful in preventing defects: if we are able to break down a complex problem into many simple problems, we are likely to reduce the number of defects we introduce. Simple problems have simple solutions, and simple solutions have fewer defects than complex ones.
  5. Good coding standards, for example limiting the length of a method to a low number of lines, setting limits on cyclomatic complexity and applying good naming conventions to help readability, also have a positive impact on the number of defects produced.
  6. Code reviews and pair programming greatly help reduce defects.
  7. Refactoring at all times also reduces defects in the long run.
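As a tiny illustration of points 1 and 2, this is roughly what a test-first check can look like in practice; the loyalty-points rule and the function name below are hypothetical, agreed in a conversation before any production code is written.

```python
# Minimal test-first sketch (pytest). The tests capture the agreed behaviour
# before the production code exists; the loyalty-points rule is hypothetical.

def loyalty_points(order_total: float) -> int:
    """One point per whole euro spent, doubled for orders of 100 euro or more."""
    points = int(order_total)
    return points * 2 if order_total >= 100 else points

def test_one_point_per_whole_euro():
    assert loyalty_points(42.70) == 42

def test_points_double_for_large_orders():
    assert loyalty_points(100) == 200
```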

Moral of the story: If you don’t write defects, you will not have to fix them.

2. Fix defects immediately and burn defect management tools

If, like me years back, you are getting tired of filing, categorising, discussing, reporting and ordering defects, I have a very quick solution: fix the defects as soon as you find them.

It is normal for a developer to fix a defect he finds in the code he is writing as soon as he finds it without having to log it, but as soon as the defect is found by a different individual (a tester for example) then apparently we need to start a strict logging process. Why? No idea really. People sometimes say: “if you don’t do root cause analysis you don’t know what you are doing, hence you need to file the defects”, but in reality nobody stops you from doing root cause analysis when you find the defect if you really want.

What I am suggesting is that whoever finds a bug walks over to a developer responsible for the code and has a conversation. The outcome of that conversation (which in some cases can also involve a product owner) should be either “let’s fix it now” or “let’s forget about it forever”.

Fixing it now normally means that the developer is fresh on the specific code that needs to be fixed, surely fresher than in 4 weeks’ time, when he won’t even remember he ever wrote that code. Fixing it now means that the issue is gone and we don’t have to worry about it any longer; our customer will be thankful.

Forgetting about it forever means that it is not an issue worth fixing: it probably doesn’t threaten the value of the product and the customer won’t care if we don’t fix it. Forgetting about it forever also means that we won’t carry a stinky dead fish around in a defect management tool. We won’t have to waste time re-discussing the same dead fish forever in the future, and our customers are happy we are not wasting time but working on new features. If you decide to fix it, I’d also recommend you write an automated test for it; this will make sure that if the issue happens again you’ll know straight away.
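For example, here is a sketch of such a test, assuming a hypothetical rounding defect that was just fixed on the spot:

```python
# Hypothetical example: a regression test pinned to a defect we just fixed,
# so that if the rounding issue ever comes back we find out immediately.
from decimal import Decimal, ROUND_HALF_UP

def to_cents(amount: str) -> int:
    """Convert a monetary string to integer cents, rounding half up."""
    return int(Decimal(amount).quantize(Decimal("0.01"), rounding=ROUND_HALF_UP) * 100)

def test_half_cent_amounts_round_up_not_down():
    # This was the defect: 2.005 used to come back as 200 cents instead of 201.
    assert to_cents("2.005") == 201
```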

I have encountered huge scepticism when suggesting to burn defect management tools and fix just in time. Only very few seem to think this is possible. As a matter of fact all my teams were able to do this for the last 6 years and nobody ever said, “I miss Jira and the beautiful bug charts”.

Obviously this approach is better suited to co-located development teams; I haven’t tried it yet with a geographically distributed team. I suggest you give it a try and let me know how it goes.

Playing with defects waste index:


Epidemic: 90% – The only places that don’t file and manage defects I have ever encountered are the places where I have worked and changed the process. In the last couple of years, I have heard of two other places that do something similar, but that’s just about it. The world seems to have a great time wasting money on filing, categorising, reporting and trending waste.

Damaging: 100% – Using defects for appraising people is one of the worst practices I have ever experienced in my long career; the damage can be immense. The customer becomes irrelevant and people focus on gaming the system to their benefit. Logging and managing defects is extremely wasteful as well: it requires time and energy and can, among other things, endanger relationships between testers and developers. Trending defect density and deducing release dates from it is plain idiotic, when with a little attention to defect prevention defects would be so rare that trends would not exist.

Resistant: 90% – I had to leave one company because I dared doubt the defect management gospel and, like a heretic, I was virtually burned at the stake. In the second company, where I tried to remove defect management tools, I succeeded after 2 years of trying: quite resistant. The third one is the one where people were happy to experiment, and as soon as they saw how much waste we were removing it quickly became the new rule. I have had numerous discussions with people on the subject and the general position is that defect management must be done through a tool and follow a rigid process.

Because “that’s the way we do things here!”

Recommended Reading

Lean Software Development: An Agile Toolkit (Mary and Tom Poppendieck)

https://mysoftwarequality.wordpress.com/2015/05/06/little-tim-and-the-messy-house/

https://mysoftwarequality.wordpress.com/2013/09/10/how-i-stopped-logging-bugs-and-started-living-happy/

This is a reviewed and improved version of an article I first wrote in 2015 (old version here)

#NoEstimates: a simple experience report

I’ve been practicing #NoEstimates with my teams for the last 2-3 years. If you want to know how it worked for us, read below.

First of all, an answer to all the people who over these years have been telling me: “Yes, but if you are breaking user stories down, then you are estimating”.

Not at all #1: There is a fundamental difference in the way we think when we are estimating a story and when we are trying to break it down into simpler ones. In the first case we focus on “how big is this?”; in the second case we focus on “let me understand this well so that I can identify simpler sub-entities”. The result of the second exercise is improved knowledge; in the first case that is not necessarily true.

Secondly, an answer to the people who over these years have been telling me: “Breaking down user stories is dangerous, you will lose track of the big picture”.

Not at all #2: We haven’t lost the big picture in 2 and a half years. I am not saying that it is not possible, but I would argue that my factual experience in the field is more valid than your hypothetical worry. And on top of that, there are 2 very underrated but positive effects that come from breaking down user stories into their smallest possible pieces. Number 1 is the fact that we end up with much simpler user stories; less complexity implies fewer errors and hence less rework. Number 2 is that smaller stories mean smaller size/complexity variability and hence higher predictability.

Now don’t get me wrong, breaking down user stories is not easy at first. It takes a lot of patience and perseverance, but once you get good at it, you will see that the benefits strongly outweigh the effort.

Finally, an answer to the people who kept telling me: “Predictability is important, #NoEstimates doesn’t make any sense”.

Not at all #3: Believe it or not, if you get good at #NoEstimates then, thanks to the practices described in “Not at all #1 and #2” above, your forecasts become much more accurate. In fact, points 1 and 2 make your delivery more predictable.
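To make “predictable” concrete: once stories are small and similar in size, you can forecast from your own delivery data instead of estimating. Below is a minimal sketch of one common way to do this, a Monte Carlo forecast based on historical weekly throughput; it is an illustration, not a prescription, and the numbers are invented.

```python
# Minimal throughput-based forecast (Monte Carlo): resample past weekly throughput
# to estimate how many weeks the remaining stories might take. Numbers are invented.
import random

weekly_throughput = [4, 6, 3, 5, 7, 4, 5, 6]   # stories finished in each past week
remaining_stories = 40
simulations = 10_000

def weeks_to_finish() -> int:
    done, weeks = 0, 0
    while done < remaining_stories:
        done += random.choice(weekly_throughput)
        weeks += 1
    return weeks

results = sorted(weeks_to_finish() for _ in range(simulations))
p50 = results[len(results) // 2]
p85 = results[int(len(results) * 0.85)]
print(f"50% of simulations finish within {p50} weeks, 85% within {p85} weeks")
```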

NOTE for scrummers: I can understand the frustration of people trying to do #NoEstimates with points 1 and 2 while doing scrum. If you try to break down a large number of stories, the further into the future you go the more you will stumble upon unknowns. I practice lean software development and break down user stories Just In Time. This allows me to work on the next most important thing only (I don’t need to fill up a sprint). We use the learnings from the stories we have just completed to reduce speculation about unknowns that will be discovered later.

So I suggest:

1) Delay the breakdown of user stories until the last responsible moment
2) Stop predicting, be predictable
3) Have fun!

If I want to be treated like a professional I should act like one

[Image: a drunk surgeon]
Developers and testers often complain that they are not recognised as real professional experts. I wonder why.
One category of respected professional experts is, for example, heart surgeons.

Now let’s imagine I go and talk to heart surgeon Mike and say:

<me> I need heart surgery
<Mike> Sure we will schedule it
<me> No no no, I need it now because I have a dinner appointment tonight
<Mike> OK, that’s fine we’ll do it quickly, I won’t even wash my hands

Would you trust Mike as a real professional expert? I wouldn’t.

Now let’s think about a common scenario where Product Owner Jeff comes to me (a developer) and says:

<Jeff> I need the new payment feature
<me> Sure we will schedule it
<Jeff> No no no, I need it by tomorrow, we need to comply with regulation
<me> OK that’s fine, I’ll hack it together for you and I won’t even write tests

How can I expect to be treated like a professional?

Thanks for the idea to Marcello Duarte (@_md)

 

Little Tim and the messy house

The messy kitchen

A cute little boy, Tim, lives in a messy house.

In the morning, Tim’s mum, Tina, spends an hour looking for rubbish in the house. When she finds some, she writes a note on a piece of paper describing the steps she followed to find it, and sticks the note in one of 5 different drawers. Each drawer is labelled “Severity 1”, “Severity 2” and so on down to “Severity 5”.

Tina and Tim’s uncle Bob meet every evening to discuss the daily findings and, after arguing for a good while, they agree on how to file the notes written during the day into 5 folders labelled “Priority 1”, “Priority 2” and so on up to “Priority 5”.

Every morning Tim’s father, Oleg, picks the folder labelled “Priority 1”, reads the notes Tina wrote, follows the steps, finds the rubbish and throws it in the bin. He then writes an extra note on the piece of paper saying that he has thrown the rubbish in the bin. If the Priority 1 folder is empty, Oleg picks the Priority 2 folder and follows the same process. Sometimes Oleg cannot find Tina’s rubbish even when following her written steps; in this case he adds a note saying “there is no rubbish there!”. Sometimes Tina takes it personally and Oleg sleeps in the spare room. Oleg barely ever opens the folders with Priority 3 to 5. Those folders are bursting with new and old notes going back many years.

Tina spends an hour a day rechecking the Priority folders to see if her husband has added his notes. When she finds one, she follows her own steps to make sure that Oleg has removed the rubbish from where it was, as he said he did. If he did, she shreds the original note; if the rubbish is still there she adds a note at the bottom saying, “the rubbish is still there, please go and pick it up!”. She then spends some more time adding extra information on how to find the piece of rubbish. Sometimes, while she is tracking some old rubbish, she finds some new; in this case she creates another note and adds it to a drawer.

For each piece of rubbish, a report was filed neatly

From time to time uncle Bob calls around asking for rubbish reports and rubbish removal trends. On these occasions Tina and Oleg spend the night up counting and recounting, moving, sorting and drawing before they send a detailed rubbish status report.

Strangely enough, no matter how hard Tina and Oleg work at identifying, filing, removing, reporting and trending rubbish, the house is always full of shit and uncle Bob is always angry. Tim’s parents are obsessed with finding new rubbish, but they don’t pay much attention to family members dropping chewing gum on the floor, fish and chips wrapping paper in the socks drawer, beer cans in the washing machine and so on. After all, Tina will find the rubbish and, following their foolproof process, they will remove it!

One day Tim calls his parents and uncle and sits them down for a chat. He suggests that they stop throwing rubbish on the floor and messing up the house, so that they can reduce the amount of time spent finding, removing, filing and trending rubbish. He also suggests getting rid of the folders labelled Priority 3, 4 and 5, as nobody has done any work on them and, after all, the existence of a minuscule speck of dust on the bathroom floor is not going to make their life uncomfortable. He also suggests that Tina call Oleg as soon as she finds some rubbish, so that he can remove it straight away without the need for notes.

Uncle Bob tells Tim that what he says is nonsense, because the family are following a best-practice approach to rubbish management and, in agreement with Tina and Oleg, locks him up in a mental facility.

Everybody lived unhappily ever after.

Have I eventually gone bonkers and started talking nonsense?

No, I haven’t suddenly gone crazy. I am Tim and I want to change the world.

BDD is – BDD is not

[Image: Socrates]
Hang on, this is not the real… Ah, OK, it’s a joke 🙂

 

“I’m the smartest man in Athens because I know that I know nothing.” —Socrates 470-399 BC

BDD stands for Behaviour Driven Development

What BDD is (for me)

1. Conversations

BDD is about conversations

The conversations help us understand what we are trying to build and identify the behaviours of our application

The conversations help us share the knowledge about what we are building

Through the conversations we deliberately discover the behaviour of what we are building and remove some of our first order ignorance

BDD uses continuous feedback for deliberate discovery

The discovery helps reduce the unknowns and deliver software that matters

2. Documenting the behaviour of an application

BDD scenarios (or tests) help document the behaviour of the application we are building as they document the outcome of the conversations

3. The tools

There are a number of tools that allow automating the execution of the scenarios.
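To show how a scenario coming out of a conversation can be both living documentation and an executable check, here is a minimal sketch using behave, a Python BDD tool; the feature, the free-delivery rule and the step wording are all hypothetical.

```python
# features/free_delivery.feature (hypothetical, captured from a 3 Amigos conversation):
#
#   Feature: Free delivery
#     Scenario: Orders above the threshold ship for free
#       Given a basket worth 55 euro
#       When the customer checks out
#       Then the delivery charge is 0 euro
#
# features/steps/free_delivery_steps.py -- behave step definitions for the scenario above.
from behave import given, when, then

FREE_DELIVERY_THRESHOLD = 50
DELIVERY_CHARGE = 4.99

@given("a basket worth {amount:d} euro")
def given_a_basket(context, amount):
    context.basket_total = amount

@when("the customer checks out")
def when_the_customer_checks_out(context):
    context.charge = 0 if context.basket_total >= FREE_DELIVERY_THRESHOLD else DELIVERY_CHARGE

@then("the delivery charge is {charge:d} euro")
def then_the_delivery_charge_is(context, charge):
    assert context.charge == charge
```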

4. Testing

BDD tests assumptions through conversations, no other relationship exists.

My conclusion:

“BDD is about conversations and collaboration to generate software that matters”

that means: the conversations generate the software that matters

“Wherever it makes sense, describing the behaviours in business language through scenarios and automating them will help you produce fast feedback and maintain the application as it grows.”

that means: using scenarios and good engineering practices you can be more effective

If you don’t do point 1 (the conversations) you can produce as many scenarios as you want, automate and run them continuously in a server farm bigger than Google’s but you are not getting much value and in my humble opinion you are not doing BDD.

Liz Keogh, whose contributions have strongly influenced the evolution of BDD, puts it very simply:

“Having conversations is more important than capturing conversations is more important than automating conversations.”

What BDD is NOT (for me)

A recurring problem I have encountered with teams starting to use BDD is the emergence of fallacies where teams conflate the problem that BDD is trying to resolve with other concerns, in particular, tools, artefacts and testing.

I am going to come clean straight away: I was guilty of this for a good while and learned the hard way how mixing up concepts can be extremely dangerous. See some of my lessons learned below.

#Fallacy #1: BDD is testing and automation

This is a very common problem. It often originates when somebody (usually a tester) hears or reads something about BDD and starts using Gherkin and the BDD scenario style to write their tests. This usually has a honeymoon period in which benefits are reaped because tests are now written in business language, hence readable and understandable by everybody. This seems to help communication, but in the long term it actually makes it worse, because testers and developers start communicating through scenarios written in Gherkin and stop talking. I personally did this many, many years ago. Hands up, I screwed up my team’s communication badly!

In some cases a decision is made to use BDD scenarios to replace automation tests. This creates confusion about which scenarios should be written for developing a behaviour (BDD) and which scenarios should be written for testing the application. Using all the boundary, invalid and special-case scenarios for development is not optimal: we are not looking for bugs, we are building an application based on its behaviours.

Very often, testers will push for having the scenarios automated through the UI and run in an end-to-end, fully integrated environment. This generally creates a large, slow and unpredictable automation suite that is suited neither to BDD’s fast feedback loops and discovery nor to end-to-end integration testing.

When conflating BDD with testing we create unnecessary confusion and end up with things that are neither regression tests nor BDD scenarios and are unusable for their original purpose (development of software that matters).

Do you want to avoid all these problems? Separate BDD from testing. They are 2 solutions to 2 completely different problems. Use the appropriate tools for each domain. Live happy.

#Fallacy #2: BDD scenarios are a communication tool

In some shops I have seen business analysts and product owners who had heard or read of BDD decide that they were going to formalise the requirements into BDD scenarios. I have seen this approach suggested by a few people who do BDD training, but it is a recipe for disaster. The most important part of BDD is completely ignored: the PO elegantly formalises his assumptions into BDD scenarios, and once a developer gets a scenario, she turns the PO’s assumptions into elegant code.

Tests by their nature cannot be ambiguous: there is no need for questions or conversations if requirements are defined in the form of tests. Some people brandish this as a great advantage; instead it is the death of deliberate discovery and the birth of Assumption Driven Development(*).

Do you want to avoid this? Use 3 Amigos conversations as your communication tool. After you’re done, you have all the information you need to formalise the findings of the conversations into BDD scenarios.

#Fallacy #3: We use Cucumber, so we are doing BDD (feel free to replace Cucumber with JBehave, SpecFlow, et cetera)

This is a non sequitur.

Some developers get very excited by neat tools and the ones I mention above are quite cool. Using a tool and writing software through BDD scenarios in the absence of conversations is different from doing BDD. Again, in the absence of conversations, inevitably we end up doing Assumption Driven Development(*).

A similar non sequitur fallacy: “We use Jira, we are agile!”.

I am looking forward to the day when I will feel ashamed of what I wrote above, because that day I will have learned something.

(*) Assumption Driven Development (ADD) = A person single-handedly builds a set of unquestionable assumptions about the behaviour of an application in the form of tests. Normally this approach fails late, when, during exploratory testing, somebody questions the PO’s assumptions, now built into the code.