Did we get it all wrong?

On Success Measures Vs Bug Counts and a brand new approach to building Successful products

Back from Potsdam (Germany), where I attended “Agile Testing Days”, I have now had 48 hours to reflect on what I saw and heard.

Gojko Adzic presented a concept that I believe could represent a paradigm shift not only in testing but in the whole software delivery approach.

[Image: Agile testing quadrants]

He says that we all got it wrong when applying one of the quadrants of Agile testing: in quadrant Q3 we have been focusing on critiquing the product based on our internal understanding of how to build a successful product, while paying little or no attention to the final customers’ opinion on whether the product is actually useful and successful.

To visualize this, Gojko came up with a model for software quality that mirrors Maslow’s hierarchy of needs, where the highest level in Maslow’s model (Self-Actualization) corresponds to Successful in Gojko’s Software Quality Model. In this model the lower levels are a prerequisite for the upper ones to be relevant: if a product is not Deployable and Functionally OK, we should not care whether it is performant and secure, or whether it is useful, because if we cannot deploy it, it will never get the chance to perform and be useful; you get the idea.

Looking at the pyramid we immediately realize that, as a software delivery team, we can only assure the three bottom levels of the pyramid; to assure our product is Useful and Successful we need feedback from the final customer. We must involve our final customers in the feedback loop on our products: only they will really know if our product is useful, and only they can make it successful or not. Gojko goes one step further and says that when measuring the levels we can apply a different degree of focus: maybe the bottom two levels should be delivered to be “good enough”, while moving up the pyramid we should aim for “the more the better” as we get closer to Successful.

The most impressive part is yet to come, and it is basically Gojko’s approach to measuring the Successful bit of the pyramid. He introduced a strategic planning technique based on four questions that he named Impact Mapping. Gojko says: “An impact map is a visualisation of scope and underlying assumptions, created collaboratively by senior technical and business people“. In my opinion, the most revolutionary side of Gojko’s thinking is his focus on behaviour change. The third question asks “How should our actors’ behaviour change?”. By focusing on this aspect we are able to visualize the impacts that we want to see as a result of our product or idea.

Using Impact Mapping we are able to visualize and test our assumptions on our path to success. By allowing assumptions to be tested, Impact Mapping helps find the shortest and cheapest path to product success; not bad at all…

Impact Mapping is a brand new approach, and Gojko says he doesn’t know yet if it will apply to every area of software delivery. It is up to the community now to test it, define its applicability boundaries (if any) and improve it. You can count on me, Gojko, I am up for it!

BTW, before you ask, yes I live in the real world.


Test Automation, help or hindrance?

On Slow Vs Fast, Co$tly Vs Cheap and stating the not so obvious

Test automation is a must for agile teams that want to continuously deliver business value. But does test automation actually give value to agile teams? It does, provided it satisfies (at least) two important principles:

1) Provide fast feedback to developers (SPEED)

2) Be less expensive than manual regression testing over the application lifetime (CO$T)

SPEED is extremely important. Time is money. People don’t like waiting for something to happen while losing money, and developers are no exception. Knowing that we will soon find out whether our code change worked helps us refactor that old piece of code that was unmanageable. If you will only know tomorrow that you have broken something, it might be very difficult to fix, because maybe ten other people have pushed their changes after you, and who knows who broke what? Imagine it took you a whole day to compile your code: would you still make that small optimization change? Be honest…

Fast tests give teams great benefit because they tell us straight away “well done, you’re on the right path!” or “hang on, you made a mistake, fix it before it’s too late!”.

There are no two ways about it: slow tests are BAD. Developers hate running them because it is a pain; you either wait for them to complete and tell you how you did, or you ignore them and go ahead with other changes. Both approaches are bad. While you wait for feedback you are losing money by not being able to code (time = money); if you make other changes you risk burying the “thing” you just broke under more broken code, and guess what? You lose money!

CO$T is quite a big issue, isn’t it?

How do you like tests that are brittle and break as soon as something changes in the application user interface? VERY CO$TLY.

How about tests that take ages to run because they are highly coupled and need the full End to End (E2E from now on) test environment to complete? SLOW & CO$TLY!

Slow because they rely on so many systems; as a consequence they keep on breaking, but 80% of the time it’s a false positive, because some System_XYZ that the test uses somewhere to provide some data was down, or Database_ABC was accessed at the same time by another user who messed up the test data. Damn! Rerun the suite again; TOMORROW you will know if it passes, maybe, hopefully, unless something else is broken 😦 SLOW, CO$TLY and, worst of all, a hindrance to developers, because they not only give no value but are counterproductive, wasting their time.

An automation strategy based on E2E tests run through the user interface, following the full application workflow, has FAILURE written all over it. Why? Because E2E UI-driven automated tests are SLOW and CO$TLY. Developers hate them, and when they catch the odd bug they might not even investigate and follow up correctly, because “…ah sure, it must have been something in the environment, like in the last 35 failures, damn test harness!”. They slowly become noise in the background; after a while nobody cares about them, they are abandoned, and the automation effort is deemed a failure.

But, hang on, we can avoid this.

Don’t write slow, highly coupled, UI driven, brittle, costly E2E automated tests, do yourself a favour, just don’t.

Yes, but we need them: how else do we show that we verify the acceptance criteria?

Each individual system in a complex architecture can be built to adhere to acceptance criteria, individually. Let’s focus on each system and target our efforts there first. Let’s also automate integration points, but let’s not forget what we are testing when doing integration: we are testing the interfaces only, not the functionality of the other system we integrate with!


Let me introduce you to Augusto’s 4 golden rules of Fast and Cheap automation testing. (yeah, that’d be me)

First – identify the application under test and focus: write a lot of tests that run against an individual system; they are much faster than coupled tests, and also much cheaper because they won’t bother you with false positives. Remember, each system on its own can satisfy business acceptance criteria; it might take some time to formulate acceptance tests that describe business value, but it is indeed possible. The business logic to be tested resides in the individual systems; test it where it lives, not through another system.

Second – go under the hood: unless you are specifically testing the user interface, write tests that exercise a system through a service layer rather than through the user interface itself. They are much faster and far more maintainable (not affected by UI changes). Go under the hood! Focus on the logic to be tested, not on the steps required to get to a certain state. Use mocks and stubs, and invest in building such support tools tailored to your needs; they pay off, oh yes they do.
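To give an idea of what “going under the hood” can look like, here is a minimal sketch with JUnit 4 and Mockito; CreditScoringService and CustomerGateway are made-up names used purely for illustration:

    import static org.junit.Assert.assertTrue;
    import static org.mockito.Mockito.mock;
    import static org.mockito.Mockito.when;

    import org.junit.Test;

    public class CreditScoringServiceTest {

        // Hypothetical collaborator that would normally call another system
        public interface CustomerGateway {
            int unpaidJudgments(String customerId);
        }

        // Hypothetical service-layer class holding the business logic under test
        public static class CreditScoringService {
            private final CustomerGateway gateway;
            public CreditScoringService(CustomerGateway gateway) { this.gateway = gateway; }
            public boolean isHighRisk(String customerId) {
                return gateway.unpaidJudgments(customerId) > 0;
            }
        }

        @Test
        public void customerWithUnpaidJudgmentsIsHighRisk() {
            // Stub the external dependency instead of standing up the real system
            CustomerGateway gateway = mock(CustomerGateway.class);
            when(gateway.unpaidJudgments("ACME-123")).thenReturn(2);

            CreditScoringService service = new CreditScoringService(gateway);

            // Exercise the business logic directly: no browser, no UI steps
            assertTrue(service.isHighRisk("ACME-123"));
        }
    }

A test like this runs in milliseconds and does not break when a button moves on a page.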

Third – when integrating, focus on interfaces: write integration tests between pairs of systems, and focus on testing the interfaces between the systems only; do not duplicate the testing you have already done on the individual systems. There is nothing worse than trying to test the business logic in SystemB by going through SystemA, don’t do it! Test SystemB’s business logic in SystemB with fast tests, as per golden rule #1. Integrate SystemA and SystemB to verify that the communication between the two systems is not broken; test only that the communication works, and do not test the functionality of either system at this stage (you have already done that under rule #1).
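As a rough illustration of “interfaces only”, an interface-level check could look like the sketch below (JUnit 4 plus the standard java.net.http client, Java 11+; the URL and the field name are placeholders, not a real endpoint):

    import static org.junit.Assert.assertEquals;
    import static org.junit.Assert.assertTrue;

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    import org.junit.Test;

    public class SystemBInterfaceTest {

        @Test
        public void companyLookupHonoursTheAgreedContract() throws Exception {
            HttpClient client = HttpClient.newHttpClient();
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("http://systemb.test.local/api/companies/ACME-123"))
                    .header("Accept", "application/json")
                    .GET()
                    .build();

            HttpResponse<String> response =
                    client.send(request, HttpResponse.BodyHandlers.ofString());

            // Only the interface is checked: the status code and the agreed field
            // names. SystemB's business rules are covered by SystemB's own tests.
            assertEquals(200, response.statusCode());
            assertTrue(response.body().contains("\"companyId\""));
        }
    }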

Fourth – use your brain and do not duplicate slow processes: if you can’t help it and you want to write E2E tests, drastically limit their breadth to cover the happiest paths you can think of, and make sure you run them in a dedicated environment to avoid test data corruption and involuntary resource contention. Automating E2E testing in complex systems is a bad idea; use exploratory testing on new features instead.

Mike Cohn, a few years back, came up with a test automation pyramid that describes how the automation effort should be distributed. Unit tests represent the majority of tests, immediately above them are tests run through a service layer, and at the top we have only a few E2E tests run through the user interface. I love Mike Cohn’s pyramid, thanks Mike!

Automation Test Pyramid

To recap, I have illustrated my rambling in a “Dummy Test Automation Strategy For A Simple Multi System Architecture”. I am interested in your feedback, good or bad; please go for it and tell me what you think!

SCRUM, from failure to success

This is a description of what made our previously failing SCRUM team a success within our organization. The lessons learned through failure were as important as the final success.

Before the “Revolution”: Our organization had been going through a transition to SCRUM for around one year. In the process of transitioning we had delivered a couple of software projects. Such projects had been seen as a failure by our Product Owner (PO from now on) for not delivering business value, and by the development team for failing to deliver a quality product.

The goal: The software development department needed a big success to justify continuing the transition to SCRUM, and we were all determined to deliver a great product to our customers to demonstrate how much we had learnt from our own previous failures. The PO was extremely sceptical about continuing with SCRUM and didn’t refrain from showing it.

The Plan: One SCRUM team was created to work on “Project Revolution”. Our goal was to deliver, in a short period of time, quality software that would exceed the customer’s expectations and drive them back to embracing agile development.

The Focus: For the very first time we religiously adhered to SCRUM practices, focused our efforts on software quality practices, and built a solid relationship between the development team and the Product Owner.

How we did it: We engaged with the PO from day zero and tried to infect him with our enthusiasm for software development and quality. Before the start of the project we gave a demonstration of our quality plans and of our new software development approach: Acceptance Test Driven Development. The PO showed interest in our approach beyond our expectations; he was blown away by the power of the tests’ ubiquitous language and clearly understood its potential value. He was also reassured by the demonstration of the software quality infrastructure we had built to harness the development of our application, and took a keen interest in how acceptance tests were run and how results were reported. The development team discovered that something that had initially seemed a purely technical matter was very valuable to our PO.

The SCRUM framework was followed rigorously by everybody, including the PO, who in previous projects had been somewhat cut off from the core of the SCRUM team. This time we really started to discover SCRUM’s benefits of fast feedback and continuous improvement. The ATDD approach paid off by letting us front-load the discussions over incomplete or ambiguous requirements at acceptance test creation (the very start of development). We quickly discovered that front-loading such discussions would, on one hand, slow down development, but on the other hand would allow us to develop only once, and only what was really required, rather than getting to the final product by continuously fixing defects. Having a large bed of acceptance and unit tests gave the development team the confidence to refactor freely, and we could see the value of the fast feedback from our builds.

Transparency: A full transparency policy was adopted, we were all part of one team, there were no secrets among us.

Collaboration brings success: Slowly the PO started gaining confidence in our work, and at the demos he began saying things like “This is a fantastic job guys!” or even “You’ve done in a 2-week sprint what in the past we were used to getting in 3 months, and at a level of quality that is not even comparable”. Once our PO started trusting us, we were able to go one step further and propose our alternative solutions to him. While in the past such solutions were categorically refused and a command-and-control approach was used by the PO, we were now at a stage where full collaboration was the norm and feedback was flowing both ways.

Fun: The product was delivered on time with an excellent level of quality, its business value exceeded the PO’s original expectations, and best of all we had great fun developing it.

Project Revolution was an amazing experience.

 

Industry: Credit Information

Project Scope: Credit Information Management System

Technology: Java (Spring), Tomcat, HTML, jQuery, SOAP, Oracle ESB

Tools: JUnit, Cucumber, Selenium, Crucible, Sonar, Jenkins, Maven

How to avoid the very dangerous ALWAYS-GREEN test

When a test passes the first time it’s ever run, a developer’s reaction is “Great! Let’s move on!”. Well, this can be a dangerous practice, as I discovered one cold rainy day.

It was a cold rainy day (kind of common in Dublin), and I was happy enough with my test results being all shiny green when I decided to do some exploratory testing. To my surprise I discovered that an element on a web page that had always been there before was gone, departed, vanished!

My first reaction was to say: where the hell is it? I ran some investigation and saw the cause of it; no worries, it got knocked out by the last change, easy fix. The worst feeling had yet to come: when I went to write a test for that scenario, I saw that there was already an existing one checking for exactly that element’s existence… WHAT? The damn test had passed and was staring at me in its shiny green suit!

When we write an automated test, be it a unit test, an acceptance test or any other type of test, it is extremely important that we make it FAIL at least ONCE.

In fact, until you make a test FAIL, you will never know if the damn bastard passes because the code under test is correct or because the implementation of the test itself is wrong.

A test that never fails is worse than having no test at all, because it gives the false confidence that some code is tested and clean, while it might be completely wrong now, or on any other cold rainy day in Dublin after a refactor or a new push, and we will never know, because IT WILL NEVER FAIL.

If you don’t follow what I’m talking about, have a look at this example:

Take a Web app and say I want to verify that one field is visible in the UI at a certain stage.

What I do is to build automation that performs a series of actions and at the end I will verify whether I can see that field or not.

To do this I will create a method isFieldVisible(field) that returns true or false depending on whether the field is visible or not, so that I can write assertTrue(isFieldVisible(myField));

When this test passes I am only halfway there, because I still need to demonstrate that when the field is not visible, isFieldVisible() does return false; otherwise my test might never fail.

To do this I write a temporary extra step in the automation that hides the field, and then run the same assertion again:

assertTrue(isFieldVisible(myField));

At this point I expect the assertion to fail; if it doesn’t, it means that I have just written a very dangerous ALWAYS-GREEN test.
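Put together, the whole idea looks roughly like the sketch below, in Java with Selenium WebDriver and JUnit 4; the URL and the element id are made up for the example:

    import static org.junit.Assert.assertTrue;

    import org.junit.Test;
    import org.openqa.selenium.By;
    import org.openqa.selenium.JavascriptExecutor;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.chrome.ChromeDriver;

    public class FieldVisibilityTest {

        // Returns true only if the field is present on the page and displayed
        private boolean isFieldVisible(WebDriver driver, By field) {
            return !driver.findElements(field).isEmpty()
                    && driver.findElement(field).isDisplayed();
        }

        @Test
        public void companySearchFieldIsVisible() {
            WebDriver driver = new ChromeDriver();
            try {
                driver.get("http://myapp.test.local/search");
                By myField = By.id("companySearch");

                assertTrue(isFieldVisible(driver, myField));

                // Temporary step, to be deleted after the test has failed once:
                // hide the field via JavaScript and repeat the assertion.
                // If the next assertion does NOT fail, this is an ALWAYS-GREEN test.
                ((JavascriptExecutor) driver).executeScript(
                        "document.getElementById('companySearch').style.display = 'none';");
                assertTrue(isFieldVisible(driver, myField));
            } finally {
                driver.quit();
            }
        }
    }

Once the second assertion has failed as expected, the temporary lines are removed and the original test is kept.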

What if I did write a very dangerous ALWAYS-GREEN test? What do I do now?

I must change the code (the test code, not the code of the app under test) until the test FAILS. When it fails for the first time and the original test is still green, I can be sure that the test can fail, and will fail in the future if a refactor or any other change introduces a regression, rainy day or not.

At this point you might argue that, rather than simply changing the test to make it fail and then reverting it to the original, we should write the negative test and execute it as part of the automation.

It is an interesting point and the answer depends on the specific situation. In some cases a negative test can be as important as the original test and it is necessary for covering a different path in the code, but this is not always the case and we will have to make an informed call every time.

Example 1 – When writing a negative test makes sense:

I want to verify that when I hit the “Customer Feedback” link, my “Company Search box” can still be seen by the user.

I write the following test:
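Something along these lines, assuming the same Selenium/JUnit setup as in the sketch above; the link text and the element id are assumptions made for the example:

    @Test
    public void companySearchBoxStillVisibleAfterOpeningCustomerFeedback() {
        driver.get("http://myapp.test.local/");
        driver.findElement(By.linkText("Customer Feedback")).click();

        assertTrue(driver.findElement(By.id("companySearchBox")).isDisplayed());
    }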

To make it fail I will add an extra temporary step:
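For example, clicking a “Hide Search Box” link just before the assertion (again, a hypothetical locator):

    @Test
    public void companySearchBoxStillVisibleAfterOpeningCustomerFeedback() {
        driver.get("http://myapp.test.local/");
        driver.findElement(By.linkText("Customer Feedback")).click();
        // Temporary step, added only to prove that the test can fail:
        driver.findElement(By.linkText("Hide Search Box")).click();

        assertTrue(driver.findElement(By.id("companySearchBox")).isDisplayed());
    }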

If the original test was green then this test MUST FAIL (otherwise we have written the very dangerous ALWAYS-GREEN test)

At this point I notice that this is a valid scenario in its own right and I can write a test for it (if I don’t have one already).

The test will be
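Something like this, with the same hypothetical locators as before:

    @Test
    public void hideSearchBoxLinkHidesTheCompanySearchBox() {
        driver.get("http://myapp.test.local/");
        driver.findElement(By.linkText("Hide Search Box")).click();

        assertFalse(driver.findElement(By.id("companySearchBox")).isDisplayed());
    }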

I have positive and negative scenarios covered. The negative scenario verifies that the “Hide Search Box link” functionality works as expected.

Example 2 – When writing a negative test does not make sense:

I want to verify that after performing a search for a company and getting search results back, the value of the latest search is persisted in the search box.

I write the following test:
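A sketch of such a test, again with hypothetical locators and assuming the same setup as the earlier sketches:

    @Test
    public void latestSearchTermIsPersistedInTheSearchBox() {
        // step 1: open the search page
        driver.get("http://myapp.test.local/search");
        // step 2: type the search term
        driver.findElement(By.id("companySearchBox")).sendKeys("Jackie Treehorn corp.");
        // step 3: run the search and get results back
        driver.findElement(By.id("searchButton")).click();
        // step 4: the term should still be in the search box
        assertEquals("Jackie Treehorn corp.",
                driver.findElement(By.id("companySearchBox")).getAttribute("value"));
    }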

 

To make it fail I remove the second and third steps.
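That leaves a two-step version, sketched here with the same hypothetical locators:

    @Test
    public void latestSearchTermIsPersistedInTheSearchBox() {
        // step 1: open the search page
        driver.get("http://myapp.test.local/search");
        // steps 2 and 3 (typing the term and running the search) removed
        assertEquals("Jackie Treehorn corp.",
                driver.findElement(By.id("companySearchBox")).getAttribute("value"));
    }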

 

If the original test was green then this test MUST FAIL (otherwise we have written the very dangerous ALWAYS-GREEN test)

At this point I look at the tests and realise that there is no point in adding a negative test like the one above (the one with two steps only), because if we don’t actually type “Jackie Treehorn corp.” in the field, it is very unlikely that Jackie Treehorn, or Jeff Lebowski, or any other cool character will magically appear in the search box, so I decide that a negative test is not required.

To recap:

1. When you write a test you MUST be able to make it fail, to demonstrate that its implementation is valid, in particular if it is a cold rainy day.

2. If, while making the test fail, you realise that it represents a new valid scenario to be tested, then write the scenario and a separate test with the negative assertion; it might come in useful one hot sunny day.