A big procession of Echternach

Three sprints ago we had to deliver a story. Once that story was finished, we would be able to run tests on a native app on an iOS device with the help of Mobile Center and UFT. One of the acceptance criteria was that the test should run on Jenkins without any problems. That is a normal acceptance criterion in an agile environment. For me personally, running the test on our build server should be part of the definition of done. We want to automate as much as possible to keep our feedback loop as short as possible.

The story

With this user story we investigated whether it is possible to use Mobile Center. The question whether Mobile Center is stable enough for iOS would be answered after this story. Of course, before I start to automate, I perform the test I want to automate manually. That succeeded.

Now it was time to do the same with UFT. Because some code already existed for Android, it was very easy to adapt it for iOS. After some coding, it was possible to log in on an iOS device with a UFT test. It looked very promising. The story was nearly finished.

Problems can occur

After several successful runs on the test machine, it was time to put the test in Jenkins. Jenkins starts the test and shows the results. The first test run failed. What happened? After logging in, a spinning wheel appeared and stayed there for ages. Because the timeout of the test was only 30 seconds, the build failed. Afterwards, I retried it manually, and the login went well and fast.

I retried the test with Jenkins. This time the build succeeded. What was that? Something strange was going on. After some investigation, I discovered a pattern. If the application is already running and the login is performed again, the Jenkins job succeeds. If the application is not running and Jenkins has to start it, the build fails because of the hanging login procedure.

We were thinking about fixes. What could the cause be?

  • Is it a problem with the application?
  • Is it a problem with Jenkins? Maybe with the UFT/Mobile Center plugin?
  • Is it an iOS-only problem?
  • Is it a timing problem?

How to solve the problem

Also on Android?

Let’s try the same test on Android. Does it fail on Jenkins as well? I changed the device and launched the job on Jenkins. After a while, it was clear to me: the test went green. So the problem is not present on Android devices. That narrowed the search; from then on I only had to look at iOS.

Was it a timing problem?

I placed some sleeps after clicking the login buttons. But that did not solve the problem. The result was always the same: a failing build job. I abandoned this idea.

Jenkins plugin problem?

Maybe it was a problem with the HPE plugin in Jenkins? How could I find out whether the plugin was the problem? By writing our own Visual Basic script that starts the test. After some time, the script was stable enough to launch the test. Let’s try it on our build server. After a big change, the first press on the build link in Jenkins is always exciting.

Was it going well? Was it failing? After some minutes, the result of the test was visible. The build was again not green. Some retries confirmed this: the build never went green when the application was not running beforehand. Just like before, the problem was still not fixed.

Do other Apps have the same problem?

Maybe it was an application problem. So I tried to automate another native application. The first application I tried was one where I only needed to click around; logging in was not possible. This time, the Jenkins job passed. For me, that showed there was not really a problem with the infrastructure. It must be the application.

After this, I tried yet another application, one in which it is possible to log in. Again, I first tried to log in manually, which succeeded. I created a script that could log in with HP UFT and Mobile Center. The next step was running this test on the build server.

I was now confident that the test job would succeed. Because it is the app that has the problem, no? Again, I was wrong. The very first build failed, with exactly the same result as with the original application.

Really, a solution?

I started changing the timeout of a function called check_label_exist. This function returns True or False depending on whether a label is present on the screen. It has a timeout parameter: if the item is not present after the timeout, the function returns False.

Then I noticed something. With a timeout of 5 seconds, the test on Jenkins failed: the login succeeded after 7 seconds, but because 7 is greater than 5, the test failed. So I increased the timeout to 10 seconds. Now the login succeeded after 15 seconds. Was this a coincidence?

I tried more timeouts and always saw the same result: the login finished just after the timeout expired.

Then it was time for our standup. I explained this very strange behaviour, and the team members came up with some ideas. One of them was to create our own loop. That seemed worth a try.

In the hours after the standup, I created an extra loop around check_label_exist. This time, our build server showed a better colour as the result of the test run. The test was green! I was hopeful. I retried it, and again the build was successful. Each time I tried, the build was green.

After refactoring the code, the build still succeeded. The loop is now inside the check_label_exist function. So now I am very happy. The build was still green after 11 retries. The solution was found and our story was finally finished.
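The real test is a UFT script, but the retry-inside-the-timeout idea can be sketched in Python (the parameter names and the find_label callable are invented for illustration):

```python
import time


def check_label_exist(find_label, timeout=10.0, retries=5, poll=0.5):
    """Poll for a label until it appears.

    `find_label` is any callable that returns True once the label is
    visible on screen. Instead of giving up after one fixed timeout,
    the whole timeout window is retried several times -- the loop that
    finally made the build green.
    """
    for _ in range(retries):
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            if find_label():
                return True
            time.sleep(poll)
    return False
```

With a single timeout, a login that completes just after the deadline always fails; with the loop, the next retry window picks it up.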

Testers can learn from the Hunger Games

Recently I read the first book of The Hunger Games trilogy by Suzanne Collins. The book becomes even more interesting if you think about it a little more deeply. What can we as software testers learn from the Hunger Games?

1. Explore the environment.

When Katniss, the main character of the book, enters the arena, she does not know what it looks like. Every year the games take place in a new world. What does that mean for her? She needs to get to know that world. Where is the water supply? Where can she find food? And so on. How does she find out? By exploring that new world.
In the world of software testers, it is the same. Every new product, or even every new build, contains new features. The tester needs to explore that new build. How does it work? Where are the mistakes? If I do this, what happens next in the system?
Like Katniss, the tester also needs to remember how the system behaved in certain places. Buttons can appear and disappear, depending on the state of the program. Search for states; search for little changes that should not happen.

2. Keep living (testing)

To win the game, Katniss has to follow one piece of advice: just keep living. Do not let the other players kill you.

For us software testers, this is also the case. We have to win. But when do we win? When our customers are satisfied with the quality of the software product we provide.

For us, this means that we need to test the product until the quality is good enough. Good enough to win our customers’ trust in the product. Customers who are not happy will not return in the future.

3. Challenge the rules.

The rules of the Hunger Games are very simple: kill each other, and the last one alive wins the game. Near the end, only two players were alive, two from the same district. Katniss had the poisonous berries and gave a few to Peeta. They wanted to eat them together; if they did, there would be no winner that year. At the very last moment, a voice suddenly announced that this year there would be two winners. They had beaten the rules.

Is this not what we always do? We challenge the system to see if the rules are fulfilled. The rules, in our case, are the requirements. We try to get around the requirements to see if there are mistakes in the requirements themselves. Perhaps the system can be put in such a state that it violates some requirements.

4. Rules can change.

Suddenly there was a voice: the rules of the game had changed. Now two players could win the game if they were from the same district. Katniss was happy about that, because now she did not have to kill Peeta. Her strategy was now completely different: she could look after Peeta, and together they could try to survive. But in the end the rules of the game changed again, which meant yet another strategy change for the players.

This looks a lot like how we work in our agile team. The requirements can change very fast. That means the testing strategy can change, and so can the testing itself. What looked like a bug yesterday is no bug anymore today. Or maybe a new one appears.

We as testers in agile teams need to adapt to changes. Do that, and you will survive.

How I setup our test documentation

What are you going to test for this story?

That is a question I ask myself a lot. It is a question that needs to be answered for every user story that I test.

We have a lot of automated tests, written in Python. They are tests at the API level, without interaction with the user interface. I also have to test manually; not every part of our huge system can be automated. For manual testing, I created some test documentation. That documentation describes what will be tested. I do not write down in detail what I look for, but keep it general. There should be room for doing crazy stuff while testing too.

How should the system react? What should the software do? These are some of the questions answered in the test documentation.


The first version of this test documentation was a document written in Word. It worked, because I was the only one who edited the document. When the company grew, this was no longer suitable: my colleagues and I could not work on the document at the same time. That is why I looked for a new format for my test documentation.

What did I want?

  • It must be simple to edit.
  • No markup language like HTML; preferably Markdown or another simple format.
  • Different users should be able to edit the same document at the same time.
  • It should be placed in version control, so that we can merge it, like other code.

The new documentation

Sphinx is a tool that makes it easy to create intelligent and beautiful documentation, written by Georg Brandl and licensed under the BSD license.

Our testing code was already in Python, and Python has some nice documentation. Why not use the same documentation framework as the Python community? That could be a good alternative. I found the Sphinx project, and it seemed to be what I wanted. The first sentence on its main page was promising: it talks about documentation that is easy to create and beautiful.

The first thing I did was install Sphinx. So I entered:
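The command itself is missing from the text; with pip, installing Sphinx is:

```shell
pip install sphinx
```

(Depending on your Python setup, a distribution package would also work.)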

That was an easy one. For the next step, in an empty directory, I typed the command:
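The command is elided here; following the Sphinx tutorial, it is:

```shell
sphinx-quickstart
```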

That is also explained in the First Steps with Sphinx tutorial.
The sphinx-quickstart program asked me a lot of questions, which I answered one by one. One of the last questions was whether it should generate a Makefile. I answered yes. Now I do not need to remember a large command like:
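The "large command" is not shown in the text; presumably it is the raw sphinx-build invocation, something like (directory names are placeholders):

```shell
sphinx-build -b html sourcedir builddir
```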

The program has now generated some files that can be placed in version control. Building the documentation is very simple:
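With the generated Makefile, building the HTML documentation is just:

```shell
make html
```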

The test documentation itself needs to be in reStructuredText format. It is very easy to learn; it is much like Markdown. More information on reStructuredText can be found on its webpage.
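As an illustration, a test page in reStructuredText could look like this (the story name and sections are made up for the example):

```rst
Story: user login
=================

What to test
------------
- Logging in with a valid and an invalid password
- How the system reacts when the back-end is down

Expected behaviour
------------------
After a valid login, the dashboard should appear.
```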

Because the complete test documentation is now in text files, it is possible to store it in version control. We use Git in our company, so that infrastructure can be re-used. I also make use of our Jenkins build server: it generates the documentation after each check-in, and the result is copied to a web server. Now everybody can see what is tested at any time.

It is a system that is easy to use, and it is transparent for everybody. Isn’t that what I wanted?

Bugs should be stories too!

The application we are building is not so fast anymore. So I entered an issue in our bug tracking database. After a while the bug was prioritised by our product owner, and later it was placed on the scrum board. Good practice, you say. Nobody likes known bugs; they should be solved as fast as possible.
The fix for this bug was not that difficult, at least on the programming side: make some calls asynchronous and the whole program behaves faster. The impact on testing also seemed small, because only a few interface functions were changed.
While fixing the tests, we discovered that everything was impacted. Our system behaved completely differently than before. It was not wrong, but our test system needed updating.
Why was fixing the test code taking so long? It is not that difficult, is it?
No, it is not difficult to fix the tests, but it is a lot of work. So the planning for this task was not correct.
What do we do when a bug gets on our board? We never refine bugs in a refinement meeting like the other stories. We only put them on our board. If it looks like a lot of fixing time, we put 2 story points on it. If there is nearly no fixing time, it gets 1 story point.
I am not a fan of this approach. If bugs are not quick fixes, I want to see them in a refinement meeting. They can have a lot of impact. Maybe the impact is only in the implementation; maybe the effort is in our regression tests.
In my point of view, a bug is just like a user story. So we should treat them as stories too.

Demonstrating incomplete work sucks

The current sprint has ended, and our team held the sprint review. At the sprint review, we show the stakeholders the work that was completed and the planned work that was not completed.

The last few weeks, our reviews have been a little hectic, and this time things were no different. Yesterday I tested some features, and they seemed to work, with some minor bugs. If I were a stakeholder, I could live with those minor issues. So for me it was simple: we could close some stories, enter the extra bugs in our database, and move on.

That would have been a better scenario than what we encountered in our review meeting. The demonstration was not done on the version I used while testing, but on a newer one, “because some more bugs are fixed”. That is fine by me; I really like that more bugs are fixed.

Unfortunately, the new changes broke some other things. We have not automated everything in our build pipeline. For example, we have no automated user interface tests; if something breaks the user interface, it is not detected until it is seen during manual testing. There are other parts of our system that need manual testing too. The main reason they are not yet automated is a lack of time.

You could now ask me: “Why did you allow the new version to be taken for demo purposes?” That is because I am one tester in one team, but we have 3 teams working on the same product. All three teams hold their review at the same moment on the same machine. So I have to trust the other teams that they tested the changes they made, and that they did not break anything.

But they did break some things. At this moment I am a little afraid: afraid that changes one team makes break features of another team. We have 3 teams, and each team has one tester who helps that team. But as testers, we help the team to test stories. We do not have sufficient time to check everything at, let’s call it, epic level. At this moment, I am starting to think that the testers do not belong in the teams, and that they should test more at epic level.

We have a scaling problem. How do we scale scrum to more teams? All interesting questions we should solve one by one. I have already read about scrum of scrums, but that does not say how to test the integration points. And more: how can the testers help?

How do other teams tackle this kind of problem?

Dear team, we have a problem.

A few days ago I noticed that nobody on our team is watching the outcome of the automated integration tests. Our tests run on an integration server. Since the beginning of our sprint, one test has been failing. The result is that the build has not been green for a single day of our sprint.

On the first day of our sprint, one of our tests broke. The test had been running for several months without failing. One of the stories had impact on code that could break this test. In normal circumstances, the fix is there within a few hours. That is fine; green tests give confidence.
This time, things are different. The test will keep failing until the story that impacts it is implemented. But today, the day before the review meeting, the test is still failing. And indeed, the story is not finished yet.

This is a problem for me. More teams are working on the same repository, and the other teams also rely on the integration tests. Their safety net has been gone for at least 2 weeks. Of course, they can look at the status of the test job and see that only that one test is failing. But because it has been failing for such a long time, nobody looks at it anymore.

How can we solve this problem?

  • Disable the test?
  • Ignore the test result until it is fixed?
  • Mark the test as unstable until it is fixed?

Which option is chosen is not that important. But ignoring the test results, as we do today, is a risk: if something else fails, nobody will notice it. And that is just not what we want.

So at the next retrospective, this problem will be on the table.

Forgotten tests

Time is passing by, and so are our software versions. The developers in my team are creating a lot of code. That is good; we want a good, complete product that works, don’t we? The team is an agile team, and the lonely tester in that team is me. Agile, you say? Does that mean that the team tests every story?

I cannot answer that question with a simple yes or no. It is a little complicated. The team is trying to test all stories, but there are still some gaps. Let me explain what I mean.

At the beginning of the project we were a small team of 5 developers and 1 tester. We started from nothing: we set up a continuous integration server and created a test framework. In that framework we can test our running back-end. The back-end communicates with hardware, for which our team created a simulator. There is also a user interface that interacts with the back-end via an API. The tests in this framework use that API. We call these tests our integration tests; the name does not matter.

A few months later, the team expanded to more than 12 members. The conclusion seemed simple at first: create 2 teams out of one big team. I already saw a problem with that: there is only one tester, so the other team would have none. We did not see it as a huge problem, because the whole team is responsible for quality, not just one person.

That is the theory, at least. In practice it is more difficult. One of the problems we are now dealing with is that some stories have impact on the other team. Our team is currently blocked on the other team: not blocked on creating new code, but blocked on testing. Why are we blocked? Because we focus on stories and not on testing the complete system. Testing everything has become a huge amount of work, so we have to make choices and test only subsets. I think this problem would also occur if we were still one team; it was only a matter of time before it appeared. How should we deal with system testing, the testing of the complete system?

In waterfall, that part is easy: we create releases and test against a release. Now we have a release after every story, and we just cannot test everything for every story. This means those “release” tests get forgotten in the pile of work we have. That is our problem at the moment: we still have no real releases.

How do you deal with this problem?


Brainstorming is a very common technique in agile teams. I never felt very comfortable with the process, and I think I now know why.

Brainstorming was an idea of Alex Osborn. He found that his employees did have ideas, but that they were not creative enough. And when they did have fantastic, creative ideas, they did not share them with their colleagues. Why? Because they were afraid of criticism. That is why he invented the brainstorming process. He was looking for rules that would give people the freedom of mind to reveal new ideas. The four rules are:

  • No criticism of ideas
  • Wilder ideas are better, try to exaggerate
  • Go for larger quantities of ideas, not for quality
  • Build on each other’s ideas

His idea was that if a group of people follows these rules, they will create a lot of great ideas, more and better than if they work separately. Over the years, this process became more or less common practice, and in agile, brainstorming is used a lot. Rooms filled with notes on whiteboards or paper: who has not seen that yet?

But is that true? Is brainstorming better? In 1963, Marvin Dunnette published a scientific paper that showed the opposite.

He let 24 groups of 4 people brainstorm on several problems. The groups generated a lot of ideas. Afterwards, the participants had to brainstorm on similar problems on their own. To simulate a group, the ideas of the original group members were pooled, so he could compare the results. The results were amazing: the individuals produced more ideas than they did in a group. And they did not only generate more ideas; the quality was similar. Of the 24 groups, 23 produced a larger number of different ideas when working alone.

Later, this experiment was confirmed by other experiments. So it is very curious that brainstorming is still so popular. Brainstorming does not improve creativity, and it does not generate the fantastic ideas. It does have its good parts, though: it improves the social cohesion of the group. So do not throw brainstorming away; it can still be useful. Dunnette’s experiment also showed that if a group brainstorm is done first, followed by an individual one, the outcome is much better.

Brainstorming can also be creative if it is done in another way, for example online, via a chat group or via mail. So next time you want some great ideas, try it in a Slack chat room. But it is up to your team whether you use such chat rooms or not.

How to write a fantastic bug report

Today I found a question in my mailbox.

Bart, I am solving a bug report. According to developer X (who created the bug report), you said that ZZZ happens. How can I reproduce this?

A normal question, no? In this particular case, it is not.

Last week I got some feedback from one of our stakeholders. I was testing a feature that was failing; we call it feature simulation in our application. We also have criterion simulation and criterion trigger. The stakeholder told me that criterion trigger did not work. I still needed to investigate that; I believe in facts, not in what some stakeholder tells me happens.
Later that day, I told our scrum master what I was doing and what I had discovered so far: that the simulation was not working anymore, and that one of our stakeholders had an issue with triggering.

Then it was time for me to leave. The weekend passed. On Monday I looked at our electronic scrum board and saw that someone had added a new story! Our sprint ends in two days, and we have not finished all our planned stories yet. How is this possible? What has happened?

The new story is a bug report, and a strange one: it does not contain any description, only a title.

What do I write in a bug report?

  • A good title:
    • The title summarizes the problem as well as possible. Because it is often shown in bug lists, it is important to be as specific as possible, but also not too long.
  • A description:
    • The description describes the problem as it happens. I always include steps to reproduce the problem; when following those steps, the developers should be able to reproduce it. This is why I never create a report based on rumours. I also attach a picture or a short screencast to the bug report. Our bug reports have some metadata: there are fields for severity and the status of the bug. What is lacking is a field for the software version, so the software version also goes in the description of the bug report.
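Putting these points together, a report could look like this (the feature, version number, and details are invented for the example):

```
Title:    Criterion trigger does not fire after login (v2.4.1)
Severity: Major
Status:   New

Steps to reproduce (on version 2.4.1):
1. Start the application and log in.
2. Create a new criterion trigger.
3. Wait for the trigger condition to occur.

Expected: the trigger fires and an event is shown.
Actual:   nothing happens.

Attachment: screencast of steps 1-3.
```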

The product owner decides afterwards what the priority of this new bug will be. To know the impact, he talks to several people about the new bug.

Remember that a bug report exists to help the developers fix a problem. Developers can fix problems faster when the problem is better described.

Share a testing blog post

Sharing a blog post about testing on day 4. It took me a little time, but I found a blog post that I can share with my team. Not just any post, but one about a problem I think we are suffering from.

One of our problems lately is that suddenly all stories are done, mostly one or two days before the sprint ends. This is because nearly all stories are picked up by our developers. I think that at this moment it is because we have a very big team. In a few weeks we are going to split it into 3 teams, so we will have new challenges then. But at this moment, we are slowing down because most of the stories are implemented just in time, and then the testing still needs to be done.

A few weeks ago, I found a blog post about The Squeezed Testing problem. Is that not the problem we have in our team? I do think so.

I posted the link to The Squeezed Testing problem in our team’s Slack channel. It may take a few days or weeks to get replies: because of the holiday season, there are not that many team members in the office in July.