Demonstrating incomplete work sucks

The current sprint has ended. Our team holds the sprint review. At the sprint review, we present to the stakeholders the work that was completed and the planned work that was not completed.

Over the last weeks, our reviews have been a little hectic. This time things were no different. Yesterday I tested some features and they seemed to work, apart from some minor bugs. If I were a stakeholder, I could live with those minor issues. So for me it was simple: we could close some stories, enter the extra bugs in our database, and move on.

For me, that would have been a better scenario than what we encountered in our review meeting. The demonstration was not done on the version I had tested, but on a newer one, “because some more bugs are fixed”. That is fine by me. I really like that more bugs are fixed.

Unfortunately, the new changes broke some other things. We have not automated everything in our build pipeline. For example, we do not have automated user interface tests. If something breaks the user interface, it is not detected until someone spots it during manual testing. There are other parts of our system that need manual testing too. The main reason they are not yet automated is a lack of time.
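To give an idea of what such a check could look like in the pipeline, here is a minimal sketch of a UI smoke test, assuming Selenium WebDriver with JUnit 5. The URL, page title, and element IDs are made-up placeholders, not our actual application.

```java
// Minimal UI smoke test sketch (Selenium WebDriver + JUnit 5).
// The URL, title, and element IDs below are hypothetical placeholders.
import org.junit.jupiter.api.AfterEach;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertTrue;

class LoginPageSmokeTest {

    private WebDriver driver;

    @BeforeEach
    void startBrowser() {
        driver = new ChromeDriver();
    }

    @Test
    void loginPageStillRenders() {
        // Hypothetical URL of the application under test.
        driver.get("https://test-environment.example.com/login");

        // Fail fast in the pipeline if the page or its key elements are broken,
        // instead of waiting for manual testing to notice it.
        assertEquals("My Product - Login", driver.getTitle());
        assertTrue(driver.findElement(By.id("username")).isDisplayed());
        assertTrue(driver.findElement(By.id("password")).isDisplayed());
    }

    @AfterEach
    void stopBrowser() {
        driver.quit();
    }
}
```

Even a handful of smoke tests like this would have flagged the broken screens before the demo, without replacing the manual testing we still need elsewhere.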

You could now ask me: “Why did you allow the new version to be used for the demo?” That is because I am only one tester in one team, while we have three teams working on the same product. All three teams hold their review at the same moment on the same machine. So I have to trust that the other teams tested the changes they made, and that they did not break anything.

But they did break some things. Right now I am a little afraid: afraid that changes made by one team break features of another team. We have three teams, and each team has one tester who helps that team. As testers, we help the team to test stories, but we do not have enough time to check everything at, let’s call it, epic level. At the moment I am starting to think that the testers do not belong inside the teams, and that they should test more at epic level instead.

We have a scaling problem. How do we scale Scrum to more teams? These are all interesting questions we should solve one by one. I have already read about Scrum of Scrums, but it does not say how to test the integration points. And more importantly, how can the testers help?

How do other teams tackle this kind of problem?

Dear team, we have a problem.

A few days ago I noticed that nobody on our team is watching the outcome of the automated integration tests. Our tests run on an integration server. Since the beginning of our sprint, one test has been failing. As a result, the build has not been green for a single day of our sprint.

On the first day of our sprint, one of our tests broke. The test had been running for several months without failing. One of the stories impacted code that could break this test. Under normal circumstances, the fix is there within a few hours. That is fine; green tests give confidence.
This time, things are different. The test will keep failing until the story that impacts it is implemented. But today, the day before the review meeting, the test is still failing. And indeed, the story is not finished yet.

This is a problem for me. Several teams are working on the same repository, and the other teams also rely on the integration tests. Their safety net has been gone for at least two weeks. Of course they can look at the status of the test job and see that only that one test is failing, but because it has been failing for so long, nobody looks at it anymore.

How can we solve this problem?

  • Disable the test?
  • Ignore the test result until it is fixed?
  • Mark the test as unstable until it is fixed?

Which option we choose is not that important; a sketch of one of them follows below. But ignoring the test results the way we do today is a risk: if something else fails, nobody will notice it. And that is just not what we want.
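To make it concrete, this is roughly what disabling the test with an explicit reason could look like. It is only a sketch, assuming JUnit 5; the test class, test method, and story number are made-up placeholders, not our real code.

```java
// Sketch of the "disable the test" option, assuming JUnit 5.
// The class name, test name, and story reference are hypothetical placeholders.
import org.junit.jupiter.api.Disabled;
import org.junit.jupiter.api.Test;

import static org.junit.jupiter.api.Assertions.assertTrue;

class OrderExportIntegrationTest {

    // Disabled with an explicit reason, so the build can go green again
    // and the reason stays visible to every team in the test report.
    @Disabled("Fails until story XYZ-123 is implemented; re-enable when it is done")
    @Test
    void exportedOrdersContainAllLineItems() {
        assertTrue(exportContainsAllLineItems());
    }

    private boolean exportContainsAllLineItems() {
        // Placeholder for the real check against the integration environment.
        return true;
    }
}
```

The point of the explicit reason is that the temporary exception stays visible: anyone looking at the report sees why the test is off and when it should come back, instead of the whole board silently staying red.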

So at the next retrospective, this problem will be on the table.