Thursday, October 10, 2019

Allowing developers to fully own quality

As an engineering manager of two sprint teams that had dedicated quality engineers who did all testing and certification (reporting to a different quality engineering manager), I recently tried something different - I took the quality engineers out of the two teams and placed quality entirely in the hands of the software engineers. 7 months later, the teams are running great! Levels of quality have been maintained, the output of the teams has stayed the same, the amount of automated test code written by the teams has increased, and motivation and engagement on the teams have increased. I'd like to share our story of how and why this worked.

Some of you may read that first paragraph and think "Dedicated quality engineers on a team? What do you mean?" or "Quality wasn't fully in the hands of developers before?".  I would have thought the same thing before I joined a company that ran this way. If this is what you're thinking - this post is not for you. You won't be surprised by anything posted here. This blog post is for people on sprint teams where there are dedicated quality engineers who test and certify all work.
 

Prior process


So let me explain how things had previously been. We do product development in sprint teams. Within the teams, there are typically 4 to 7 software engineers and 2 to 3 quality engineers. Within a sprint (typically two weeks), the software engineers write code for user stories, deploy the code to a testing region, and then hand the user story off to a quality engineer to test and certify. The quality engineer will test it, try to break it and find issues (typically by manual testing), perform regression testing (mostly manual due to a lack of thorough automation), and file bugs for the developer to fix, going back and forth until they have tested everything and any bugs are fixed. Then they certify the user story. Automated tests are sometimes written by quality engineers in the sprint, sometimes written in later sprints, and sometimes not written at all. Software engineers will typically write their own suite of API-level tests for user stories that have API endpoints, but not always. The work is mostly done on applications that have lived for years and do not have very reliable automated tests.

New process


The new process that my two sprint teams are following is similar, but all of the work is done by software engineers. These two teams now have just 4 or 5 software engineers, and no quality engineers. The software engineers write code for their user stories including all necessary automated tests - unit, API, UI, whatever makes sense. They will then deploy the code to a testing region, and another software engineer (whoever has the most availability that day) will do any necessary manual testing to certify the user story, similar to what quality engineers would do before. So... really not too different from before, just without the quality engineers. There's a little more to it than this around how we plan out what needs to be manually tested, but I won't go into all of those details.

My conclusions

After doing this and seeing the great results, I am really convinced that having quality engineers on sprint teams who automate, test, and certify developers' code just does not make sense in the vast majority of cases. This is a relic of waterfall and doesn't make sense with scrum. There may be other productive roles for quality engineers, possibly even on a sprint team, but that role should not be to test and certify all of the work coming from developers.

So, why is that my conclusion? And what can I share about how to execute a change like this? You can read the why, or skip straight to my recommendations for how to make this work.

Why does having developers test and certify make sense?


Putting quality entirely in the hands of the software engineers brings many benefits for developers and makes the team much more efficient. Here are the main reasons why this approach makes sense in sprints.

Better overall team cohesion

 

When everyone on the team shares the same developer role, there is a better sense of cohesion on the team. Here are some reasons why team cohesion improves:

Removing the wall between the coder and tester of each piece of work

 

When there are two different groups, one responsible for developing and another for testing, a wall exists. There is a "toss it over the wall" mentality when tasks move into test. Developers push code out without much consideration of testers' time and availability, and consider their part done once they've tossed it over the wall. They know testers are there to catch their bugs, and don't consider enough that testers are not perfect, or the time and rework it takes to fix bugs. Having developers test and certify each other's work as regular practice removes this wall, because the people they are handing their work to for testing are other developers who have their own development work to do.

Unified direction for all engineers on the team

 

When all engineers on the team report to the same group, there is a more unified direction for the team. There are no longer two different groups on the team with competing priorities from their management. Tough choices that teams need to make, like sticking to a definition of done and rolling over user stories when automation is not done, are easier for the team to make and the manager to assist with. Compromises and sacrifices that affect quality and/or automation are easier to make with everyone in the same group.

Better shared understanding of the work being done

 

Developers who write application code all have some base technical skills and understanding that quality engineers will not necessarily have - especially in the active area that the team is working on. This means that discussions among developers over the details of user stories tend to go more smoothly and quickly, and fewer explanations are needed. This will be discussed more in the section on efficiency gains.

More sharing of technical details of implementation and testing strategies

 

Sharing technical designs and test plans with the whole team leads to better designs and better test plans, and ensures the developer understands the testing strategy for their tasks. This can still be done with dedicated quality engineers on the team, but there is much less friction when the sharing happens between software engineers - there is a base technical knowledge that software engineers all have from building the application code, which quality engineers who do not build application code will not have.


More efficiency

 

When developers do testing and certification, there are many efficiency gains.

Reduced time spent on manual testing

 

The data was very clear: throughout the 7 months, dramatically less time was spent performing manual testing on both teams compared to when there were quality engineers, yet the same level of quality (or better) was maintained. This is a huge efficiency gain. I feel this is because developers have a better base understanding of the technical details of the product and are better able to streamline their manual testing to only what is necessary. A common complaint from the software engineers when quality engineers were on the team was that quality engineers were testing areas that were not necessary, and developers had to spend time explaining why testing wasn't needed there, or why filed bugs weren't relevant. This inefficiency was removed with developers doing the testing and certification.

Less time explaining technical details

 

A common complaint from developers who work with dedicated quality engineers who do not write application code is having to explain the technical details of their work. But when the testers are other developers, much less time is needed to explain technical details, since the other developers are also writing application code and have a base technical understanding of the code and product. Less time explaining = more efficiency. A counterargument is that the separation ensures better quality, but we did not see this as an issue.

Better utilization of everyone's time

 

Teams are rarely able to deliver functionality to testing at regular intervals throughout the sprint. Work tends to get delivered to test later in the sprint. If there are dedicated testers on the team, this leaves time early in the sprint where these testers are not fully utilized. Sometimes testers will work on the automation backlog in this time, but when certifying developers' work is their top priority it's hard to focus on and be effective at that backlog, since developers' work can drop on them at any time. With developers certifying each other's work, they are fully occupied with their own application code development during the early part of the sprint when little testing is needed.

More flexibility in sprints

 

When anyone on the team can test and certify a user story, there is much more flexibility. Several big user stories getting delivered to test late in the sprint is no longer as big of an issue since there are many more people who can test. A single tester taking time off, whether planned or unplanned, no longer causes big disruptions.


More and better automated tests

 

With developers writing automation, more gets written overall, and what is written is more strategic and more efficient to write.

Delivering automation in sprint with application code

 

Having a separate group work on automation after the application coding is complete makes it very difficult to deliver automation for new work in sprint. There is just not enough time at the end of the sprint for this. So either the automation is rolled over as a separate user story, or automation is abandoned for the new work, neither of which is good. With developers writing automation for their own work, it's a lot easier to deliver automation in sprint. Developers are able to write this automation alongside the application code, and can modify the application code as needed themselves to support automation instead of having to coordinate with others. This also helps developers write better application code. There is much less overhead and inefficiency when automation is completed with application functionality in the same sprint.
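To make this concrete, here is a minimal, hypothetical sketch (the function names and the renewal rule are invented for illustration, not taken from our codebase) of a developer adjusting application code to support automation - in this case, passing in "today" so a date-dependent rule can be unit tested in the same sprint:

```python
from datetime import date
from typing import Optional

# Hypothetical example: the rule accepts "today" as a parameter instead of
# reading the system clock directly, giving tests a seam to control the date.
def is_renewal_due(expiry: date, today: Optional[date] = None, grace_days: int = 14) -> bool:
    today = today or date.today()
    return (expiry - today).days <= grace_days

# Unit tests written in the same sprint as the application code (runnable with pytest).
def test_renewal_due_inside_grace_window():
    assert is_renewal_due(date(2019, 10, 20), today=date(2019, 10, 10))

def test_renewal_not_due_outside_grace_window():
    assert not is_renewal_due(date(2019, 12, 1), today=date(2019, 10, 10))
```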

Better and more strategic automation

 

Developers gain more skills by writing both API and UI automation themselves, and think more about how the features they are developing can be written to allow easier test automation. Better and more strategic automated tests get written, with higher value, because of the increased collaboration and discussion among developers about what automation will be written, and because developers review each other's automation code. Since automated tests are software - and pretty complex and hard software at that - developers think of different and innovative ways to test that might not be thought of if this were purely the responsibility of quality engineers who do not write application code.

Less overlap between different levels of automation

 

With the developer writing all automation for their user story - unit, integration, and end-to-end/UI - there shouldn't be much overlap in the test code. Integration tests will cover what unit tests cannot, and UI tests will cover the end-to-end flows that can't be tested through integration. The developer will also follow testing best practices and put as much at the unit test level as they can - if integration or UI tests aren't needed, they won't get written! This is more efficient than having a separate quality engineer write this automation - with two different people writing tests, there will inevitably be overlap. If you have a really good quality engineer automating a lot, they may be going overboard and overlapping with unit tests, or putting a lot into slower integration or UI tests that should be unit tests.
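As a rough sketch of what this split can look like (a hypothetical pricing example - none of these function names come from our product), the edge cases live at the unit level and the higher-level test only proves the pieces are wired together:

```python
# Hypothetical example: edge cases are covered once at the unit level; the
# higher-level test verifies the wiring only, so the levels don't overlap.

def discounted_price(price: float, loyalty_years: int) -> float:
    """Apply a 5% discount per loyalty year, capped at 25%."""
    discount = min(loyalty_years * 0.05, 0.25)
    return round(price * (1 - discount), 2)

def quote_order(prices: list, loyalty_years: int) -> float:
    """Service-level entry point: totals the discounted line items."""
    return round(sum(discounted_price(p, loyalty_years) for p in prices), 2)

# Unit tests: all the edge cases live here, where they are cheap to run.
def test_discount_is_capped_at_25_percent():
    assert discounted_price(100.0, 10) == 75.0

def test_no_loyalty_years_means_no_discount():
    assert discounted_price(100.0, 0) == 100.0

# Integration-style test: one happy path proving the wiring, without
# re-testing the cap or the zero-years case.
def test_quote_order_happy_path():
    assert quote_order([100.0, 50.0], 2) == 135.0
```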


Increased ownership and engagement from developers

 

With developers assuming testing and certification responsibilities, they will feel more ownership of what they are building, which leads to higher motivation and engagement.

Developers feeling more ownership of quality

 

When quality engineers own quality, developers naturally do not consider quality as much as they should. Once developers are testing and certifying others' functionality, they start feeling more ownership of quality in their own work, because they experience testing and certification of others' work firsthand. So when developers push things into test, testers find fewer issues overall, because the developers have been thinking about quality all along - both while coding and in their local testing. This leads to less overhead from development issues being filed, and higher overall quality.

Developers having more knowledge of what the entire team is working on

 

By testing/certifying others' work, developers get directly exposed to, on average, twice as much of the functionality the team is building as they would if they were "just" developing user stories. Because a developer can test anyone else's code, they're going to want to know and understand everything the rest of the team is doing. There are many side benefits to this - developers feel more engaged on the team, are able to help more with others' work, provide more input and suggestions for designs and test plans, and, for newer developers, get to learn a lot more about the product. All of these factors lead to a healthier and more motivated team.




Recommendations for best chance of success


I'd like to share some lessons we learned on the teams, and practices which I felt ensured that this was a successful change.

Maintain strict separation of responsibilities between the coder and tester of each piece of work

 

There are endless benefits to having a different person certify a developer's work, and endless downsides to not having someone else take a look at it. With developers testing and certifying, it could be tempting to have a developer certify their own work. Do not allow this to happen unless the whole team discusses it as a group and agrees that no certification beyond automation passing is needed. Quality will definitely suffer, and many of the gains from having developers test and certify will not happen, if developers are certifying their own work.

Review test plans with whole team prior to development starting

 

Ensure that test plans are written prior to development starting on a task, and that the whole team reviews these plans. This ensures that the best and most efficient test plans are made, and allows the whole team to feel ownership. This also helps ensure flexibility for who tests a story - if the whole team participated in the test plan review, the whole team will have some knowledge of the work. And it ensures that the developer knows what the test plan is - this will catch potential bugs and rework before they even start coding!

Anyone on the team should be able to test work (other than the developer who wrote it)

 

When anyone on the team can test and certify a user story, work is a lot easier to get to done. In standup, whoever is available can test something rather than waiting for a single person to be available. This flexibility opens up many possibilities that are hard when there are a limited number of testers - several large and small user stories can come into test at the end of the sprint and still get tested.


All developers should spend a similar amount of time on testing/certification

 

It would be very natural to have the developers who know the product best do the bulk of the testing and certification. And it would be natural to allow the developers who are more interested in testing, or more willing to do the testing work that no one else wants to do, to take the bulk of the testing. Don't let this happen. Many of the gains will not be realized if only certain people are doing the bulk of the testing. This may require a heavy-handed, top-down approach from leadership on the team to force the work to be distributed - but this must be done or else you'll lose many of the efficiency gains. Not to mention, you don't want any individual devs spending too much time doing manual testing, or they'll slowly transition to being quality engineers as their primary function. Having everyone do testing, even those who don't know the product too well (provided they pair up with or get guidance from someone who knows it better), or don't want to, is a great way to spread knowledge among the team!

Developers should discuss and plan what automated tests should get written for each story

 

Developers will have different ideas about what the automation should be - what depth each level of test should cover, and sometimes even whether automation makes sense at all. This should be worked out for each story/task prior to development starting, so the whole team (including the manager) can settle on what makes sense. I'd recommend doing this as part of technical design.

Leadership needs to be a strong voice of quality on the team

 

Some developers will adapt very well to these new responsibilities, but some will not and will need help to fully embrace the quality role. The rest of the team needs to be voices of quality here, but in particular, lead developers and the manager need to be strong voices of quality on the team. Stick to principles - make sure test plans are sufficient in test plan review, and make sure valuable automation gets written for all developers' work on the team. While the whole team owns quality, leadership needs to ensure that enough quality aspects are considered in test plans and in automation plans and execution. Some teams might be able to do this without a top-heavy approach, but at the end of the day leadership needs to ensure quality is adequately being planned for.

Consider edge cases and regression scenarios in test plans

 

Developers most likely will not have trouble adapting to testing the main scenarios for others' work - after all, developers have always tested these in their own work. However, they may not have much experience thinking through possible edge cases or regression scenarios. Ensure these are included in test plans - make sure the whole team is throwing out ideas in test case review for edge cases and regression areas to cover. If the test plan writer, or the entire dev team, doesn't know enough to determine which areas to test for regression issues, seek out those who do (devs on other teams, PMs, app support, etc.) before assuming regression testing is not necessary.

Be smart on the level of testing done

 

You want to ensure that the team maintains high quality - but be smart and efficient about the level of manual testing done. Avoid duplicate testing of areas within the sprint by grouping testing together where it makes sense. The team should come up with a strategy for repetitive testing (like cross-browser testing and localized content testing) that balances quality with efficiency. This repetitive testing is probably not necessary for every individual user story.

Involve product management to look at user stories

 

It helps to have product management take a look at work during the sprint. They are another set of eyes that can catch overall scenarios and look-and-feel issues that may get missed. We didn't have a strict process around this, but it might make sense to add one. Regardless of whether devs or quality engineers test, I think this is a good idea to ensure that what the team is building is what product management envisions.


Invest in good automated tests

 

Good automation - automation that has long-term value to ensure quality years later, does not have intermittent failures, and can be a source of learning for future team members - takes time to make. Make this investment. Developers who have not written much API and/or UI automation may not automatically embrace this mindset and may write quick, "just good enough" automation. As a team, ensure that the right level of automation is written for all work. This will most likely lead to some tough choices - having to roll over stories because not enough automation was written, adding prerequisite work to get some automation foundations in place, or seeing estimates for some user stories go up. These tough choices should be made, though, and not shied away from. The earlier you make the investment and stick to it, the more automatic this will become for all team members.
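One concrete source of intermittent failures worth designing out early is fixed sleeps in UI and API tests. Here is a minimal, framework-agnostic sketch of the alternative (the helper name and usage below are hypothetical, not tied to any specific tool we used):

```python
import time

# Hypothetical polling helper: instead of a fixed sleep, retry the check until
# it passes or a deadline is hit, which removes a common source of flakiness.
def wait_until(condition, timeout_s: float = 10.0, interval_s: float = 0.25) -> None:
    deadline = time.monotonic() + timeout_s
    while not condition():
        if time.monotonic() >= deadline:
            raise AssertionError(f"condition not met within {timeout_s:.1f}s")
        time.sleep(interval_s)

# Usage in a test (order_status is a stand-in for whatever the test checks):
#     wait_until(lambda: order_status(order_id) == "SHIPPED", timeout_s=30)
```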

Learn from quality issues

 

If there are quality issues on the team, look at them as a huge learning opportunity for everyone. Ensure that root causes are analyzed - what could have prevented the escaped issues? Are there safeguards that should be there? Is there enough sharing of test plans? Is the full team engaged in test plan review? Everyone on the team should participate in this, including product management, the scrum master, and the dev manager. Since developers are doing testing and certification, they will be more likely to participate in the discussion than they would be if there were a separate testing group.

Utilize standup to plan for the day to minimize context switching

 

Having to test others' work adds more things that developers need to do. To minimize context switching, it's best to plan the day out in standup - have the team talk through who will test what. Unless it's the last day or two of the sprint, or there is an urgent production issue to get tested, don't have someone drop what they're doing the moment a user story goes into test.

Allow the team time to adapt to these new responsibilities

 

Don't go into a big change like this assuming from day 1 that velocity and quality will stay the same. It will take some time for the team to adapt - every team will be different in how long they need. So for the first few sprints, be conservative in sprint commitments and pull more work in if the testing goes faster than expected and the team adapts quickly.
