How to Estimate How Long Manual Testing Will Take
"How long will this take to test?" is a dreaded but important question that comes up constantly when planning releases and product updates. "How long will this take to manually test?" is even scarier. It can seem almost impossible to answer if you have no previous test suite runs to estimate against. Ideally, a product or project would require very little manual validation; in practice, there are situations where more is inescapable. To help with estimating the manual testing effort, I have created the following guide.
Step 1: Map Out the Product
The first step may seem obvious, but it is essential to gauging the level of testing effort: map out the product. You need to know your product and understand how it works in order to get a good idea of just how much testing needs to take place. You also need to know the maturity of the product and exactly where you are in the release cycle.
Questions you need to answer in order to focus your testing intentions:
- What should your current testing focus on? For example, functionality that is changing in this release versus functionality that has not changed.
- Where is the product in the development cycle?
- What is the desired roadmap for the project?
Manual testing will usually be done at the top level. Some things you may need to know in order to estimate testing efforts are:
- How the UI is supposed to look and work (visual and functional tests)
- How the product is supposed to run (performance tests)
- The capacity the product is supposed to be able to handle (stress tests)
- Possible security concerns that the product could have (security testing)
- Possible workflows specific to the product and how various bits of functionality interact with each other
Lower-level testing is usually automated because it is fairly simple to add to the product during development. Understanding these aspects of the product is most likely not needed to estimate manual effort, but it may help in understanding how the product works:
- The interaction of the various components of the product (integration tests, API tests)
- The various components of the product themselves (unit testing)
- The structure of the components of the product
Step 2: Plan the Testing
Once you have mapped out the product, it's time for you to start planning out the test cases. You have an idea of what the product is doing and how it may need to be tested, so put your ideas together into an action plan.
You need to understand the testing needs; these include unit tests, integration tests, API tests, functional tests, visual tests, security tests, performance/stress tests, product-level tests, and smoke tests. For each category, decide whether the testing will be manual, automated, or a mixture. This decision depends on your product and your team's expertise.
To decide what can be automated, you will need an idea of the current test coverage and how much coverage is needed. Ask yourself how the automation would be built and whether it would be simpler to test the task at hand manually. Put together a list or plan of exactly what types of testing need to be done and how much of each. For the automated side, you don't have to make an automation plan right away; simply knowing what automation can cover, and pointing it out, is enough at this stage.
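As a rough sketch, this categorization step can be captured in a simple structure. The category names and manual/automated/mixed assignments below are illustrative assumptions, not prescriptions from the article:

```python
# Hypothetical test-planning sketch: each testing category is tagged with
# an approach. These assignments are illustrative only; the right split
# depends on your product and your team's expertise.
test_plan = {
    "unit": "automated",
    "integration": "automated",
    "API": "automated",
    "functional": "mixed",
    "visual": "manual",
    "security": "mixed",
    "performance/stress": "automated",
    "smoke": "mixed",
}

# Only the manual and mixed categories feed into the manual-effort estimate.
manual_scope = [cat for cat, approach in test_plan.items()
                if approach in ("manual", "mixed")]
print(manual_scope)
```

Writing the split down this way makes it easy to see at a glance which categories will consume manual testing time and which are already handled by automation.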
Step 3: Estimate the Total Time to Test
For estimating manual testing effort, you need an idea of the scope of the project as well as the expertise of the people who will be driving the manual testing (those familiar with the product and process versus those completely new to it). Completely new testers may be slower: they may spend more time looking up issues to see whether they have already been reported, which inflates the estimate. On the other hand, they may be more diligent and notice issues that familiar testers would look past, explain away, or assume were already logged without checking.
From the information you have gathered in Step 1 and identifying what will be automated, you can create a test plan of attack. My suggestion would be to go through each feature that needs to be tested (including workflows and functionality that may be linked to other functionality) and assign an effort score of 1-10.
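A minimal sketch of the scoring step follows; the feature names and scores are hypothetical placeholders, not the article's actual product map:

```python
# Hypothetical per-feature effort scores on the article's 1-10 scale.
# Feature names here are illustrative placeholders, not real product features.
effort_scores = {
    "login workflow": 3,
    "document upload": 5,
    "search": 4,
    "export to PDF": 7,
}

# Sanity-check that every score stays within the 1-10 scale.
assert all(1 <= score <= 10 for score in effort_scores.values())
```

Keeping the scores in one place like this makes the later effort-to-hours conversion a simple lookup rather than a fresh judgment call per feature.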
The following table outlines some of the functionality that would need to be tested in our PrizmDoc Editor product:
Once you have an effort score on everything, you can make an effort-to-hours table. This does not have to be exact and can be adjusted per piece of functionality, because effort does not always correlate directly to hours; it is simply a helpful starting point. Below is an example of the effort-to-hours table:
Now you can map out the approximate time it will take to test each part of the product. As you can see in the table below, some of the effort-to-hours estimations match the table above, while others are smaller or larger based on personal knowledge of the product and how long such an effort could take. There is no secret formula here; it is a gut feeling that you can base on your current knowledge of the system, historical data (if you have it from previous testing efforts), and the effort-to-hours table you created.
Finally, total up your approximations to get a grand estimate of the person-hours it will take to manually test the product.
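The conversion and totaling steps can be sketched in a few lines. The effort-to-hours values and feature scores below are illustrative assumptions, not the article's actual tables:

```python
# Illustrative effort-to-hours table (assumed values, not the article's).
effort_to_hours = {1: 0.5, 2: 1, 3: 2, 4: 3, 5: 4,
                   6: 6, 7: 8, 8: 12, 9: 16, 10: 24}

# Hypothetical per-feature effort scores from the scoring step.
effort_scores = {"login workflow": 3, "document upload": 5, "export to PDF": 7}

# Per-feature hours: start from the table lookup, then adjust by hand
# wherever gut feeling or historical data says otherwise.
feature_hours = {name: effort_to_hours[score]
                 for name, score in effort_scores.items()}

# Grand total of person-hours for the manual pass.
total_hours = sum(feature_hours.values())
print(total_hours)  # 2 + 4 + 8 = 14 person-hours
```

The dictionary of per-feature hours is worth keeping around: it doubles as the historical data the article recommends collecting for the next release's estimate.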
Note: As with all time estimates in software engineering, buffer time should be applied. What you have estimated up to this point is your "ScottyTime"; what you want to communicate is your "KirkTime": KirkTime = ScottyTime * 3. This ensures there is time for any road bumps or extra exploration that may be needed.
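The buffer rule from the note is simple enough to express directly (the multiplier of 3 comes straight from the article; everything else is just scaffolding):

```python
# Buffer multiplier from the note: KirkTime = ScottyTime * 3.
KIRK_FACTOR = 3

def kirk_time(scotty_hours: float) -> float:
    """Convert an internal estimate into the buffered figure you communicate."""
    return scotty_hours * KIRK_FACTOR

print(kirk_time(14.0))  # 42.0 person-hours to report
```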
In summary, estimating testing effort is not the easiest task; time estimates are variable by nature and difficult to reduce to an exact science. The good news is that with this guide, and by collecting data on how long testing takes each time you test the same product, answering the dreaded question becomes easier. You will no longer have to fear being asked, "How long will this take to test?"
Kyla Kolb, Software Development Engineer in Test (SDET) III, has been with Accusoft since November 2017. She is a graduate of the University of South Florida with a Bachelor of Science degree in Computer Science, with a focus on cybersecurity and mathematics. Though she is early in her career, she prides herself on advocating for quality, best testing practices, and test automation. She has spent her time at Accusoft on the PrizmDoc product, with her current main role in PrizmDoc Editor. In her spare time, Kyla enjoys crafting, costuming, woodworking, and weight lifting.