When To Stop Testing

When to Stop Testing – Exit Criteria

The phases of the Software Testing Lifecycle (STLC) start and end at different times in the project. Some testing activities run in parallel with development, while others can only begin once development is complete.

Team members usually know when it’s time to start testing, but the difficult question is knowing when to stop. There is no straightforward formula that designates when testing should end, yet testing cannot continue forever, so how do we know when it’s time to stop?

Do we stop after finding all bugs?

In reality, we cannot make software defect-free. Still, we aim to deliver the best quality software possible, one that satisfies the requirements of its end users. Testing begins with the intention of creating better software, so does that mean we should continue testing until we have identified every bug?

Before we attempt to answer, let’s try this: can we know how many bugs exist in a system? No one ever does. Even after a release, there is no guarantee that the software is bug-free. During testing, every area and user-facing aspect of the software undergoes a thorough examination, and that examination builds confidence in the system. Good test coverage is what matters the most.

Does the count of bugs matter?

The core objective of software testing is to discover bugs. Bugs play a critical role in the process; every failed test case has an associated bug.

Once testing starts, the bug count keeps climbing. Does that mean testing must stop after hitting a certain number of bugs? On the contrary, testing should never aim for a predefined bug count. Besides, a steadily rising bug count can be a sign of a bigger problem, which might be a solid indicator to stop testing entirely.

Continuous monitoring helps the team take the right action at the right time. Testing must stop if the system is too buggy for further testing to be meaningful. As a simple example, suppose you need to test the same web page on desktop and mobile. If you encounter numerous bugs on desktop while testing, you can halt the mobile effort until a fix is released.
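
As a rough illustration, here is a minimal Python sketch of that monitoring decision. The threshold, platform names, and counts are hypothetical and chosen only to show the idea of pausing work on a platform once too many blockers pile up; real projects would pull these numbers from their bug tracker.

```python
# Hypothetical sketch: pause testing on a platform when the number of
# open blocker bugs crosses an agreed threshold. Values are illustrative.

BLOCKER_THRESHOLD = 5  # agreed with the team during planning (assumption)

def should_pause_testing(open_blockers_by_platform, platform):
    """Return True if testing on the given platform should be paused."""
    return open_blockers_by_platform.get(platform, 0) >= BLOCKER_THRESHOLD

# Example data, not pulled from any real tracker.
open_blockers = {"desktop": 7, "mobile": 1}

for platform in open_blockers:
    if should_pause_testing(open_blockers, platform):
        print(f"Pause testing on {platform} until fixes are released.")
    else:
        print(f"Continue testing on {platform}.")
```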

Should we always follow Exit Criteria?

Exit criteria determine when a particular activity can be considered complete. They are defined during test planning and strategy, and they help the team understand when to stop testing. Some of the standard exit criteria are listed below (a minimal sketch of codifying them follows the list):

  1. Executing all planned test cases – This is one scenario in which testing can be considered complete. The team works through every planned test case and executes it during the testing cycle. If a problem prevents progress while executing a test case, the team marks it as ‘failed’ if it’s a bug or ‘skipped’ for any other reason, recording full details either way.
  2. Retesting and closing all showstopper/blocker bugs – Closing all critical bugs is necessary before any testing team can provide sign-off. The team must ensure that all blocker and critical bugs are addressed, fixed, and retested.
  3. Creation of testing documents – As part of the formal testing cycle, the team creates several testing artifacts. The list of required documents is specified in the exit criteria during the planning stage. Once the team completes all the listed documents, it can mark testing as done.
  4. Hitting the release deadline – When creating the project timeline, each activity is considered and its hours estimated accordingly. Even so, software releases get delayed by unforeseen circumstances. Testing is a time-dependent activity, and a reduced testing window is common. When the deadline draws near and there is not enough time left to work through all the planned test cases, the team settles on a high-level testing path that covers the most critical functionality.
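
As a rough illustration of how such criteria can be codified, the following Python sketch checks a few of the items above against hypothetical numbers. The field names and values are assumptions made for the example; in practice they would come from the team’s test management and bug tracking tools.

```python
# Hypothetical sketch: exit criteria expressed as simple, checkable rules.
# The metric names and values below are illustrative only.

test_run = {
    "planned_cases": 120,
    "executed_cases": 120,        # every planned case executed (passed/failed/skipped)
    "open_blockers": 0,           # all showstopper bugs closed and retested
    "documents_done": {"test summary report", "traceability matrix"},
    "documents_required": {"test summary report", "traceability matrix"},
}

def exit_criteria_met(run):
    """Return True only if every agreed exit criterion holds."""
    return (
        run["executed_cases"] >= run["planned_cases"]
        and run["open_blockers"] == 0
        and run["documents_required"] <= run["documents_done"]  # subset check
    )

print("Stop testing" if exit_criteria_met(test_run) else "Keep testing")
```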


How do we know when to stop testing with no Exit Criteria?

With the rise of startups, the need for one-time testing engagements is growing, where the task is simply to get the software tested and the bugs reported. These organizations do not follow strict processes and usually skip documenting exit criteria.

With no exit criteria available, it is essential to have a conversation upfront and get clarity on the deliverables and expectations. The following factors help determine when to stop testing.

  1. Understand the client’s requirements: do they need only a bug report, or other testing documents such as test cases as well?
  2. Prepare a list of the types of testing needed.
  3. Ask for a list of their top browsers and devices. Testing on every browser and device is nearly impossible, so an agreed list is always handy and helps determine the scope (a sample test matrix is sketched after this list).
  4. Understand the timeline.

Testing a one-time project means going through every piece of functionality and running all the agreed types of testing, such as performance and security, on all the listed browsers and devices. Testing concludes with the submission of a report covering all the findings. In such cases, closing or marking testing as finished is generally a mutual decision.

Conclusion

There is no such thing as a hard stop to testing. Although some believe it is a never-ending process, we can all agree that testing must be marked as complete before a software rollout. However, the criteria for considering testing complete vary across projects.

Priya Rani
Author

An Enthusiastic QA Expert who loves to share knowledge and experience through blogging.