False Positives & False Negatives In Software Automation Testing: What You Should Know
While performing software automation testing, there are two terms that nobody is ever fond of: false positives and false negatives. Both can disrupt our development process because they directly affect our test results.
Picture Credit: https://www.bugseng.com/content/what-are-costs-false-positives-and-false-negatives
From the above image, we can see that there are true negatives and true positives too, which indicate that the code has genuinely failed or passed the test. False positives and false negatives are quite the opposite, and in this post, we'll understand them in detail.
What Is a False Positive?
A false positive is a situation where a test reports a failure even though no error exists. This increases development cost because testers hunt for bugs that aren't there. A false positive can occur for various reasons, including incorrect test data, test cases that the automation tool in use doesn't support, or unstable changes.
What Is a False Negative?
A false negative is the stark opposite of a false positive: a situation where a test reports zero errors although, in reality, errors exist. A false negative is the worse situation because it gives us false confidence that our code is correct, so we might proceed without checking for bugs.
Besides, when the code is deployed, it may lead to issues like compromised data security, server failures, app crashes, features failing to work the way they were designed to, and an exposed, vulnerable system. Sounds terrible, doesn't it?
A false negative can also occur for reasons such as incorrect test data or a weak test environment. It can let bugs reach production, which erodes user trust and raises development costs.
Important Question: Why Do They Occur?
Picture Credit: http://simply-the-test.blogspot.com/2019/12/false-positives.html
Why a false positive or a false negative occurs is a question that most developers and testers grapple with. Imagine a situation where we're entering incorrect test data: this data could be something the system was designed to ignore, so it never reaches the product's internal code.
In such a case, the test may pass without ever exercising the code it was meant to test. If our test data itself is faulty, it will lead to faulty test results, so it's important to review our test cases as well as our test data before we automate them.
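As a sketch of this problem, the hypothetical `normalize_username` example below has one test whose input is rejected before the code under test ever runs, and one test whose data actually exercises it:

```python
def normalize_username(name):
    # Empty or whitespace-only input is rejected before normalization runs.
    if not name or not name.strip():
        raise ValueError("username required")
    return name.strip().lower()

def test_normalize_username_faulty_data():
    # Faulty test data: an empty string is rejected up front, so the
    # lowercasing logic is never exercised; the test "passes" anyway.
    try:
        normalize_username("")
    except ValueError:
        pass

def test_normalize_username_real_data():
    # Representative data actually drives the code path under test.
    assert normalize_username("  Alice ") == "alice"
```

The first test is green yet tells us nothing about the normalization logic; only the second one would catch a regression in it.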
Flaky and unstable tests can also lead to false negatives or false positives, because these tests pass or fail at random, so developers might fail to notice when there was a genuine bug or error.
Thus, whenever the code of a functionality or its elements changes, developers should make the corresponding changes in the related test cases too, to keep the results reliable and correct.
Apart from these issues, the test environment setup can also be a cause. When the environment is not controlled and isolated, third-party elements can tamper with the results and produce false positives and false negatives.
These issues take the longest to isolate, because here a third party is creating problems where none existed in the first place. This is why, whether we're a beginner or an experienced developer or tester, we should start with the basics: understand the automation environment before automating our tests.
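One common way to control the environment is to replace third-party calls with controlled fakes. The sketch below uses Python's `unittest.mock` to isolate a hypothetical `fetch_exchange_rate` function (the URL and function are assumptions for illustration) from the real network:

```python
from unittest import mock
import urllib.request

def fetch_exchange_rate(url="https://api.example.com/rate"):
    # Hypothetical code under test: it depends on a third-party service.
    with urllib.request.urlopen(url) as resp:
        return float(resp.read().decode())

def test_fetch_exchange_rate_isolated():
    # A controlled fake stands in for the network, so a third-party outage
    # or a changed remote response cannot distort this test's result.
    fake_resp = mock.MagicMock()
    fake_resp.read.return_value = b"1.25"
    fake_resp.__enter__.return_value = fake_resp
    with mock.patch("urllib.request.urlopen", return_value=fake_resp):
        assert fetch_exchange_rate() == 1.25
```

With the dependency faked out, the test verifies only our own parsing logic, which is exactly the part we can be held responsible for.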
Picture Credit: Sticky Minds
According to Sticky Minds, 23% of false positives occur due to test data issues and at least 16% occur due to test environment issues.
Best Practices For Avoiding False Negatives & False Positives
Write Better Test Cases: We have to pay the utmost attention while writing test cases, and to avoid either kind of false result, we have to create a solid test plan and testing environment. Before we write a test case, let's ask ourselves these questions: what part of the code are we going to test? In how many ways can that code fail? How do we catch it if something unexpected happens?
It's also important to maintain a log of all the changes and to review the test cases before we send them out for automation. Generic test cases serve no purpose; test cases should be specific to what is being tested, and appropriate errors should be reported for the corresponding failures.
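For instance, here is a hypothetical `validate_age` function with a test that checks for specific error types and messages, so a failure immediately tells us what went wrong rather than just that something failed:

```python
def validate_age(age):
    # Report a distinct, descriptive error for each distinct failure.
    if not isinstance(age, int):
        raise TypeError("age must be an integer")
    if age < 0 or age > 130:
        raise ValueError("age out of range")
    return age

def test_validate_age_reports_specific_errors():
    # Specific assertions on both the error type and its message.
    try:
        validate_age(-1)
        assert False, "expected ValueError for negative age"
    except ValueError as exc:
        assert "out of range" in str(exc)

    try:
        validate_age("42")
        assert False, "expected TypeError for non-integer age"
    except TypeError as exc:
        assert "integer" in str(exc)
```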
Write Better Unit Test Cases: While writing unit test cases, we have to write both positive and negative test cases (also famously known as happy-path and unhappy-path cases). If we don't write test cases for both paths, our tests won't be complete.
Allow us to explain with an example. If we're testing a form and we submit correct information, such as a name, password, and email address within the allowed length, it works as it should. But when we input incorrect data, the form should show an error message. A good unit testing scenario covers both of these paths for all the form elements.
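That scenario might be sketched in Python like this (the `validate_form` function and its validation rules are hypothetical, chosen just to show one happy-path and several unhappy-path tests):

```python
import re

def validate_form(name, email, password):
    """Return a list of error messages; an empty list means the form is valid."""
    errors = []
    if not (1 <= len(name) <= 50):
        errors.append("name must be 1-50 characters")
    if not re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", email):
        errors.append("invalid email address")
    if len(password) < 8:
        errors.append("password must be at least 8 characters")
    return errors

def test_form_happy_path():
    # Valid input: the form accepts it with no errors.
    assert validate_form("Alice", "alice@example.com", "s3cretpass") == []

def test_form_unhappy_paths():
    # Invalid input: each bad field produces its own error message.
    assert "invalid email address" in validate_form("Alice", "not-an-email", "s3cretpass")
    assert "password must be at least 8 characters" in validate_form("Alice", "alice@example.com", "short")
    assert "name must be 1-50 characters" in validate_form("", "alice@example.com", "s3cretpass")
```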
Randomization Of Test Cases: One way to catch false negatives is to randomize our test inputs. This means that while testing our code, we shouldn't hardcode the input variable; instead, we should generate appropriately randomized input data. Not clear? For example, let's say we have a function that computes a square root, and we have a bug in our code.
Our test gives the correct response for the square root of 25 but gives an incorrect response for other numbers. Our unit tests will never catch this bug if we always test the function with 25. A better way is to randomly pick a number, square it, pass the squared value to our function, and check that the function returns the original number.
Randomization Of Unit Test Order: If we've got components that are completely independent of each other, then we can run our unit tests in random order. This is useful for catching state-driven coupling, where state left behind by one test interferes with another test.
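A simple sketch of randomized ordering in Python (the three tests are hypothetical placeholders; plugins such as pytest-randomly provide this behavior out of the box for pytest suites):

```python
import random

# Hypothetical independent unit tests; each one sets up its own state.
def test_addition():
    assert 2 + 2 == 4

def test_uppercase():
    assert "bug".upper() == "BUG"

def test_list_reversal():
    assert list(reversed([1, 2, 3])) == [3, 2, 1]

def run_in_random_order(tests, seed=None):
    """Shuffle and run the tests; pass a seed so a failing order is reproducible."""
    rng = random.Random(seed)
    order = list(tests)
    rng.shuffle(order)
    for test in order:
        test()
    return [t.__name__ for t in order]
```

Recording the seed matters: when a shuffled run fails, we can replay the exact same order to isolate which test leaked state into which.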
Choose A Reliable Automation Environment: An automation testing environment can make or break our tests. We recommend picking reliable, trustworthy automation tools and being extra meticulous while setting up the test environment. A good automation tool combined with a good test plan can drastically reduce false test results.
Automation testing is also proven to be more cost- and time-effective. One way to reduce false positives and false negatives is to rigorously follow automation testing best practices, like creating quality test scripts, implementing continuous testing, and introducing reliable, well-planned QA practices. In fact, we've written an article on the best automation testing practices that we recommend you check out before you proceed.
Testsigma's cloud-based automation testing tool is trusted by several global organisations, and it lets you plan your tests in a better fashion. The tool comes equipped with a test management module that lets testers execute a solid test plan for maximum test coverage.
You can stay organised by assigning the right test case types to the test cases you want to run, and you can even set the right priorities based on your testing needs.