10 Common Testing Terms Explained
Like any professional discipline, software testing requires you to learn a whole vocabulary of terms with distinct meanings. Whether you’re a newbie or a seasoned professional, knowing how to speak the language will help you communicate with both fellow testers and non-tester colleagues.
In this article we’ll cover some commonly used terms in software testing and explain what they mean. We’ll also provide some links to further content to help you understand them better.
Of course, some of these terms are common to testing in other domains – pharmacy, engineering and such – and maybe they’ll help you communicate with people from those fields too!
So, let’s start with the first one…
1. Acceptance Testing
Acceptance testing refers to the formal testing process that usually takes place near the end of a project life cycle. It is intended to check that the finished system will be accepted by whoever commissioned it or whoever will be using it. The main goal is to establish confidence in the product and to validate the development work done.
Although it may be assumed that the customer or user tests the product themselves, this depends on the product and how the project is managed. Sometimes an independent third-party will carry out the tests to ensure impartiality.
Acceptance tests are usually black-box in nature, since the tests should be carried out from the perspective of an end-user. Some related terms you might hear are “alpha” and “beta” testing. Alpha testing refers to acceptance testing that takes place within the development company or department, and beta testing is testing the product on external users.
2. Stress Testing
During stress testing, the target system is put under an exaggerated amount of operational strain. The aim is to verify the system’s stability and confirm that it can stand up to the most strenuous circumstances it will face in the wild.
In the case of a website, one may use JMeter to simulate thousands of concurrent visitors; to test a new graphics card or processor, there are a variety of tools one can use to push it to its limits.
Stress testing takes place while the system is still in development, as it’s best to identify bottlenecks and instabilities as soon as possible. The actual tests may be carried out after the implementation of a certain feature or development cycle, but the system must be in a stable state for the results to be meaningful.
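To make the idea concrete, here is a minimal stress-test sketch in Python. The `Counter` class is a hypothetical stand-in for a real system under test, and the thread pool plays the role a load tool like JMeter would for a website:

```python
# Minimal stress-test sketch: hammer a toy in-memory counter with many
# concurrent workers and check that no updates are lost under load.
# Counter and the worker counts are illustrative, not a real product.
import threading
from concurrent.futures import ThreadPoolExecutor


class Counter:
    """A toy 'system under test': a lock-protected counter."""

    def __init__(self):
        self._value = 0
        self._lock = threading.Lock()

    def increment(self):
        with self._lock:
            self._value += 1

    @property
    def value(self):
        return self._value


def stress(counter, workers=50, calls_per_worker=200):
    """Simulate many concurrent clients, like a load tool's virtual users."""
    def client():
        for _ in range(calls_per_worker):
            counter.increment()

    with ThreadPoolExecutor(max_workers=workers) as pool:
        for _ in range(workers):
            pool.submit(client)
    return counter.value


if __name__ == "__main__":
    c = Counter()
    print(stress(c))  # 50 * 200 = 10000 if no updates were lost
```

A real stress test would also measure latency and resource usage while the load is applied, not just correctness of the final state.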
3. Maintenance Testing
Even the best projects can experience post-release bugs, which must be resolved by patches, updates and new versions. This is where maintenance testing comes in, which is concerned with testing applied changes and their impact on the system.
Maintenance tests are performed either to identify and diagnose problems in a system, or to confirm that the fixes put in place have worked without destabilizing the rest of the system. In the latter case, the testing comes in two stages: first, the modified features are tested directly to make sure the bug is eliminated, and then regression testing is carried out to ensure the rest of the system hasn’t been adversely affected by the changes.
It is important that the project is developed with maintenance testing in mind, with functions and objects that can still easily be tested even after deployment.
4. Regression Testing
When a new feature or bug fix is implemented in a complex system, there’s a good chance that the change will have unforeseen consequences for the rest of the system. These effects may not present themselves immediately, since they might only apply in edge-case situations. This can create a nightmare scenario for developers, where fear of damaging other areas of the system prevents them from effectively fixing bugs. Running regression tests after changes are made helps to reduce this problem.
As the name suggests, regression testing involves re-running past test cases to ensure established features still work correctly. A big part of regression test planning is figuring out the minimum number of test cases that must be run (since running them takes time) to judge that the system is stable.
As you can imagine, this strategy depends a lot on having a well organized library of test cases. They must encompass not only the correct working features of the system, but also have cases for any bugs raised and fixed, since further changes to the system may cause previous bugs to reappear. If you’re looking for a good tool to organize your test cases, check out our trooTest Test Management product – grab a free account and give it a whirl!
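As a sketch of what such a library can look like, here is a small regression suite in Python’s unittest. The `normalize_name` function and the bug number are hypothetical; the point is that each fixed bug keeps its own pinned test case so the defect cannot silently reappear:

```python
# Sketch of a regression suite: alongside tests for established
# behaviour, each fixed bug gets its own test so it cannot reappear.
# normalize_name and "bug #42" are hypothetical examples.
import unittest


def normalize_name(name):
    # Hypothetical bug #42: earlier versions crashed on empty input.
    if not name:
        return ""
    return " ".join(part.capitalize() for part in name.split())


class RegressionSuite(unittest.TestCase):
    def test_established_behaviour(self):
        # Covers a correct, long-standing feature of the system.
        self.assertEqual(normalize_name("ada  lovelace"), "Ada Lovelace")

    def test_bug_42_empty_input(self):
        # Pinned to a previously raised-and-fixed bug.
        self.assertEqual(normalize_name(""), "")
```

You would typically run a suite like this with `python -m unittest` after every change to the system.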
5. Black-Box Testing
Unlike regression or stress testing, the term black-box testing is less a testing method and more a description of the level at which a test operates. Black-box testing, also known as functional testing, describes any test concerned with the user-facing functionality of the system. Little attention is paid to the inner workings of the application; as the name suggests, the system is a “black box” that we cannot see into.
Black-box tests are often derived from external descriptions of the system. This works especially well with practices like Behaviour Driven Development, which has a strong focus on user behaviour and top-down planning. Black-box tests are often manual in nature, but they don’t have to be; tools like Selenium allow you to simulate front-end user behaviour on websites, which serves the purpose of testing the user-facing side of the system.
The aforementioned Acceptance Testing often consists mainly of black-box tests too, since those tests are run from the user’s point of view. This is a good example of how black-box testing is not a testing method in itself, but an adjective that describes other testing methods.
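Here is a tiny illustration of the black-box stance in Python. The `slugify` function is a hypothetical system under test; notice that the test below is derived purely from its documented behaviour (“slugs are lowercase, words joined by hyphens”), never from its source:

```python
# Black-box sketch: the test knows only the external specification,
# not the implementation. slugify is a hypothetical system under test.
import re


def slugify(title):
    # Implementation details are irrelevant to the black-box test below.
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")


def test_slugify_from_the_spec():
    # Cases taken from the documented behaviour, not the code.
    assert slugify("Hello, World!") == "hello-world"
    assert slugify("  Testing 101  ") == "testing-101"
```

If the implementation were rewritten from scratch, this test would remain valid unchanged, which is the hallmark of a black-box test.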
6. White-Box Testing
White-box testing is the opposite of black-box testing. It ignores the user-facing side and focuses on testing the inner workings of the system’s code for bugs or areas in need of optimization. It’s also known as clear-box testing because the system – the box – can be seen into and investigated. Like black-box testing, white-box testing is a level of testing rather than a process itself.
To conduct white-box testing, the tester must be reasonably well-versed in the programming languages and techniques used by the system developers, and must know the structure of the application. Automated and Unit tests play a key role in this, with large systems requiring whole swathes of unit tests to verify the integrity of each function.
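To contrast with the black-box example, here is a white-box sketch in Python. The tester has read the code of the hypothetical `shipping_cost` function and deliberately chooses one input per internal branch, aiming for full branch coverage:

```python
# White-box sketch: tests are designed from knowledge of the code's
# internal branches. shipping_cost is a hypothetical function whose
# structure is known to the tester.
def shipping_cost(weight_kg):
    if weight_kg <= 0:
        raise ValueError("weight must be positive")
    if weight_kg < 2:       # branch 1: light parcel
        return 5.0
    if weight_kg < 10:      # branch 2: medium parcel
        return 9.0
    return 9.0 + (weight_kg - 10) * 0.5  # branch 3: heavy parcel


# One input chosen per branch, guided by the code's structure:
assert shipping_cost(1) == 5.0
assert shipping_cost(5) == 9.0
assert shipping_cost(14) == 11.0

# The error branch is covered too:
try:
    shipping_cost(0)
except ValueError:
    pass  # expected
```

A black-box tester would not know these boundaries exist; the white-box tester picks 1, 5 and 14 precisely because the code branches at 2 and 10.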
7. Unit Testing
Unit testing, also known as Component Testing, involves writing tests that can be executed directly against a system’s functionality. These are test scripts that plug straight into the system’s code and exercise it with different inputs, queries and function calls. The main benefit is that they unambiguously assess the structure and quality of the code and summarise it in a quantifiable way.
A classic example of unit testing is JUnit, a unit testing framework for testing Java applications. The tester zeroes in on a particular function in the system, then writes a script which utilizes the function. The script also has some predefined expected output, which it then compares with the actual output of the function. If the expected and actual outputs match, the test passes; if they don’t match, the test fails.
Since unit testing is automated and returns clear pass/fail results, it’s a great way to easily assess the stability of a system. If possible, the system should have a unit test for as many functions as possible. That way, if an error is reported, the unit tests can identify exactly which functions in the code are failing.
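The expected-versus-actual mechanics described above can be sketched with Python’s unittest, an analogue of JUnit for Python code. The `add_tax` function is a hypothetical function under test:

```python
# Unit-test sketch using Python's unittest, analogous to JUnit.
# add_tax is a hypothetical function being zeroed in on.
import unittest


def add_tax(price, rate=0.2):
    return round(price * (1 + rate), 2)


class AddTaxTest(unittest.TestCase):
    def test_standard_rate(self):
        expected = 12.0           # predefined expected output
        actual = add_tax(10.0)    # actual output of the function
        self.assertEqual(expected, actual)  # pass if they match, fail if not
```

Running this with `python -m unittest` reports a clear pass or fail, which is exactly the quantifiable signal the article describes.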
8. Automated Testing
Automated testing means having software execute your tests rather than a person working through them by hand. Automated tests have obvious benefits: they are typically faster than manual tests, they can be run outside of office hours, they are not susceptible to human problems like boredom, and they can be compiled into suites so that batches of them run consecutively.
Of course, this comes at a price, usually paid in extra development work. Automated tests often need hooks (HTML ids, classes, etc.) to target, and if you’re running an automated test against a code function, the function must produce some measurable output that can be compared with the expected output.
There are some great tools for automated testing, and they should definitely be part of your testing strategy, but there are jobs and situations where their inflexibility becomes an issue. Because of this, manual testing is, for the moment, still an indispensable part of software testing.
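The “compiled into suites” idea can be sketched in Python by collecting test cases into one suite and running them as a batch, as an out-of-hours job might. Both test classes and the `valid_login` function are illustrative placeholders:

```python
# Sketch of batching automated tests into a suite and running them
# together. valid_login and both test classes are hypothetical.
import unittest


def valid_login(user, password):
    # Stand-in for the system's real login check.
    return bool(user) and len(password) >= 8


class LoginTests(unittest.TestCase):
    def test_rejects_short_password(self):
        self.assertFalse(valid_login("alice", "short"))


class SignupTests(unittest.TestCase):
    def test_accepts_strong_password(self):
        self.assertTrue(valid_login("alice", "longenough"))


# Compile the cases into one suite and run the whole batch at once.
loader = unittest.TestLoader()
suite = unittest.TestSuite([
    loader.loadTestsFromTestCase(LoginTests),
    loader.loadTestsFromTestCase(SignupTests),
])
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

The returned `result` object reports how many tests ran and whether any failed, which is what a scheduled job would check before flagging a broken build.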
9. Compatibility Testing
If you’re building a system that needs to run on various different software or hardware platforms, compatibility testing will be a big part of your testing strategy. Compatibility tests ensure that a system runs correctly in all the required environments, whether that’s browsers, operating systems or hardware.
Compatibility testing is most likely to be black-box and manual in nature. For example, a website that needs to work on all the most recent browsers will need to undergo thorough compatibility testing. Speaking of which, a previous trooTest article compares the various ways to test a website across different browsers and operating systems.
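Even when compatibility testing is largely manual, the matrix of required environments is worth keeping in code. Here is a minimal Python sketch; the environment list and the `page_renders_correctly` check are hypothetical placeholders for real browser/OS launches (for example via a tool like Selenium):

```python
# Compatibility-matrix sketch: run the same check across every
# required environment. ENVIRONMENTS and page_renders_correctly
# are hypothetical stand-ins for real browser/OS combinations.
ENVIRONMENTS = [
    ("Firefox", "Windows"),
    ("Chrome", "macOS"),
    ("Safari", "iOS"),
]


def page_renders_correctly(browser, os):
    # Stand-in for launching the real environment and inspecting the page.
    return True


results = {(b, o): page_renders_correctly(b, o) for b, o in ENVIRONMENTS}
failures = [env for env, ok in results.items() if not ok]
assert not failures, f"Incompatible environments: {failures}"
```

Keeping the matrix explicit like this makes it obvious when a new browser or OS version needs to be added to the test plan.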
10. Exhaustive Testing
Exhaustive testing means testing all possible input and action permutations across a system. For example, if there is a text field which accepts exactly three alphabetical characters, there would be 17576 possible combinations (26 * 26 * 26) of lowercase letters alone.
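A quick sanity check of that combinatorics in Python (the three-character text field is hypothetical):

```python
# Enumerate every three-letter input for the hypothetical text field
# and count them, confirming the arithmetic in the text.
import itertools
import string

letters = string.ascii_lowercase  # 26 letters

three_letter_inputs = list(itertools.product(letters, repeat=3))
print(len(three_letter_inputs))   # 17576, i.e. 26 ** 3

# If the field forbade repeated characters, the count would shrink:
non_repeating = list(itertools.permutations(letters, 3))
print(len(non_repeating))         # 15600, i.e. 26 * 25 * 24

# And allowing one- and two-letter inputs as well makes it grow:
up_to_three = sum(26 ** n for n in range(1, 4))
print(up_to_three)                # 18278
```

Even for this trivial field the input space is in the tens of thousands, which is why exhaustive testing is rarely practical for real systems.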
It is also very rare for a system to pass a truly exhaustive test, since there will always be a few things that fail. However, these are likely to be edge-case scenarios that are very unlikely to occur in practice. Systems that do pass exhaustive testing tend to be very small and have very few functions. And even then, they need to be ridiculously simple; as one of the linked articles shows, a function that adds two 32-bit numbers together would take almost 600 million years to exhaustively test!
This article has been taken from the original at http://trootest.com/blog/10-common-testing-terms-explained