Let's examine testing using JUnit, a popular annotation-based unit testing framework, but first we'll take a look at unit testing concepts and practices.
The practice of unit testing is to automate the testing of every important small unit of code, especially after changes, to ensure that intended functionality is not broken. The portion of the code exercised by a set of tests is called its test coverage. We will call the code being tested the target code and the code we create to verify its operation the test code.
Annotations may be used to target specific functionality in unit tests.
The primary goal is to ensure that newly written code functions properly, ideally under all scenarios. We also want to detect any regressions created by new code we write or changes made to existing code. If our test code is correctly written the first time, these tests can alert us to changes that break existing functionality. Another important purpose of unit tests is to automate tedious and repetitive testing. Detailed unit tests will allow us to use our precious test resources (either ourselves or dedicated testers) to focus on cases that are hard to automate such as look and feel issues in a graphical user interface or complicated workflows. Lastly, unit tests allow us to interact with our target code as a user would. This could give us an idea of how easy the target code is to interact with or if the data returned is easily consumable.
We want to verify that functionality works as intended and that implemented features perform as expected, so basic functionality should be checked first. Most testing focuses on the expected path of execution using common well-defined data without any unusual cases or errors, sometimes called the "happy path" [@Meszaros2011]. Other areas to focus on are branches, error conditions, and edge cases.
Edge cases refer to parameters at the edges of their boundaries, such as empty arrays, zero-length strings, or numeric values at the end of their ranges (e.g. zero or the largest positive/negative number for a numeric datatype). We test edge cases because these are often not on the happy path and therefore not always handled properly. Take for example this simple max method implemented in Java. Note the subtle bug when a zero-length array is passed in.
This bug might not have been found without adding tests calling max()
with an empty array and a null array.
public int max(Integer[] arr) {
    int currMax = arr[0];
    for (int i : arr) {
        if (i > currMax) {
            currMax = i;
        }
    }
    return currMax;
}
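To see how such tests would look, here is a small standalone sketch that exercises the buggy max() on the happy path and on both edge cases. The class name EdgeCaseDemo is ours, and plain Java is used (rather than JUnit) so the file runs on its own; the buggy method is reproduced so the example is self-contained.

```java
// EdgeCaseDemo is a hypothetical name; it reproduces the buggy max()
// so this file compiles and runs without JUnit.
public class EdgeCaseDemo {

    public static int max(Integer[] arr) {
        int currMax = arr[0];  // the bug: fails when arr is empty or null
        for (int i : arr) {
            if (i > currMax) {
                currMax = i;
            }
        }
        return currMax;
    }

    public static void main(String[] args) {
        // Happy path: behaves as expected.
        System.out.println(max(new Integer[]{3, 7, 5}));  // prints 7

        // Edge case: an empty array indexes past the end.
        try {
            max(new Integer[]{});
        } catch (ArrayIndexOutOfBoundsException e) {
            System.out.println("empty array: " + e.getClass().getSimpleName());
        }

        // Edge case: a null array cannot be indexed at all.
        try {
            max(null);
        } catch (NullPointerException e) {
            System.out.println("null array: " + e.getClass().getSimpleName());
        }
    }
}
```

A test-first practitioner would write the empty-array and null-array cases as failing tests first, then decide what max() should do (throw a documented exception, or return an Optional, for example).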
There is a special concern about edge cases, as many security breaches have been due to exploiting boundary conditions that were thought to be impossible in normal usage. Some examples of things that are "impossible" yet found in the wild include URLs that are thousands of characters long [@Cert821156]; internet communication packets where the advertised size is different than the actual size; and "invalid" documents having malformed internal references or incorrect syntax [@OwaspXmlCheatSheet, @Bugtraq20080228]. Unusual situations and conditions like these, and others that could lead to errors, should be tested as well. Additional tests should check how code behaves with empty or null inputs and other "nonsense" data and impossible conditions.
In the Clean Code handbook, @Martin2008 offers the acronym FIRST to illustrate the values inherent in unit testing:
- Fast: many hundreds or thousands per second
- Isolated: failure reasons become obvious
- Repeatable: run in any order, at any time
- Self-validating: no manual evaluation required
- Timely: written before the code
You may google Brett Schuchert or Tim Ottinger to find more details about the meaning of this acronym.
Timely in the FIRST acronym suggests a popular and controversial approach to software development: creating tests as the first and motivating activity in software development. You can tell that it is controversial by reading bloggers like David Heinemeier Hansson and James Coplien. The main issue Hansson takes with it seems to be that it is inadvertently used as a design tool and leads to unintended monstrosities of design. Like many good ideas in software development, it may work best if practiced in moderation and where it is obviously needed. Test-driven development necessarily treats tests as design building blocks. If testing is a form of design, it must be subject to all the problems faced in making design choices. Design choices don't simply vanish with a test-driven approach.
- When following a test-first approach, verify that newly created tests fail before writing the code in the target class. If a test passes with no code written, then the new feature already exists or the test is written incorrectly.
- Unit tests should be independent of each other, and no two tests should test the same thing.
- Tests can expose designs that are resistant to testing; such designs may be harder and more expensive to debug than more transparent designs.
- A test failure may reveal a problem with the test code instead of the target code. An existing test failing could also indicate a regression: a new error introduced by the most recent change.
- Unit tests should form a kind of documentation about what the code is supposed to do.
- Unit tests should limit time wasted on vestigial features, since the most expedient response to a test is to focus on passing it and nothing else.
By focusing on the scope of a given test, that is, its interactions with classes and methods in the target code, we can begin to place tests into different broad categories [@Fowler2012; @Vocke2018].
\begin{figure}[htpb] \begin{center} \includegraphics[width=4in]{fiTestingPyramid.png} \end{center} \caption{The testing pyramid}\label{fiTestingPyramid.png} \end{figure}
The simplest tests to think about are Unit Tests. Unit tests are low-level tests that have no interactions with external resources or processes such as databases, web servers, or user interactions. Because there are no external systems requiring setup and teardown time or input-output overhead, these tests run extremely fast. It is not unusual for a unit test to run in less than 10 msec, allowing hundreds or thousands of unit tests to run in less than a minute. Unit tests are often used when verifying the functionality of methods in a single class.
Functional Tests verify interactions with a system when external dependencies such as a data store or files are included. These tests often span interactions between different classes in a package or different levels of a single system. Testing the correctness of a system's features is often done with functional tests.
Often we want to verify interactions that a user would have with a system, or interactions between two or more "large" software systems. Unit and functional tests usually simulate or create a stand-in for the external connected systems. When we use these actual external systems in our tests, we are performing integration testing. Integration tests take longer to execute than unit and functional tests: the external systems need to be started before and stopped after the test runs, and any scenario setup needs to take place before a test runs. Integration tests are often used to test a system's external interface, such as an exposed REST API. Integration testing is also used to ensure that individual applications or services in a complex set of systems all function properly with respect to message formats, error handling, input and output formats, and behavior with multiple users.
There are cases where automated methods cannot be easily applied. This could be the case for a system that interfaces with other external systems, or one whose primary mode of interaction is a graphical user interface. Another scenario is when real-world input is required, such as a payment processing platform needing physical card swipe transactions, or a warehouse tracking application using NFC (near-field communication) tags. In manual testing, a person performs the steps in a test, observes results, and records outcomes. Because of the direct human interaction with the system, only the most critical functionality should be manually tested. The functionality of graphical user interfaces is frequently tested manually.
Note that "Unit Test" has multiple meanings in this chapter. The concepts discussed in this chapter fall under "unit testing", the techniques and methodologies of testing units of software. We use the classification "Unit Test" when we want to contrast these small, resource-independent test units with Functional, Integration, and Manual Tests.
JUnit is a very successful example of a unit testing framework. While it was originally written by Kent Beck for another language, Smalltalk, and originally called SUnit, it gained widespread popularity when ported to Java as JUnit. It has since been ported to many other languages and is known generically as xUnit where the x is replaced by the first letter of the name of the targeted language. Wikipedia currently lists dozens of xUnit frameworks among over a hundred total unit testing frameworks under its article on unit testing frameworks.
Kent Beck described the generic components in an article, Simple Smalltalk Testing, available in various formats online, including @Beck1997. More detail is available in Wallace library in the book xUnit Test Patterns by Gerard Meszaros, @Meszaros2007. The components distilled by Wikipedia include
- Test fixture: also called preconditions or context; the known good state before the tests run
- Test parent class: the basis for all test classes
- Test suite: the set of all test classes related to a precondition
- Test runner: an executable program that sets up preconditions, runs the tests, reports results, and returns the system to a good state
- Test assertions: functions that check logical conditions, evaluating to true when the code behaves correctly; an assertion can throw an exception that ends the test when incorrect behavior or state is detected
- Test formatter: formats test output so it can be read by humans or by other programs
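To make the assertion component concrete, here is a minimal sketch of how an xUnit-style assertion might work internally. This is not JUnit's actual implementation, and the class name MiniAssert is ours; it only illustrates the "throw an exception to end the test" behavior described above.

```java
// MiniAssert: a hypothetical, stripped-down illustration of an xUnit-style
// assertion. Real frameworks add many assertion variants and reporting.
public class MiniAssert {

    // Evaluate a logical condition; end the "test" with an exception on failure.
    public static void assertTrue(String message, boolean condition) {
        if (!condition) {
            throw new AssertionError(message);
        }
    }

    public static void main(String[] args) {
        assertTrue("2 + 2 should equal 4", 2 + 2 == 4);  // passes silently

        try {
            assertTrue("1 should be greater than 2", 1 > 2);
        } catch (AssertionError e) {
            // A test runner would catch this and report the failure.
            System.out.println("Test failed: " + e.getMessage());
        }
    }
}
```

In a real framework, the test runner catches these exceptions so one failing assertion stops its test without stopping the whole suite, and the formatter turns the caught failures into a report.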
Java annotations are syntactic metadata you can add to source code for various purposes. They are essential for Java testing libraries, and are used in many other situations. An annotation begins with an at sign. For example, the built-in annotation @Override
checks that a method is an override.
An annotation may be processed at compile time, or embedded in a class file and retrieved at runtime. An annotation may be purely informational, but it can also support reflection, meaning it can be used to allow a program to examine and modify its structure or behavior at runtime.
In addition to built-in annotations, you may create your own. The annotation concept is central to many programming frameworks, including the one we're going to study under this topic.
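As a sketch of a user-defined annotation combined with reflection, the following hypothetical @Audited marker annotation is declared, applied to one method, and discovered at runtime. The names here (AnnotationDemo, Audited, save, load) are ours, not from any framework.

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.reflect.Method;

public class AnnotationDemo {

    // A hypothetical marker annotation, retained in the class file
    // so it can be read back at runtime via reflection.
    @Retention(RetentionPolicy.RUNTIME)
    @interface Audited { }

    @Audited
    public void save() { }

    public void load() { }

    public static void main(String[] args) {
        // Use reflection to discover which methods carry the annotation.
        for (Method m : AnnotationDemo.class.getDeclaredMethods()) {
            if (m.isAnnotationPresent(Audited.class)) {
                System.out.println(m.getName() + " is audited");
            }
        }
    }
}
```

This discover-by-annotation pattern is essentially how JUnit's test runner finds the methods to execute: it scans a test class for methods marked @Test rather than relying on naming conventions.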
JUnit uses annotations as previously described. The annotations are added to the test code, not the target code. The supported annotations include @Before
, @Test
, and @After
. The following test fixture example can be found in Wikipedia.
import org.junit.*;

public class FoobarTest {

    @BeforeClass
    public static void setUpClass() throws Exception {
        // Code executed before the first test method
    }

    @Before
    public void setUp() throws Exception {
        // Code executed before each test
    }

    @Test
    public void testOneThing() {
        // Code that tests one thing
    }

    @Test
    public void testAnotherThing() {
        // Code that tests another thing
    }

    @Test
    public void testSomethingElse() {
        // Code that tests something else
    }

    @After
    public void tearDown() throws Exception {
        // Code executed after each test
    }

    @AfterClass
    public static void tearDownClass() throws Exception {
        // Code executed after the last test method
    }
}
Make a directory called 06junit
and switch to it.
Create a file called Calculator.java
and write the following code into it.
public class Calculator {
    public int evaluate(String expression) {
        int sum = 0;
        for (String summand : expression.split("\\+"))
            sum += Integer.valueOf(summand);
        return sum;
    }
}
Then create a file called CalculatorTest.java
and write the following code into it.
import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class CalculatorTest {
    @Test
    public void evaluatesExpression() {
        Calculator calculator = new Calculator();
        int sum = calculator.evaluate("1+2+3");
        assertEquals(6, sum);
    }
}
Now you must download the file 06junit.tar
from myCourses and open it in this directory. First use a browser to download it to the Downloads folder, then issue the following commands.
cd ~/422/06junit
tar xvf ~/Downloads/06junit.tar
ls
\noindent You should see two new files called junit-4.12.jar
and\linebreak hamcrest-core-1.3.jar
. Now you can compile and run the programs using the following commands.
javac Calculator.java
javac -cp .:junit-4.12.jar CalculatorTest.java
java -cp .:junit-4.12.jar:hamcrest-core-1.3.jar \
org.junit.runner.JUnitCore CalculatorTest
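Connecting this back to the earlier edge-case discussion: the test above covers only the happy path "1+2+3". The sketch below, a standalone class of our own naming with evaluate() copied in so it runs without JUnit or the jar files, shows an input the Calculator does not handle gracefully.

```java
// CalculatorEdgeDemo: hypothetical standalone demo; evaluate() is copied
// from Calculator.java so this file compiles and runs on its own.
public class CalculatorEdgeDemo {

    public static int evaluate(String expression) {
        int sum = 0;
        for (String summand : expression.split("\\+"))
            sum += Integer.valueOf(summand);
        return sum;
    }

    public static void main(String[] args) {
        System.out.println(evaluate("1+2+3"));  // happy path: prints 6

        // Edge case: "".split("\\+") yields a single empty string,
        // and Integer.valueOf("") throws NumberFormatException.
        try {
            evaluate("");
        } catch (NumberFormatException e) {
            System.out.println("empty input: " + e.getClass().getSimpleName());
        }
    }
}
```

A test-first practitioner would turn the empty-input case into a @Test method first, then decide what evaluate() should do for it (return zero, or throw a documented exception, for example).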
By now it has no doubt occurred to you to integrate testing into the build process. As an example, create the following integration of JUnit with Ant. Execute the following commands at a terminal prompt. These commands assume that you have downloaded 06junit.tar
from myCourses.
cd ~/422
mkdir 07junitWithAnt && cd 07junitWithAnt
mkdir lib src test
cd lib
tar xvf ~/Downloads/06junit.tar
cd ..
Now create a file called build.xml
and write the following code into it.
<project name="JunitTest" default="test" basedir=".">
    <property name="testdir" location="test" />
    <property name="libdir" location="lib" />
    <property name="srcdir" location="src" />
    <property name="full-compile" value="true" />

    <path id="classpath.base"/>
    <path id="classpath.test">
        <pathelement
            location="${libdir}/hamcrest-core-1.3.jar" />
        <pathelement location="${libdir}/junit-4.12.jar" />
        <pathelement location="${testdir}" />
        <pathelement location="${srcdir}" />
        <path refid="classpath.base" />
    </path>

    <target name="clean" >
        <delete verbose="${full-compile}">
            <fileset dir="${testdir}" includes="**/*.class"/>
        </delete>
    </target>

    <target name="compile" depends="clean">
        <javac srcdir="${srcdir}" destdir="${testdir}"
               verbose="${full-compile}">
            <classpath refid="classpath.test"/>
        </javac>
    </target>

    <target name="test" depends="compile">
        <junit>
            <classpath refid="classpath.test" />
            <formatter type="brief" usefile="false" />
            <test name="TestMessageUtil" fork="true"/>
        </junit>
    </target>
</project>
Next, change to the src
directory and write the following code into a file called MessageUtil.java
.
/*
 * This class prints the given message on the console.
 */
public class MessageUtil {

    private String message;

    // Constructor
    // @param message the message to be printed
    public MessageUtil(String message) {
        this.message = message;
    }

    // print the message
    public String printMessage() {
        System.out.println(message);
        return message;
    }

    // prepend "Hello" to the message
    public String salutationMessage() {
        message = "Hello" + message;
        System.out.println(message);
        return message;
    }
}
Finally, create a file called TestMessageUtil.java
in the same directory and write the following code into it.
import org.junit.Test;
import org.junit.Ignore;
import static org.junit.Assert.assertEquals;

public class TestMessageUtil {

    String message = " World";
    MessageUtil messageUtil = new MessageUtil(message);

    @Test
    public void testPrintMessage() {
        System.out.println("Inside testPrintMessage()");
        assertEquals(message, messageUtil.printMessage());
    }

    @Test
    public void testSalutationMessage() {
        System.out.println("Inside testSalutationMessage()");
        /* use the following to make the test succeed */
        message = "Hello" + " World";
        /* use the following to make the test fail */
        /* message = "Hello" + " Weird"; */
        assertEquals(message, messageUtil.salutationMessage());
    }
}
Now you can change to the directory containing the build.xml
file and say ant
. The output should conclude with something like the following.
test:
[junit] Testsuite: TestMessageUtil
[junit] Tests run: 2, Failures: 0, Errors: 0,
Skipped: 0, Time elapsed: 0.079 sec
[junit]
[junit] ------------- Standard Output -----------
[junit] Inside testSalutationMessage()
[junit] Hello World
[junit] Inside testPrintMessage()
[junit] World
[junit] ------------- ---------------- ----------
BUILD SUCCESSFUL