Testing is a critical aspect of development and can largely determine the fate of an application. Good tests catch issues that could crash your application before they reach production, while poor tests routinely let failures and downtime slip through.
There are three main types of software testing: unit testing, integration testing, and functional testing. In this blog post we will focus on developer-level unit testing, but before getting into the specifics, let's review what each of these three types involves.
Unit testing is used to test individual code components and ensure that the code works as expected. Unit tests are written and executed by developers. Most of the time, a testing framework like JUnit or TestNG is used. Test cases are usually written at the method level and executed through automation.
Integration testing checks whether the system is working as a whole. Integration testing is also done by developers, but instead of testing a single component, it is designed to test across components. The system consists of many individual components such as code, database, web server, etc. Integration testing can uncover problems such as component wiring, network access, database issues, etc.
Functional testing checks whether each feature is implemented correctly by comparing the results for a given input to the specification. Typically, this isn't at the developer level. Functional testing is performed by a separate testing team. Test cases are written based on specifications and actual results are compared with expected results. There are several tools available for automated functional testing, such as Selenium and QTP.
As mentioned earlier, unit tests help developers determine whether the code is working properly. In this blog post, I will provide useful tips for unit testing in Java.
Java provides several frameworks for unit testing. TestNG and JUnit are the most popular testing frameworks. Some important features of JUnit and TestNG:
Easy to set up and run.
Support annotations.
Allow certain tests to be ignored, or grouped and executed together (a short sketch follows this list).
Supports parameterized testing, which means running unit tests by specifying different values at runtime.
Supports automated test execution by integrating with build tools such as Ant, Maven and Gradle.
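To illustrate the ignore/group features mentioned above, here is a minimal sketch; the class and method names are hypothetical:

// JUnit 4 example (e.g. FeatureDemoTest.java)
import org.junit.Ignore;
import org.junit.Test;

public class FeatureDemoTest {

    // This test is skipped and reported as ignored
    @Ignore("Feature not implemented yet")
    @Test
    public void testPendingFeature() {
    }
}

// TestNG example (e.g. GroupDemoTest.java)
import org.testng.annotations.Test;

public class GroupDemoTest {

    // Runs only when the "smoke" group is included in the run
    @Test(groups = {"smoke"})
    public void testQuickSanityCheck() {
    }

    // Excluded from execution without deleting the code
    @Test(enabled = false)
    public void testDisabledCase() {
    }
}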
EasyMock is a mocking framework that complements unit testing frameworks such as JUnit and TestNG. EasyMock is not a complete testing framework by itself; it simply adds the ability to create mock objects for easier testing. For example, a method we want to test may call a DAO class that fetches data from a database. In that case, EasyMock can be used to create a mock DAO that returns hardcoded data, which lets us test the intended method easily without having to worry about database access.
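To illustrate, here is a minimal sketch using EasyMock; the UserDao interface, the UserService class, and their methods are hypothetical:

import static org.easymock.EasyMock.createMock;
import static org.easymock.EasyMock.expect;
import static org.easymock.EasyMock.replay;
import static org.easymock.EasyMock.verify;
import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class UserServiceTest {

    // Hypothetical collaborator that would normally hit the database
    public interface UserDao {
        String findUserName(int id);
    }

    // Hypothetical class under test
    public static class UserService {
        private final UserDao dao;
        public UserService(UserDao dao) { this.dao = dao; }
        public String greet(int id) { return "Hello " + dao.findUserName(id); }
    }

    @Test
    public void testGreetWithMockDao() {
        // Create a mock DAO that returns hardcoded data instead of querying a database
        UserDao mockDao = createMock(UserDao.class);
        expect(mockDao.findUserName(42)).andReturn("Alice");
        replay(mockDao);

        UserService service = new UserService(mockDao);
        assertEquals("Hello Alice", service.greet(42));

        // Confirm that the expected DAO call actually happened
        verify(mockDao);
    }
}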
Test-driven development (TDD) is a software development process in which we write tests based on the requirements before writing any code. Since there is no implementation yet, the tests initially fail. We then write the minimum amount of code needed to make the tests pass, and finally refactor the code until it is optimized.
The goal is to write tests that cover all requirements, rather than writing code from the beginning that may not even meet the requirements. TDD is great because it results in simple modular code that is easy to maintain. The overall development speed is accelerated and defects are easily found. Furthermore, unit tests are created as a by-product of the TDD approach.
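For illustration, here is a minimal sketch of one red-green-refactor cycle; the Calculator class and its add method are hypothetical:

import static org.junit.Assert.assertEquals;
import org.junit.Test;

// Step 1 (red): the test is written first, from the requirement, and fails until add() exists
public class CalculatorTest {
    @Test
    public void testAdd() {
        assertEquals(5, new Calculator().add(2, 3));
    }
}

// Step 2 (green): write just enough code to make the test pass
class Calculator {
    int add(int a, int b) {
        return a + b;
    }
}

// Step 3 (refactor): clean up the implementation while keeping the test green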
However, TDD may not be suitable for all situations. In projects with complex designs, focusing on the simplest design to facilitate passing test cases without thinking ahead can lead to huge code changes. Furthermore, TDD methods are difficult to use for systems that interact with legacy systems, GUI applications, or applications that work with databases. Additionally, tests need to be updated as the code changes.
Therefore, before deciding to adopt the TDD approach, the above factors should be considered and measures should be taken according to the nature of the project.
Code coverage measures, as a percentage, how much of the code is executed when the unit tests run. Generally, code with high coverage has a lower chance of containing undetected bugs, because more of its source code has been exercised during testing. Some best practices for measuring code coverage include:
Use a code coverage tool such as Clover, Cobertura, JaCoCo, or Sonar. Such tools improve the quality of your testing because they point out areas of the code that are not being tested, allowing you to develop additional tests to cover those areas.
Whenever new functionality is written, immediately write tests that cover it.
Make sure there are test cases covering all branches of the code, i.e. if/else statements.
High code coverage does not guarantee perfect testing, so be careful!
The concat method below accepts a boolean value as input and appends the two strings only if the boolean value is true:

public String concat(boolean append, String a, String b) {
    String result = null;
    if (append) {
        result = a + b;
    }
    return result.toLowerCase();
}
The following is the test case for the above method:

@Test
public void testStringUtil() {
    String result = stringUtil.concat(true, "Hello ", "World");
    System.out.println("Result is " + result);
}
In this case, the test calls the method with the value true, so it passes. When the code coverage tool is run, it shows 100% code coverage because all of the code in the concat method was executed. However, if the test were run with the value false, a NullPointerException would be thrown. So 100% code coverage does not really mean that the tests cover all scenarios, nor does it mean that the tests are good.
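A branch-covering test for the false case would expose the problem immediately. A minimal sketch using JUnit 4's expected-exception support is below; as in the test above, the stringUtil instance is assumed to be created elsewhere in the test class:

// Covers the false branch; with the current implementation this documents that
// concat throws a NullPointerException instead of returning a value
@Test(expected = NullPointerException.class)
public void testStringUtil_NoAppend() {
    stringUtil.concat(false, "Hello ", "World");
}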
Before JUnit 4, the data used by a test case had to be hard-coded into the test case itself. This created a limitation: to run the tests with different data, the test code had to be modified. However, JUnit 4, as well as TestNG, supports externalizing test data so that test cases can be run against different datasets without changing the source code.
The following MathChecker class has a method that checks whether a number is odd:

public class MathChecker {
    public Boolean isOdd(int n) {
        if (n % 2 != 0) {
            return true;
        } else {
            return false;
        }
    }
}
The following is the TestNG test case for the MathChecker class:

public class MathCheckerTest {

    private MathChecker checker;

    @BeforeMethod
    public void beforeMethod() {
        checker = new MathChecker();
    }

    @Test
    @Parameters("num")
    public void isOdd(int num) {
        System.out.println("Running test for " + num);
        Boolean result = checker.isOdd(num);
        Assert.assertEquals(result, Boolean.TRUE);
    }
}
The following is testng.xml (the configuration file for TestNG), which supplies the data against which the tests are to be executed:

<?xml version="1.0" encoding="UTF-8"?>
<suite name="ParameterExampleSuite" parallel="false">
  <test name="MathCheckerTest">
    <parameter name="num" value="3"/>
    <classes>
      <class name="com.stormpath.demo.MathCheckerTest"/>
    </classes>
  </test>
  <test name="MathCheckerTest1">
    <parameter name="num" value="7"/>
    <classes>
      <class name="com.stormpath.demo.MathCheckerTest"/>
    </classes>
  </test>
</suite>
As can be seen, in this case the tests will be executed twice, once for the value 3 and once for the value 7. In addition to specifying test data through the XML configuration file, test data can also be provided in test classes through the @DataProvider annotation.
Similar to TestNG, test data can also be externalized for JUnit. The following is a JUnit test case for the same MathChecker class as above:
@RunWith(Parameterized.class)
public class MathCheckerTest {

    private int inputNumber;
    private Boolean expected;
    private MathChecker mathChecker;

    @Before
    public void setup() {
        mathChecker = new MathChecker();
    }

    // Inject via constructor
    public MathCheckerTest(int inputNumber, Boolean expected) {
        this.inputNumber = inputNumber;
        this.expected = expected;
    }

    @Parameterized.Parameters
    public static Collection<Object[]> getTestData() {
        return Arrays.asList(new Object[][]{
                {1, true},
                {2, false},
                {3, true},
                {4, false},
                {5, true}
        });
    }

    @Test
    public void testisOdd() {
        System.out.println("Running test for: " + inputNumber);
        assertEquals(mathChecker.isOdd(inputNumber), expected);
    }
}
As can be seen, the test data on which the test is to be performed is specified by the getTestData() method. This method can be easily modified to read the data from an external file instead of hardcoding the data.
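For instance, getTestData() could be rewritten along these lines to read the number/expected pairs from a CSV file; the file name testdata.csv and its one-pair-per-line layout (for example "1,true") are assumptions for this sketch, and the usual java.io and java.util imports are required:

@Parameterized.Parameters
public static Collection<Object[]> getTestData() throws IOException {
    List<Object[]> data = new ArrayList<>();
    try (BufferedReader reader = new BufferedReader(new FileReader("testdata.csv"))) {
        String line;
        while ((line = reader.readLine()) != null) {
            // Each line holds an input number and the expected result, e.g. "1,true"
            String[] parts = line.split(",");
            data.add(new Object[]{Integer.parseInt(parts[0].trim()),
                                  Boolean.parseBoolean(parts[1].trim())});
        }
    }
    return data;
}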
Many novice developers are accustomed to writing System.out.println statements after each line of code to verify that the code executes correctly. This practice often extends to unit tests, resulting in cluttered test code. Beyond the clutter, it requires the developer to manually inspect the console output to check whether the test ran successfully. A better approach is to use assertions, which report the test result automatically.
The following StringUtil class is a simple class with one method that concatenates two input strings and returns the result:

public class StringUtil {
    public String concat(String a, String b) {
        return a + b;
    }
}
The following are two unit tests for the above method:

@Test
public void testStringUtil_Bad() {
    String result = stringUtil.concat("Hello ", "World");
    System.out.println("Result is " + result);
}

@Test
public void testStringUtil_Good() {
    String result = stringUtil.concat("Hello ", "World");
    assertEquals("Hello World", result);
}
testStringUtil_Bad will always pass because it contains no assertions; the developer has to manually verify the output printed on the console. testStringUtil_Good will fail whenever the method returns an incorrect result, and requires no developer intervention.
Some methods do not have deterministic results, that is, the output of the method is not known in advance and can change every time. For example, consider the following code, which has a complex function and a method that calculates the time in milliseconds it takes to execute the complex function:
public class DemoLogic {

    private void veryComplexFunction() {
        // This is a complex function that has a lot of database access and is time consuming.
        // To demo this method, I am going to add a Thread.sleep for a random number of milliseconds.
        try {
            int time = (int) (Math.random() * 100);
            Thread.sleep(time);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }

    public long calculateTime() {
        long time = 0;
        long before = System.currentTimeMillis();
        veryComplexFunction();
        long after = System.currentTimeMillis();
        time = after - before;
        return time;
    }
}
In this case, each time the calculateTime method is executed it will return a different value. Writing a test case for this method will not be of much use, since its output is variable and the test method will not be able to verify the output of any particular execution.
Typically, developers spend a lot of time and effort writing test cases to ensure that the application works as expected. However, it is also important to write negative test cases: test cases that check whether the system can handle invalid data. For example, consider a simple function that reads an alphanumeric value of length 8 entered by the user. In addition to valid alphanumeric values, the following negative cases should be tested:
User-specified non-alphanumeric values such as special characters.
User-specified null value.
User-specified value greater than or less than 8 characters.
Similarly, boundary test cases test whether the system is suitable for extreme values. For example, if the user wishes to enter a numeric value from 1 to 100, then 1 and 100 are the boundary values and it is very important to test the system for these values.
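A minimal sketch of such negative and boundary tests is shown below; the InputValidator class and its isValid and isInRange methods are hypothetical, and the sketch assumes the validator rejects bad input by returning false rather than throwing:

import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;

import org.junit.Test;

public class InputValidatorTest {

    private final InputValidator validator = new InputValidator();

    // Negative case: non-alphanumeric input such as special characters is rejected
    @Test
    public void testSpecialCharactersRejected() {
        assertFalse(validator.isValid("abc$12!%"));
    }

    // Negative case: null input is handled gracefully
    @Test
    public void testNullRejected() {
        assertFalse(validator.isValid(null));
    }

    // Negative case: values shorter or longer than 8 characters are rejected
    @Test
    public void testWrongLengthRejected() {
        assertFalse(validator.isValid("abc12"));      // too short
        assertFalse(validator.isValid("abc123456"));  // too long
    }

    // Boundary case: for an allowed range of 1 to 100, test the extreme values explicitly
    @Test
    public void testRangeBoundaries() {
        assertTrue(validator.isInRange(1));
        assertTrue(validator.isInRange(100));
        assertFalse(validator.isInRange(0));
        assertFalse(validator.isInRange(101));
    }
}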