Using the Testing Framework

The Wolfram Language provides a framework in which to write and run tests of your code.
The main functions to define tests are VerificationTest and TestCreate.
VerificationTest was the first testing function created for the Wolfram Language. The behavior is straightforward: the test is evaluated immediately and a TestObject is returned.
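For example, a minimal test looks like this (a sketch; the property access on the last line is optional):

```wolfram
(* The test runs immediately; the returned TestObject records the result *)
test = VerificationTest[1 + 1, 2];
test["Outcome"]  (* "Success" *)
```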
TestCreate only creates the test, without running it immediately:
To run the test, you can use TestEvaluate:
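A minimal sketch of the two-step workflow:

```wolfram
(* TestCreate holds the test unevaluated until TestEvaluate is called *)
test = TestCreate[1 + 1, 2];
test["Outcome"]                (* "NotEvaluated" *)
TestEvaluate[test]["Outcome"]  (* "Success" *)
```

TestEvaluate returns a new TestObject containing the result, leaving the original test reusable.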
Because TestCreate is not evaluating the actual test, creating tests is a fast operation:
This evaluates the tests. Note that the pauses are evaluated sequentially:
You might want to use ParallelMap in order to speed up a long test run:
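The timing behavior described above can be sketched as follows (the exact timings depend on your machine and on how many parallel kernels are available):

```wolfram
(* Creating four one-second tests is nearly instantaneous, since Pause is held *)
tests = Table[TestCreate[Pause[1], Null], {4}];

(* Sequential evaluation takes about 4 seconds *)
First @ AbsoluteTiming[Map[TestEvaluate, tests]]

(* With 4 parallel kernels, this takes about 1 second *)
First @ AbsoluteTiming[ParallelMap[TestEvaluate, tests]]
```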
In this section, you have seen the main differences between TestCreate and VerificationTest. Whenever possible, define your tests using TestCreate: separating creation from evaluation unlocks capabilities such as the following.
Writing Immutable Tests
When creating a test using TestCreate, no symbols contained in the input expressions are evaluated. Consider the following test:
The TestObject now contains a reference to the symbol a, and because the symbol is defined, the test succeeds when run with TestEvaluate:
However, the test will fail if you deliberately or accidentally clear the definition of a:
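A minimal sketch of this pitfall:

```wolfram
a = 1;
test = TestCreate[a + 1, 2];
TestEvaluate[test]["Outcome"]  (* "Success" *)

Clear[a];
TestEvaluate[test]["Outcome"]  (* "Failure": a is no longer defined *)
```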
In order to avoid global dependencies, you can inject values into the test using, for example, With:
You can now verify that the "Input" property contains no references to the original symbol, which will allow the test to run in complete isolation:
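For example (a sketch), With substitutes the value of a into the held test expression at creation time:

```wolfram
a = 1;
test = With[{value = a}, TestCreate[value + 1, 2]];
test["Input"]  (* HoldForm[1 + 1]: no reference to a remains *)

Clear[a];
TestEvaluate[test]["Outcome"]  (* "Success", even with a undefined *)
```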
Writing Tests for Multistep Operations
When testing a library, a single test is often not sufficient to test more complex use cases.
For example, write a test that makes sure that a file created by Export is readable by Import and that the expression is identical to the one you started from:
One attempt to convert such an operation into a series of tests looks like this:
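A sketch of that single combined test (the file path is chosen for illustration):

```wolfram
(* Export and Import are combined in one test body *)
test = TestEvaluate @ TestCreate[
   Module[{file = FileNameJoin[{$TemporaryDirectory, "roundtrip.wl"}]},
     Export[file, {1, 2, 3}];
     Import[file]
   ],
   {1, 2, 3}
];
test["Outcome"]  (* "Success" *)
```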
One problem with that test is that it combines two operations that could fail in different ways: the Export and the Import of the test data. You could be tempted to break it apart into multiple tests:
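A sketch of the broken-apart version (Export returns the path of the file it wrote, which is captured in a global symbol named output):

```wolfram
test1 = TestCreate[
   output = Export[FileNameJoin[{$TemporaryDirectory, "roundtrip.wl"}], {1, 2, 3}],
   FileNameJoin[{$TemporaryDirectory, "roundtrip.wl"}]];
test2 = TestCreate[Import[output], {1, 2, 3}];
TestEvaluate /@ {test1, test2}

test2["Input"]  (* HoldForm[Import[output]]: references the global symbol output *)
```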
The drawback with this technique is that you are breaking test atomicity, because the second test is now entangled with the first one. The "Input" property of the test contains a reference to a global variable named output:
This means that the test will start failing if you accidentally change the value of output when running only the last test in isolation, or if you use the same symbol in another test.
A better alternative is to split the test using IntermediateTest. Expand the resulting TestObject to see the intermediate test results:
The test is now self-contained and can always run in isolation. The results for any intermediate tests can be inspected separately:
If an IntermediateTest fails, the outer test also fails, so you still get a single overall result for the test. You can inspect IntermediateTest results within the outer test to see which steps of the outer test worked and which failed.
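A sketch of the self-contained version, assuming that IntermediateTest returns the value of its input so that the computation can continue:

```wolfram
test = TestEvaluate @ TestCreate[
   Module[{file, saved},
     file = FileNameJoin[{$TemporaryDirectory, "roundtrip.wl"}];
     (* Each step is checked separately, but everything stays in one test *)
     saved = IntermediateTest[Export[file, {1, 2, 3}], file];
     IntermediateTest[Import[saved], {1, 2, 3}]
   ],
   {1, 2, 3}
]
```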
Using TestReport
TestReport is a single function that serves multiple purposes.

Use TestReport to Run Tests

TestReport can be used to run multiple tests:
TestReport only runs tests whose outcome is "NotEvaluated"; tests in the list that have already been evaluated are included in the report without being rerun:
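For example (a sketch; "TestsSucceededCount" is one of several summary properties of the report):

```wolfram
(* The second test is already evaluated and will not be rerun *)
tests = {TestCreate[1 + 1, 2], TestEvaluate[TestCreate[2 + 2, 4]]};
report = TestReport[tests];
report["TestsSucceededCount"]  (* 2 *)
```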

Use TestReport to Merge Test Results

TestReport is able to merge more than one TestReportObject. Create two objects:
You can now merge the reports:
TestReport will automatically delete duplicates when merging results:
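A sketch of merging, assuming TestReport accepts a list of TestReportObject expressions as described above:

```wolfram
(* The same test appears in both reports *)
shared = TestCreate[1 + 1, 2];
report1 = TestReport[{shared, TestCreate[2 + 2, 4]}];
report2 = TestReport[{shared, TestCreate[3 + 3, 6]}];

(* The merged report contains the shared test only once *)
merged = TestReport[{report1, report2}]
```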

Use TestReport to Collect Tests without Evaluation

Create a test file by exporting some test expressions to a file:
When running TestReport over a file, all non-evaluated tests are executed by default:
You can instead tell TestReport to only collect the tests, without evaluating them, by using Identity as the TestEvaluationFunction. That operation is much faster, because the tests are not actually run:
Collecting tests can be useful to recreate test expressions and manipulate them symbolically. The "TestCreate" property will reconstruct the original test expression, wrapped in HoldForm:
You can now recreate test objects by releasing the HoldForm:
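A sketch of the full round trip (the file path and the "TestResults" property used to enumerate the collected tests are assumptions; the "TestCreate" property is described above):

```wolfram
(* Write a small test file, then collect its tests without running them *)
file = FileNameJoin[{$TemporaryDirectory, "sample.wlt"}];
Export[file, "TestCreate[1 + 1, 2]\nTestCreate[2 + 2, 4]", "Text"];

report = TestReport[file, TestEvaluationFunction -> Identity];
held = #["TestCreate"] & /@ Values[report["TestResults"]];

(* held is a list of HoldForm[TestCreate[...]]; release to get fresh TestObjects *)
ReleaseHold /@ held
```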
Debug and Log TestReport Run
Logging what TestReport is doing is essential for debugging your code. Create a long-running test suite that might fail at some point during its run:
Running the test suite will give you a progress indicator, which is enough in most situations, but it does not display all the information you might need in real time:
You can use HandlerFunctions in order to log all tests that are failing in real time:
A complete list of events is available in the documentation for TestReport, but you can quickly see what your TestReport is doing by using "UnhandledEvent":
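A sketch of real-time logging; the "TestEvaluated" event name and the "TestObject" key of the handler's argument are assumptions, so check the TestReport reference page for the actual event list:

```wolfram
(* With values injected, one test (i = 13) fails partway through the run *)
tests = Table[With[{j = i}, TestCreate[j != 13, True]], {i, 1, 20}];

TestReport[tests,
  HandlerFunctions -> <|
    "TestEvaluated" -> Function[event,
      If[event["TestObject"]["Outcome"] =!= "Success",
        Print["Failing: ", event["TestObject"]["Input"]]]]
  |>
]
```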
Break TestReport Control Flow
It is possible to use HandlerFunctions to write a custom TestReport that fails as soon as a test fails:
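One way to sketch this is to throw out of the run at the first failure (again, the event and key names are assumptions):

```wolfram
tests = Table[With[{j = i}, TestCreate[j != 13, True]], {i, 1, 20}];

(* Catch receives the first failing TestObject; later tests never run *)
Catch @ TestReport[tests,
  HandlerFunctions -> <|
    "TestEvaluated" -> Function[event,
      If[event["TestObject"]["Outcome"] =!= "Success",
        Throw[event["TestObject"]]]]
  |>
]
```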
Writing Test Files
Writing test files for TestReport is normally a straightforward operation. Simply load any needed contexts at the beginning of your test file:
Needs["MyContext`"]

TestCreate[MyContext`AddOne[1], 2, TestID -> "MyContext-AddOne-Test"]
TestCreate[MyContext`AddTwo[1], 3, TestID -> "MyContext-AddTwo-Test"]
All code that is not contained in a TestCreate expression is executed every time the test file is loaded, even if you are not running the tests. The test file can contain a list of tests or an expression that programmatically creates tests:
Needs["MyContext`"]

Table[
    TestCreate[
        MyContext`AddOne[i],
        i+1,
        TestID -> "MyContext-AddOne-" <> IntegerString[i]
    ],
    {i, 1, 20}
]
It is always a good practice to assign a meaningful TestID to each test.