class Corrade::TestSuite::Tester
Base class for tests and benchmarks.
Supports colored output, instanced (or data-driven) tests, repeated tests (e.g. for testing race conditions) and benchmarks, which can either use one of the builtin measurement functions (such as wall time, CPU time or CPU cycle count) or any user-provided custom measurement function (for example measuring allocations, memory usage, GPU timings etc.). In addition, the behavior of the test execution can be configured via many command-line and environment options.
Make sure to first go through the Testing and benchmarking tutorial for an initial overview and a step-by-step introduction. Below is a more detailed description of all the provided functionality.
Basic testing workflow
A test starts with deriving the Tester class. The test cases are parameter-less void member functions that are added using addTests() in the constructor, and the main() function is created using the CORRADE_TEST_MAIN() macro. The whole test is commonly contained in a single *.cpp file with no header, and the derived type is a struct to avoid having to write public keywords.
struct MyTest: TestSuite::Tester {
    explicit MyTest();

    void addTwo();
    void subtractThree();
};

MyTest::MyTest() {
    addTests({&MyTest::addTwo,
              &MyTest::subtractThree});
}

void MyTest::addTwo() {
    int a = 3;
    CORRADE_COMPARE(a + 2, 5);
}

void MyTest::subtractThree() {
    int b = 5;
    CORRADE_COMPARE(b - 3, 2);
}

CORRADE_TEST_MAIN(MyTest)
The above gives the following output:
Starting MyTest with 2 test cases...
    OK [1] addTwo()
    OK [2] subtractThree()
Finished MyTest with 0 errors out of 2 checks.
Actual testing is done via various CORRADE_VERIFY(), CORRADE_COMPARE() etc. macros. If a comparison in a given test case fails, a FAIL with the concrete file, line and additional diagnostic is printed to the output and the test case is exited without executing the remaining statements. Otherwise, if all comparisons in the given test case pass, an OK is printed. The main difference between these macros is the kind of diagnostic output they print when a comparison fails — for example a simple expression failure reported by CORRADE_VERIFY() is enough when checking for a non-nullptr value, but for comparing a 1000-element array you might want to use CORRADE_COMPARE_AS() with a container comparator instead.
Additionally there are the CORRADE_SKIP(), CORRADE_EXPECT_FAIL() and CORRADE_EXPECT_FAIL_IF() control macros, useful for skipping a test case when, for example, a certain feature is not available (printing a SKIP on output and exiting the test case right after the statement) or for documenting that some algorithm produces an incorrect result due to a bug, printing an XFAIL. Passing a check while a failure is expected is treated as an error (XPASS), which can be helpful to ensure the assumptions in the tests don't get stale. Expected failures can also be disabled globally via the --no-xfail command-line option or via an environment variable, see below.
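A minimal sketch of these control macros in action; hasFeature() and computeThing() are hypothetical stand-ins:

void MyTest::controlMacros() {
    if(!hasFeature())   /* hypothetical runtime feature check */
        CORRADE_SKIP("The feature is not available.");

    {
        /* Applies to every check until the end of the enclosing scope */
        CORRADE_EXPECT_FAIL("computeThing() is known to be off by one.");
        CORRADE_COMPARE(computeThing(), 42);
    }

    /* Checks after the scope are verified as usual again */
    CORRADE_VERIFY(hasFeature());
}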
The only reason why those are macros and not member functions is the ability to gather class/function/file/line/expression information via the preprocessor for printing the test output and the exact location of a possible test failure. If none of these macros is encountered when running a test case, the test case is reported as invalid, with ? on output.
The test cases are numbered on the output and those numbers can be used on the command line to whitelist/blacklist the test cases with --only / --skip, randomly reorder them using --shuffle and more, see below for details. When all test cases pass, the executable exits with a 0 return code; in case of a failure or an invalid test case it exits with 1, making it possible to run the tests in a batch (such as via CMake CTest). By default the testing continues with the other test cases after a failure; you can abort after the first failure using the --abort-on-fail command-line option.
Useful but not immediately obvious is the possibility to use templated member functions as test cases, for example when testing a certain algorithm on different data types:
struct PiTest: TestSuite::Tester {
    explicit PiTest();

    template<class T> void calculate();
};

PiTest::PiTest() {
    addTests<PiTest>({
        &PiTest::calculate<float>,
        &PiTest::calculate<double>});
}

template<class T> void PiTest::calculate() {
    setTestCaseName(std::is_same<T, float>::value ?
        "calculate<float>" : "calculate<double>");
    CORRADE_COMPARE(calculatePi<T>(), T(3.141592653589793));
}

CORRADE_TEST_MAIN(PiTest)
And the corresponding output:
Starting PiTest with 2 test cases...
    OK [1] calculate<float>()
    OK [2] calculate<double>()
Finished PiTest with 0 errors out of 2 checks.
This works with all add*() functions, though note that current C++11 compilers (at least GCC and Clang) are not able to properly detect the class type when passing only templated functions, so you may need to specify the type explicitly. Also, there is no easy and portable way to get a function name including its template parameters, so by default just the plain function name is shown; call setTestCaseName() to specify the full name.
Instanced tests
Often you have an algorithm which you need to test on a variety of inputs or corner cases. One solution is to use a for loop inside the test case to iterate over all inputs, but then the diagnostic on error will not report which input is to blame. Another solution is to duplicate the test case for each input, but that becomes a maintenance nightmare pretty quickly. Making the function take a non-type template parameter is also a possibility, but that doesn't work for all types, and with huge input sizes it is often not worth the increased compilation times. Fortunately, addInstancedTests() comes to the rescue:
struct RoundTest: TestSuite::Tester {
    explicit RoundTest();

    void test();
};

namespace {

enum: std::size_t { RoundDataCount = 5 };

constexpr const struct {
    const char* name;
    float input;
    float expected;
} RoundData[RoundDataCount]{
    {"positive down", 3.3f, 3.0f},
    {"positive up", 3.5f, 4.0f},
    {"zero", 0.0f, 0.0f},
    {"negative down", -3.5f, -4.0f},
    {"negative up", -3.3f, -3.0f}
};

}

RoundTest::RoundTest() {
    addInstancedTests({&RoundTest::test}, RoundDataCount);
}

void RoundTest::test() {
    setTestCaseDescription(RoundData[testCaseInstanceId()].name);
    CORRADE_COMPARE(round(RoundData[testCaseInstanceId()].input),
                    RoundData[testCaseInstanceId()].expected);
}

CORRADE_TEST_MAIN(RoundTest)
Corresponding output:
Starting RoundTest with 5 test cases...
    OK [1] test(positive down)
    OK [2] test(positive up)
    OK [3] test(zero)
    OK [4] test(negative down)
    OK [5] test(negative up)
Finished RoundTest with 0 errors out of 5 checks.
The tester class just gives you an instance index via testCaseInstanceId(); whether you use it as an offset into some data array or generate an input from it is up to you, and the above example is just a hint of how one might use it. Each instance is printed to the output separately, and a failing instance doesn't stop the remaining instances from being executed. Similarly to the templated tests, setTestCaseDescription() allows you to set a human-readable description of a given instance. If not called, the instances are just numbered in the output.
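For the generated-input variant, a sketch with a hypothetical powerOfTwo() function under test might look like this:

void MyTest::powers() {
    /* Derive the input directly from the instance index instead of
       indexing a data table */
    const std::size_t exponent = testCaseInstanceId();
    setTestCaseDescription(std::to_string(exponent));
    CORRADE_COMPARE(powerOfTwo(exponent), 1ull << exponent);
}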
Repeated tests
A complementary feature to instanced tests are repeated tests, added using addRepeatedTests() and useful for example for calling one function 10000 times to increase the probability of hitting a potential race condition. The difference from instanced tests is that all repeats are treated as executing the same code, so only the overall result is reported in the output. Also unlike instanced tests, if a particular repeat fails, no further repeats are executed. The test output contains the number of executed repeats after the test case name, prefixed by @. Example of testing race conditions with multiple threads accessing the same variable:
struct RaceTest: TestSuite::Tester {
    explicit RaceTest();

    template<class T> void threadedIncrement();
};

RaceTest::RaceTest() {
    addRepeatedTests<RaceTest>({
        &RaceTest::threadedIncrement<int>,
        &RaceTest::threadedIncrement<std::atomic_int>}, 10000);
}

template<class T> void RaceTest::threadedIncrement() {
    setTestCaseName(std::is_same<T, int>::value ?
        "threadedIncrement<int>" : "threadedIncrement<std::atomic_int>");

    T x{0};
    int y = 1;
    auto fun = [&x, &y] {
        for(std::size_t i = 0; i != 500; ++i)
            x += y;
    };

    std::thread a{fun}, b{fun}, c{fun};
    a.join();
    b.join();
    c.join();

    CORRADE_COMPARE(x, 1500);
}

CORRADE_TEST_MAIN(RaceTest)
Depending on various factors, here is one possible output:
Starting RaceTest with 2 test cases...
  FAIL [1] threadedIncrement<int>()@167 at …/RaceTest.cpp on line 60
        Values x and 1500 are not the same, actual is
        1000 but expected 1500
    OK [2] threadedIncrement<std::atomic_int>()@10000
Finished RaceTest with 1 errors out of 10167 checks.
Similarly to testCaseInstanceId() there is testCaseRepeatId(), which gives the repeat index. Use it with care, however, as repeated tests are assumed to execute the same code every time. On the command line it is possible to increase the repeat count via --repeat-every. In addition there is --repeat-all, which behaves as if all the add*() functions in the constructor were called multiple times in a loop. Combined with --shuffle, this can be used to run the test cases multiple times in a random order to uncover potential unwanted interactions and order-dependent bugs.
It's also possible to combine instanced and repeated tests using addRepeatedInstancedTests().
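For example, a sketch reusing the RoundTest case and RoundData table from above, with every data-driven instance additionally repeated 100 times:

RoundTest::RoundTest() {
    addRepeatedInstancedTests({&RoundTest::test}, 100, RoundDataCount);
}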
Benchmarks
Besides verifying code correctness, it's possible to measure code performance. Unlike correctness tests, benchmark results are hard to reason about using only automated means, so there are no macros for verifying benchmark results; instead, the measured values are just printed to the output for users to see. Benchmarks are added using addBenchmarks() and the actual benchmark loop is marked by the CORRADE_BENCHMARK() macro. Example benchmark comparing the performance of inverse square root implementations:
struct InvSqrtBenchmark: TestSuite::Tester {
    explicit InvSqrtBenchmark();

    void naive();
    void fast();
};

InvSqrtBenchmark::InvSqrtBenchmark() {
    for(auto fn: {&InvSqrtBenchmark::naive, &InvSqrtBenchmark::fast}) {
        addBenchmarks({fn}, 500, BenchmarkType::WallTime);
        addBenchmarks({fn}, 500, BenchmarkType::CpuTime);
    }
}

void InvSqrtBenchmark::naive() {
    volatile float a; /* to avoid optimizers removing the benchmark code */
    CORRADE_BENCHMARK(1000000)
        a = 1.0f/std::sqrt(float(testCaseRepeatId()));
    CORRADE_VERIFY(a);
}

void InvSqrtBenchmark::fast() {
    volatile float a; /* to avoid optimizers removing the benchmark code */
    CORRADE_BENCHMARK(1000000)
        a = fastinvsqrt(float(testCaseRepeatId()));
    CORRADE_VERIFY(a);
}

CORRADE_TEST_MAIN(InvSqrtBenchmark)
Note that it's not an error to add one test/benchmark multiple times — here it is used to have the same code benchmarked with different timers. Possible output:
Starting InvSqrtBenchmark with 4 test cases...
 BENCH [1]   8.24 ± 0.19   ns naive()@499x1000000 (wall time)
 BENCH [2]   8.27 ± 0.19   ns naive()@499x1000000 (CPU time)
 BENCH [3]   0.31 ± 0.01   ns fast()@499x1000000 (wall time)
 BENCH [4]   0.31 ± 0.01   ns fast()@499x1000000 (CPU time)
Finished InvSqrtBenchmark with 0 errors out of 0 checks.
The number passed to addBenchmarks() is equivalent to the repeat count passed to addRepeatedTests() and specifies the measurement sample count. The number passed to CORRADE_BENCHMARK() is the batch size, i.e. how many iterations of the enclosed code are done in each sample in order to amortize the measurement overhead. The default benchmark type can be overridden on the command line via --benchmark. It's possible to use all CORRADE_VERIFY(), CORRADE_COMPARE() etc. macros inside the benchmark as well, for example to check pre/post-conditions.
The benchmark output is calculated from all samples except the initially discarded ones; by default that's one sample. The --benchmark-discard and --repeat-every command-line options can be used to override how many samples are taken and how many of them are discarded at first. In the output, the used sample count and sample size are printed after the test case name, prefixed with @. The output contains the mean value and a sample standard deviation, calculated as:
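With n being the number of retained samples and x_i the individual measured values, these are the standard estimators:

\bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i,
\qquad
\sigma_x = \sqrt{\frac{1}{n-1}\sum_{i=1}^{n}\left(x_i - \bar{x}\right)^2}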
Different benchmark types have different units: time values are displayed in ns, µs, ms and s; dimensionless counts are suffixed by k, M or G, indicating thousands, millions and billions; instructions by I, kI, MI and GI; cycles by C, kC, MC and GC; and memory by B, kB, MB and GB. In case of memory the prefixes are multiples of 1024 instead of 1000. For easier visual recognition of the values, by default the sample standard deviation is colored yellow if it is larger than 5% of the absolute value of the mean and red if it is larger than 25% of the absolute value of the mean. These thresholds can be overridden on the command line via --benchmark-yellow and --benchmark-red.
It's possible to have instanced benchmarks as well, see addInstancedBenchmarks().
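As a sketch, the InvSqrtBenchmark constructor from above could instead run each benchmark with three instances of 500 wall-time samples each; the benchmark bodies would then use testCaseInstanceId() to pick their input, as with instanced tests:

InvSqrtBenchmark::InvSqrtBenchmark() {
    addInstancedBenchmarks({&InvSqrtBenchmark::naive, &InvSqrtBenchmark::fast},
        500, 3, BenchmarkType::WallTime);
}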
Custom benchmarks
It's possible to specify a custom pair of functions for initiating the benchmark and returning the result using addCustomBenchmarks(). The benchmark end function returns an unsigned 64-bit integer indicating the measured amount, in units given by BenchmarkUnits. To further describe the value being measured you can call setBenchmarkName() in the benchmark begin function. Contrived example of benchmarking the number of copies made when inserting into a std::vector:
struct VectorBenchmark: TestSuite::Tester {
    explicit VectorBenchmark();

    void insert();

    void copyCountBegin();
    std::uint64_t copyCountEnd();
};

namespace {

std::uint64_t count = 0;

struct CopyCounter {
    CopyCounter() = default;
    CopyCounter(const CopyCounter&) { ++count; }
};

enum: std::size_t { InsertDataCount = 3 };

constexpr const struct {
    const char* name;
    std::size_t count;
} InsertData[InsertDataCount]{
    {"100", 100},
    {"1k", 1000},
    {"10k", 10000}
};

}

VectorBenchmark::VectorBenchmark() {
    addCustomInstancedBenchmarks({&VectorBenchmark::insert}, 1, InsertDataCount,
        &VectorBenchmark::copyCountBegin,
        &VectorBenchmark::copyCountEnd,
        BenchmarkUnits::Count);
}

void VectorBenchmark::insert() {
    setTestCaseDescription(InsertData[testCaseInstanceId()].name);

    std::vector<CopyCounter> data;
    CORRADE_BENCHMARK(1)
        for(std::size_t i = 0, end = InsertData[testCaseInstanceId()].count; i != end; ++i)
            data.push_back({});
}

void VectorBenchmark::copyCountBegin() {
    setBenchmarkName("copy count");
    count = 0;
}

std::uint64_t VectorBenchmark::copyCountEnd() {
    return count;
}

CORRADE_TEST_MAIN(VectorBenchmark)
Running the benchmark shows that the number of copies made by push_back() is significantly larger than the inserted element count, as the vector reallocates while growing:
Starting VectorBenchmark with 3 test cases...
 BENCH [1] 227.00    insert(100)@1x1 (copy count)
 BENCH [2]   2.02 k  insert(1k)@1x1 (copy count)
 BENCH [3]  26.38 k  insert(10k)@1x1 (copy count)
Finished VectorBenchmark with 0 errors out of 0 checks.
Specifying setup/teardown routines
While the common practice in C++ is to use RAII for resource lifetime management, sometimes you may need to execute arbitrary code at the beginning and end of each test case. For this, addTests(), addInstancedTests(), addRepeatedTests(), addRepeatedInstancedTests(), addBenchmarks(), addInstancedBenchmarks(), addCustomBenchmarks() and addCustomInstancedBenchmarks() all have an overload additionally taking a pair of parameter-less void functions for setup and teardown. The setup function is called before each test case run and the teardown function after it, regardless of whether the test case passed or failed.
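A minimal sketch of such an overload; FileTest and its members are hypothetical:

struct FileTest: TestSuite::Tester {
    explicit FileTest();

    void createTempFile();  /* called before each test case */
    void removeTempFile();  /* called after each test case */

    void read();
    void write();
};

FileTest::FileTest() {
    addTests({&FileTest::read, &FileTest::write},
             &FileTest::createTempFile, &FileTest::removeTempFile);
}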
Command-line options
Command-line options that make sense to be set globally for multiple test cases are also configurable via environment variables, for greater flexibility when for example running the tests in a batch via ctest.
Usage:
./my-test [-h|--help] [-c|--color on|off|auto] [--skip "N1 N2..."] [--skip-tests] [--skip-benchmarks] [--only "N1 N2..."] [--shuffle] [--repeat-every N] [--repeat-all N] [--abort-on-fail] [--no-xfail] [--benchmark TYPE] [--benchmark-discard N] [--benchmark-yellow N] [--benchmark-red N]
Arguments:
- -h, --help — display this help message and exit
- -c, --color on|off|auto — colored output (environment: CORRADE_TEST_COLOR, default: auto). The auto option enables color output in case an interactive terminal is detected. Note that on Windows it is possible to output colors only directly to an interactive terminal unless CORRADE_UTILITY_USE_ANSI_COLORS is defined.
- --skip "N1 N2..." — skip test cases with given numbers
- --skip-tests — skip all tests (environment: CORRADE_TEST_SKIP_TESTS=ON|OFF)
- --skip-benchmarks — skip all benchmarks (environment: CORRADE_TEST_SKIP_BENCHMARKS=ON|OFF)
- --only "N1 N2..." — run only test cases with given numbers
- --shuffle — randomly shuffle test case order (environment: CORRADE_TEST_SHUFFLE=ON|OFF)
- --repeat-every N — repeat every test case N times (environment: CORRADE_TEST_REPEAT_EVERY, default: 1)
- --repeat-all N — repeat all test cases N times (environment: CORRADE_TEST_REPEAT_ALL, default: 1)
- --abort-on-fail — abort after first failure (environment: CORRADE_TEST_ABORT_ON_FAIL=ON|OFF)
- --no-xfail — disallow expected failures (environment: CORRADE_TEST_NO_XFAIL=ON|OFF)
- --benchmark TYPE — default benchmark type (environment: CORRADE_BENCHMARK). Supported benchmark types:
  - wall-time — wall time spent
  - cpu-time — CPU time spent
  - cpu-cycles — CPU cycles spent (x86 only, gives zero result elsewhere)
- --benchmark-discard N — discard first N measurements of each benchmark (environment: CORRADE_BENCHMARK_DISCARD, default: 1)
- --benchmark-yellow N — deviation threshold for marking benchmark yellow (environment: CORRADE_BENCHMARK_YELLOW, default: 0.05)
- --benchmark-red N — deviation threshold for marking benchmark red (environment: CORRADE_BENCHMARK_RED, default: 0.25)
Compiling and running tests
In general, just compiling the executable and linking it to the TestSuite library is enough; no further setup is needed. When run, the test produces output on standard output / standard error and exits with a non-zero code in case of a test failure.
Using CMake
If you are using CMake, there's a convenience corrade_add_test() macro that creates the test executable, links the Corrade::TestSuite library to it and adds it to CTest. Besides that it is able to link other arbitrary libraries to the executable and specify a list of files that the tests use. It provides additional useful features on various platforms:
- If compiling for Emscripten, corrade_add_test() makes CTest run the resulting *.js file via Node.js. It is also able to bundle all files specified in FILES into the virtual Emscripten filesystem, making it easy to run file-based tests on this platform; all environment options are passed through as well. The macro also creates a runner for manual testing in a browser, see below for more information.
- If Xcode projects are generated via CMake and CORRADE_TESTSUITE_TARGET_XCTEST is enabled, corrade_add_test() builds the test executables in a way compatible with XCTest, making it easy to run them directly from Xcode. Running the tests via ctest will also use XCTest.
- If building for Android, corrade_add_test() will make CTest upload the test executables and all files specified in FILES onto the device or emulator via adb, run them there with all environment options passed through as well, and transfer the test results back to the host.
Example of using the corrade_add_test() macro. The test executable links against the JPEG library, and the *.jpg files will be available on desktop, Emscripten and Android in the path specified in JPEG_TEST_DIR, which is saved into the configure.h file inside the current build directory:
if(CORRADE_TARGET_EMSCRIPTEN OR CORRADE_TARGET_ANDROID)
    set(JPEG_TEST_DIR ".")
else()
    set(JPEG_TEST_DIR ${CMAKE_CURRENT_SOURCE_DIR})
endif()

# Contains just
#   #define JPEG_TEST_DIR "${JPEG_TEST_DIR}"
configure_file(${CMAKE_CURRENT_SOURCE_DIR}/configure.h.cmake
               ${CMAKE_CURRENT_BINARY_DIR}/configure.h)

corrade_add_test(JpegTest JpegTest.cpp
    LIBRARIES ${JPEG_LIBRARIES}
    FILES rgb.jpg rgba.jpg grayscale.jpg)
# PRIVATE added here, as target_include_directories() requires a scope keyword
target_include_directories(JpegTest PRIVATE ${CMAKE_CURRENT_BINARY_DIR})
Manually running the tests on Android
If not using CMake CTest, Android tests can be run manually. With a developer-enabled Android device connected or an Android emulator running, you can use ADB to upload the built test to the device's temp directory and run it there:
adb push <path-to-the-test-build>/MyTest /data/local/tmp
adb shell /data/local/tmp/MyTest
You can also use adb shell to log directly into the device shell and continue from there. All command-line arguments are supported.
Manually running the tests on Emscripten
When not using CMake CTest, Emscripten tests can be run directly using Node.js. Emscripten sideloads the WebAssembly or asm.js binary files from the current working directory, so you need to cd into the test build directory first:
cd <test-build-directory>
node MyTest.js
See also the --embed-files emcc option for a possibility to bundle test files with the executable.
Running Emscripten tests in a browser
Besides running tests using Node.js, it's possible to run each test case manually in a browser. Browsers require the executables to be accessed via a webserver — if you have Python installed, you can simply start serving the contents of your build directory using the following command:
cd <test-build-directory>
python -m http.server
The webserver is then available at http://localhost:8000 and each test can be run manually by opening its generated runner page (for example http://localhost:8000/MyTest.html). Unfortunately it's at the moment not possible to run all browser tests in a batch or automate the process in any other way.
Public types
- class TesterConfiguration — Tester configuration.
- enum class BenchmarkType { Default = 1, WallTime = 2, WallClock = int(WallTime) deprecated, CpuTime = 3, CpuCycles = 4 } — Benchmark type.
- enum class BenchmarkUnits { Nanoseconds = 100, Time = int(Nanoseconds) deprecated, Cycles = 101, Instructions = 102, Bytes = 103, Memory = int(Bytes) deprecated, Count = 104 } — Custom benchmark units.
- using Debug = Corrade::Utility::Debug — Alias for debug output.
- using Warning = Corrade::Utility::Warning — Alias for warning output.
- using Error = Corrade::Utility::Error — Alias for error output.
Constructors, destructors, conversion operators
- Tester(TesterConfiguration configuration = TesterConfiguration{}) explicit — Constructor.
Public functions
- auto arguments() -> std::pair<int&, char**> — Command-line arguments.
- template<class Derived> void addTests(std::initializer_list<void(Derived::*)()> tests) — Add test cases.
- template<class Derived> void addRepeatedTests(std::initializer_list<void(Derived::*)()> tests, std::size_t repeatCount) — Add repeated test cases.
- template<class Derived> void addTests(std::initializer_list<void(Derived::*)()> tests, void(Derived::*)() setup, void(Derived::*)() teardown) — Add test cases with explicit setup and teardown functions.
- template<class Derived> void addRepeatedTests(std::initializer_list<void(Derived::*)()> tests, std::size_t repeatCount, void(Derived::*)() setup, void(Derived::*)() teardown) — Add repeated test cases with explicit setup and teardown functions.
- template<class Derived> void addInstancedTests(std::initializer_list<void(Derived::*)()> tests, std::size_t instanceCount) — Add instanced test cases.
- template<class Derived> void addRepeatedInstancedTests(std::initializer_list<void(Derived::*)()> tests, std::size_t repeatCount, std::size_t instanceCount) — Add repeated instanced test cases.
- template<class Derived> void addInstancedTests(std::initializer_list<void(Derived::*)()> tests, std::size_t instanceCount, void(Derived::*)() setup, void(Derived::*)() teardown) — Add instanced test cases with explicit setup and teardown functions.
- template<class Derived> void addRepeatedInstancedTests(std::initializer_list<void(Derived::*)()> tests, std::size_t repeatCount, std::size_t instanceCount, void(Derived::*)() setup, void(Derived::*)() teardown) — Add repeated instanced test cases with explicit setup and teardown functions.
- template<class Derived> void addBenchmarks(std::initializer_list<void(Derived::*)()> benchmarks, std::size_t batchCount, BenchmarkType benchmarkType = BenchmarkType::Default) — Add benchmarks.
- template<class Derived> void addBenchmarks(std::initializer_list<void(Derived::*)()> benchmarks, std::size_t batchCount, void(Derived::*)() setup, void(Derived::*)() teardown, BenchmarkType benchmarkType = BenchmarkType::Default) — Add benchmarks with explicit setup and teardown functions.
- template<class Derived> void addCustomBenchmarks(std::initializer_list<void(Derived::*)()> benchmarks, std::size_t batchCount, void(Derived::*)() benchmarkBegin, std::uint64_t(Derived::*)() benchmarkEnd, BenchmarkUnits benchmarkUnits) — Add custom benchmarks.
- template<class Derived> void addCustomBenchmarks(std::initializer_list<void(Derived::*)()> benchmarks, std::size_t batchCount, void(Derived::*)() setup, void(Derived::*)() teardown, void(Derived::*)() benchmarkBegin, std::uint64_t(Derived::*)() benchmarkEnd, BenchmarkUnits benchmarkUnits) — Add custom benchmarks with explicit setup and teardown functions.
- template<class Derived> void addInstancedBenchmarks(std::initializer_list<void(Derived::*)()> benchmarks, std::size_t batchCount, std::size_t instanceCount, BenchmarkType benchmarkType = BenchmarkType::Default) — Add instanced benchmarks.
- template<class Derived> void addInstancedBenchmarks(std::initializer_list<void(Derived::*)()> benchmarks, std::size_t batchCount, std::size_t instanceCount, void(Derived::*)() setup, void(Derived::*)() teardown, BenchmarkType benchmarkType = BenchmarkType::Default) — Add instanced benchmarks with explicit setup and teardown functions.
- template<class Derived> void addCustomInstancedBenchmarks(std::initializer_list<void(Derived::*)()> benchmarks, std::size_t batchCount, std::size_t instanceCount, void(Derived::*)() benchmarkBegin, std::uint64_t(Derived::*)() benchmarkEnd, BenchmarkUnits benchmarkUnits) — Add custom instanced benchmarks.
- template<class Derived> void addCustomInstancedBenchmarks(std::initializer_list<void(Derived::*)()> benchmarks, std::size_t batchCount, std::size_t instanceCount, void(Derived::*)() setup, void(Derived::*)() teardown, void(Derived::*)() benchmarkBegin, std::uint64_t(Derived::*)() benchmarkEnd, BenchmarkUnits benchmarkUnits) — Add custom instanced benchmarks with explicit setup and teardown functions.
- auto testCaseId() const -> std::size_t — Test case ID.
- auto testCaseInstanceId() const -> std::size_t — Test case instance ID.
- auto testCaseRepeatId() const -> std::size_t — Test case repeat ID.
- void setTestName(const std::string& name) — Set custom test name.
- void setTestName(std::string&& name)
- void setTestCaseName(const std::string& name) — Set custom test case name.
- void setTestCaseName(std::string&& name)
- void setTestCaseDescription(const std::string& description) — Set test case description.
- void setTestCaseDescription(std::string&& description)
- void setBenchmarkName(const std::string& name) — Set benchmark name.
- void setBenchmarkName(std::string&& name)
Enum documentation

enum class Corrade::TestSuite::Tester::BenchmarkType

Benchmark type.

Enumerators:
- Default — Default. Equivalent to BenchmarkType::WallTime.
- WallTime — Wall time. Suitable for measuring events in microseconds and up. While the reported time is in nanoseconds, the actual timer granularity may differ from platform to platform. To measure shorter events, increase the number of iterations passed to CORRADE_BENCHMARK().
- WallClock — Deprecated alias of BenchmarkType::WallTime.
- CpuTime — CPU time. Suitable for measuring most events (microseconds and up). While the reported time is in nanoseconds, the actual timer granularity may differ from platform to platform (for example on Windows the CPU clock is reported in multiples of 100 ns). To measure shorter events, increase the number of iterations passed to CORRADE_BENCHMARK().
- CpuCycles — CPU cycle count. Suitable for measuring sub-millisecond events, but note that on newer architectures the cycle counter frequency is constant and thus the measured value is independent of CPU frequency, so it in fact measures time and not the actual cycles spent. See for example https:/
enum class Corrade::TestSuite::Tester::BenchmarkUnits

Custom benchmark units.

Unit of the measurements output from custom benchmarks.

Enumerators:
- Nanoseconds — Time in nanoseconds
- Time — Deprecated alias of BenchmarkUnits::Nanoseconds
- Cycles — Processor cycle count
- Instructions — Processor instruction count
- Bytes — Memory (in bytes)
- Memory — Deprecated alias of BenchmarkUnits::Bytes
- Count — Generic count
Typedef documentation

typedef Corrade::Utility::Debug Corrade::TestSuite::Tester::Debug

Alias for debug output.

For convenient debug output inside test cases (instead of using the fully qualified name):

void myTestCase() {
    int a = 4;
    Debug() << a;
    CORRADE_COMPARE(a + a, 8);
}

typedef Corrade::Utility::Warning Corrade::TestSuite::Tester::Warning

Alias for warning output.

See Debug for more information.

typedef Corrade::Utility::Error Corrade::TestSuite::Tester::Error

Alias for error output.

See Debug for more information.
Function documentation

Corrade::TestSuite::Tester::Tester(TesterConfiguration configuration = TesterConfiguration{}) explicit

Constructor.

Parameters:
- configuration — Optional configuration

std::pair<int&, char**> Corrade::TestSuite::Tester::arguments()

Command-line arguments.

Populated by the CORRADE_TEST_MAIN() macro.
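A sketch of forwarding these to a custom option parser in the test constructor, with parseMyOptions() being a hypothetical helper:

MyTest::MyTest() {
    /* first is argc (passed by reference), second is argv */
    parseMyOptions(arguments().first, arguments().second);

    addTests({&MyTest::addTwo,
              &MyTest::subtractThree});
}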
template<class Derived>
void Corrade::TestSuite::Tester::addTests(std::initializer_list<void(Derived::*)()> tests)

Add test cases.

Adds one or more test cases to be executed. It's not an error to call this function multiple times or add one test case more than once.
template<class Derived>
void Corrade::TestSuite::Tester::addRepeatedTests(std::initializer_list<void(Derived::*)()> tests, std::size_t repeatCount)

Add repeated test cases.

Unlike the above function, this one repeats each of the test cases until it fails or repeatCount is reached. Useful for stability or resource leak checking. Each test case appears in the output log only once. It's not an error to call this function multiple times or add a particular test case more than once — in that case it will appear in the output log once for each occurrence in the list.
template<class Derived>
void Corrade::TestSuite::Tester::addTests(std::initializer_list<void(Derived::*)()> tests, void(Derived::*)() setup, void(Derived::*)() teardown)

Add test cases with explicit setup and teardown functions.

Parameters:
- tests — List of test cases to run
- setup — Setup function
- teardown — Teardown function

In addition to the behavior of addTests() above, the setup function is called before every test case in the list and the teardown function is called after every test case in the list, regardless of whether it passed, failed or was skipped. Using verification macros in the setup or teardown functions is not allowed. It's not an error to call this function multiple times or add one test case more than once.
template<class Derived>
void Corrade::TestSuite::Tester::addRepeatedTests(std::initializer_list<void(Derived::*)()> tests, std::size_t repeatCount, void(Derived::*)() setup, void(Derived::*)() teardown)

Add repeated test cases with explicit setup and teardown functions.

Unlike the above function, this one repeats each of the test cases until it fails or repeatCount is reached. Useful for stability or resource leak checking. The setup and teardown functions are called again for each repeat of each test case. Each test case appears in the output log only once. It's not an error to call this function multiple times or add a particular test case more than once — in that case it will appear in the output log once for each occurrence in the list.
template<class Derived>
void Corrade::TestSuite::Tester::addInstancedTests(std::initializer_list<void(Derived::*)()> tests, std::size_t instanceCount)

Add instanced test cases.

Unlike addTests(), this function runs each of the test cases instanceCount times. Useful for data-driven tests. Each test case appears in the output once for each instance. It's not an error to call this function multiple times or add one test case more than once — in that case it will appear once for each instance of each occurrence in the list.
template<class Derived>
void Corrade::TestSuite::Tester::addRepeatedInstancedTests(std::initializer_list<void(Derived::*)()> tests, std::size_t repeatCount, std::size_t instanceCount)

Add repeated instanced test cases.

Unlike the above function, this one repeats each of the test case instances until it fails or repeatCount is reached. Useful for stability or resource leak checking. Each test case appears in the output once for each instance. It's not an error to call this function multiple times or add one test case more than once — in that case it will appear once for each instance of each occurrence in the list.
template<class Derived>
void Corrade::TestSuite::Tester::addInstancedTests(std::initializer_list<void(Derived::*)()> tests, std::size_t instanceCount, void(Derived::*)() setup, void(Derived::*)() teardown)

Add instanced test cases with explicit setup and teardown functions.

Parameters:
- tests — List of test cases to run
- instanceCount — Instance count
- setup — Setup function
- teardown — Teardown function

In addition to the behavior of addInstancedTests() above, the setup function is called before every instance of every test case in the list and the teardown function is called after every instance of every test case in the list, regardless of whether it passed, failed or was skipped. Using verification macros in the setup or teardown functions is not allowed. It's not an error to call this function multiple times or add one test case more than once — in that case it will appear once for each instance of each occurrence in the list.
template<class Derived>
void Corrade::TestSuite::Tester::addRepeatedInstancedTests(std::initializer_list<void(Derived::*)()> tests, std::size_t repeatCount, std::size_t instanceCount, void(Derived::*)() setup, void(Derived::*)() teardown)

Add repeated instanced test cases with explicit setup and teardown functions.

Unlike the above function, this one repeats each of the test case instances until it fails or repeatCount is reached. Useful for stability or resource leak checking. The setup and teardown functions are called again for each repeat of each instance of each test case. The test case appears in the output once for each instance. It's not an error to call this function multiple times or add one test case more than once — in that case it will appear once for each instance of each occurrence in the list.
template<class Derived>
void Corrade::TestSuite::Tester::addBenchmarks(std::initializer_list<void(Derived::*)()> benchmarks, std::size_t batchCount, BenchmarkType benchmarkType = BenchmarkType::Default)

Add benchmarks.

Parameters:
- benchmarks — List of benchmarks to run
- batchCount — Batch count
- benchmarkType — Benchmark type

For each added benchmark measures the time spent executing the code inside a statement or block denoted by CORRADE_BENCHMARK(). The batchCount parameter specifies how many batches will be run to make the measurement more precise, while the batch size parameter passed to CORRADE_BENCHMARK() specifies how many iterations will be done in each batch to minimize the measurement overhead.
template<class Derived>
void Corrade::TestSuite::Tester::addBenchmarks(std::initializer_list<void(Derived::*)()> benchmarks, std::size_t batchCount, void(Derived::*)() setup, void(Derived::*)() teardown, BenchmarkType benchmarkType = BenchmarkType::Default)

Add benchmarks with explicit setup and teardown functions.

Parameters:
- benchmarks — List of benchmarks to run
- batchCount — Batch count
- setup — Setup function
- teardown — Teardown function
- benchmarkType — Benchmark type

In addition to the behavior of addBenchmarks() above, the setup function is called before every batch of every benchmark in the list and the teardown function is called after every batch of every benchmark in the list, regardless of whether it passed, failed or was skipped. Using verification macros in the setup or teardown functions is not allowed. It's not an error to call this function multiple times or add one benchmark more than once.
template<class Derived>
void Corrade::TestSuite::Tester::addCustomBenchmarks(std::initializer_list<void(Derived::*)()> benchmarks, std::size_t batchCount, void(Derived::*)() benchmarkBegin, std::uint64_t(Derived::*)() benchmarkEnd, BenchmarkUnits benchmarkUnits)

Add custom benchmarks.

Parameters:
- benchmarks — List of benchmarks to run
- batchCount — Batch count
- benchmarkBegin — Benchmark begin function
- benchmarkEnd — Benchmark end function
- benchmarkUnits — Benchmark units

Unlike the above functions, this one uses user-supplied measurement functions. The benchmarkBegin function starts the measurement, the benchmarkEnd function ends the measurement and returns the measured value, expressed in benchmarkUnits. It's not an error to call this function multiple times or add one benchmark more than once.
template<class Derived>
void Corrade::TestSuite::Tester::addCustomBenchmarks(std::initializer_list<void(Derived::*)()> benchmarks, std::size_t batchCount, void(Derived::*)() setup, void(Derived::*)() teardown, void(Derived::*)() benchmarkBegin, std::uint64_t(Derived::*)() benchmarkEnd, BenchmarkUnits benchmarkUnits)

Add custom benchmarks with explicit setup and teardown functions.

Parameters:
- benchmarks — List of benchmarks to run
- batchCount — Batch count
- setup — Setup function
- teardown — Teardown function
- benchmarkBegin — Benchmark begin function
- benchmarkEnd — Benchmark end function
- benchmarkUnits — Benchmark units

In addition to the behavior of addCustomBenchmarks() above, the setup function is called before every batch of every benchmark in the list and the teardown function is called after every batch of every benchmark in the list, regardless of whether it passed, failed or was skipped. Using verification macros in the setup or teardown functions is not allowed. It's not an error to call this function multiple times or add one benchmark more than once.
template<class Derived>
void Corrade::TestSuite::Tester::addInstancedBenchmarks(std::initializer_list<void(Derived::*)()> benchmarks, std::size_t batchCount, std::size_t instanceCount, BenchmarkType benchmarkType = BenchmarkType::Default)

Add instanced benchmarks.

Parameters:
- benchmarks — List of benchmarks to run
- batchCount — Batch count
- instanceCount — Instance count
- benchmarkType — Benchmark type

Unlike addBenchmarks(), this function runs each of the benchmarks instanceCount times. Useful for data-driven tests. Each benchmark appears in the output once for each instance. It's not an error to call this function multiple times or add one benchmark more than once — in that case it will appear once for each instance of each occurrence in the list.
template<class Derived>
void Corrade::TestSuite::Tester::addInstancedBenchmarks(std::initializer_list<void(Derived::*)()> benchmarks, std::size_t batchCount, std::size_t instanceCount, void(Derived::*)() setup, void(Derived::*)() teardown, BenchmarkType benchmarkType = BenchmarkType::Default)

Add instanced benchmarks with explicit setup and teardown functions.

Parameters:
- benchmarks — List of benchmarks to run
- batchCount — Batch count
- instanceCount — Instance count
- setup — Setup function
- teardown — Teardown function
- benchmarkType — Benchmark type

In addition to the behavior of the above function, the setup function is called before every instance of every batch of every benchmark in the list and the teardown function is called after every instance of every batch of every benchmark in the list, regardless of whether it passed, failed or was skipped. Using verification macros in the setup or teardown functions is not allowed. It's not an error to call this function multiple times or add one benchmark more than once — in that case it will appear once for each instance of each occurrence in the list.
template<class Derived>
void Corrade::TestSuite::Tester::addCustomInstancedBenchmarks(std::initializer_list<void(Derived::*)()> benchmarks, std::size_t batchCount, std::size_t instanceCount, void(Derived::*)() benchmarkBegin, std::uint64_t(Derived::*)() benchmarkEnd, BenchmarkUnits benchmarkUnits)

Add custom instanced benchmarks.

Parameters:
- benchmarks — List of benchmarks to run
- batchCount — Batch count
- instanceCount — Instance count
- benchmarkBegin — Benchmark begin function
- benchmarkEnd — Benchmark end function
- benchmarkUnits — Benchmark units

Unlike the above functions, this one uses user-supplied measurement functions. The benchmarkBegin function starts the measurement, the benchmarkEnd function ends the measurement and returns the measured value, expressed in benchmarkUnits. It's not an error to call this function multiple times or add one benchmark more than once — in that case it will appear once for each instance of each occurrence in the list.
template<class Derived>
void Corrade::TestSuite::Tester::addCustomInstancedBenchmarks(std::initializer_list<void(Derived::*)()> benchmarks, std::size_t batchCount, std::size_t instanceCount, void(Derived::*)() setup, void(Derived::*)() teardown, void(Derived::*)() benchmarkBegin, std::uint64_t(Derived::*)() benchmarkEnd, BenchmarkUnits benchmarkUnits)

Add custom instanced benchmarks with explicit setup and teardown functions.

Parameters:
- benchmarks — List of benchmarks to run
- batchCount — Batch count
- instanceCount — Instance count
- setup — Setup function
- teardown — Teardown function
- benchmarkBegin — Benchmark begin function
- benchmarkEnd — Benchmark end function
- benchmarkUnits — Benchmark units

In addition to the behavior of addCustomBenchmarks() above, the setup function is called before every batch of every benchmark in the list and the teardown function is called after every batch of every benchmark in the list, regardless of whether it passed, failed or was skipped. Using verification macros in the setup or teardown functions is not allowed. It's not an error to call this function multiple times or add one benchmark more than once — in that case it will appear once for each instance of each occurrence in the list.
std::size_t Corrade::TestSuite::Tester::testCaseId() const

Test case ID.

Returns the ID of the test case that is currently executing, starting from 1. The value is undefined if called outside of test cases and setup/teardown functions.

std::size_t Corrade::TestSuite::Tester::testCaseInstanceId() const

Test case instance ID.

Returns the instance ID of the instanced test case that is currently executing, starting from 0. The value is undefined if called outside of instanced test cases and setup/teardown functions.

std::size_t Corrade::TestSuite::Tester::testCaseRepeatId() const

Test case repeat ID.

Returns the repeat ID of the repeated test case that is currently executing, starting from 0. The value is undefined if called outside of repeated test cases and setup/teardown functions.
void Corrade::TestSuite::Tester::setTestName(const std::string& name)

Set custom test name.

By default the test name is gathered together with the test filename by the CORRADE_TEST_MAIN() macro.

void Corrade::TestSuite::Tester::setTestName(std::string&& name)

This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts.
void Corrade::TestSuite::Tester::setTestCaseName(const std::string& name)

Set custom test case name.

By default the test case name is gathered in the check macros and is equivalent to the following:

setTestCaseName(__func__);

void Corrade::TestSuite::Tester::setTestCaseName(std::string&& name)

This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts.
void Corrade::TestSuite::Tester::setTestCaseDescription(const std::string& description)

Set test case description.

Additional text displayed after the test case name. By default the description is empty for non-instanced test cases and the instance ID for instanced test cases.

void Corrade::TestSuite::Tester::setTestCaseDescription(std::string&& description)

This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts.
void Corrade::TestSuite::Tester::setBenchmarkName(const std::string& name)

Set benchmark name.

In case of addCustomBenchmarks() and addCustomInstancedBenchmarks() provides the name for the unit measured, for example "wall time".

void Corrade::TestSuite::Tester::setBenchmarkName(std::string&& name)

This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts.