Test Framework
The Zephyr Test Framework (Ztest) provides a simple testing framework intended to be used during development. It provides basic assertion macros and a generic test structure.
The framework can be used in two ways, either as a generic framework for integration testing, or for unit testing specific modules.
Creating a test suite
Using Ztest to create a test suite is as easy as calling the ZTEST_SUITE macro, which accepts the following arguments:
- suite_name - The name of the suite. This name must be unique within a single binary.
- ztest_suite_predicate_t - An optional predicate function to allow choosing when the test will run. The predicate will get a pointer to the global state passed in through ztest_run_all() and should return a boolean to decide if the suite should run.
- ztest_suite_setup_t - An optional setup function which returns a test fixture. This will be called and run once per test suite run.
- ztest_suite_before_t - An optional before function which will run before every single test in this suite.
- ztest_suite_after_t - An optional after function which will run after every single test in this suite.
- ztest_suite_teardown_t - An optional teardown function which will run at the end of all the tests in the suite.
Below is an example of a test suite using a predicate:
#include <zephyr/ztest.h>
#include "test_state.h"
static bool predicate(const void *global_state)
{
	return ((const struct test_state *)global_state)->x == 5;
}
ZTEST_SUITE(alternating_suite, predicate, NULL, NULL, NULL, NULL);
Adding tests to a suite
There are 4 macros used to add a test to a suite:
- ZTEST(suite_name, test_name) - Adds a test named test_name to the suite named suite_name.
- ZTEST_USER(suite_name, test_name) - Behaves the same as ZTEST, except that when CONFIG_USERSPACE is enabled, the test will be run in a userspace thread.
- ZTEST_F(suite_name, test_name) - Behaves the same as ZTEST, except that the test function will already include a variable named fixture with the type <suite_name>_fixture.
- ZTEST_USER_F(suite_name, test_name) - Combines the fixture feature of ZTEST_F with the userspace threading of ZTEST_USER.
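As a short sketch (the suite and test names here are illustrative, not taken from the text above), the first two macros are used like this:

```c
#include <zephyr/ztest.h>

/* Hypothetical suite for illustration. */
ZTEST_SUITE(my_suite, NULL, NULL, NULL, NULL, NULL);

/* Runs in kernel space. */
ZTEST(my_suite, test_basic)
{
	zassert_equal(4, 2 + 2, "2 + 2 should equal 4");
}

/* Runs in a userspace thread when CONFIG_USERSPACE=y. */
ZTEST_USER(my_suite, test_in_userspace)
{
	zassert_true(true, NULL);
}
```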
Test fixtures
Test fixtures can be used to help simplify repeated test setup operations. In many cases, tests in the same suite will require some initial setup followed by some form of reset between each test. This is achieved via fixtures in the following way:
#include <stdlib.h>
#include <string.h>

#include <zephyr/ztest.h>

struct my_suite_fixture {
	size_t max_size;
	size_t size;
	uint8_t buff[1];
};

static void *my_suite_setup(void)
{
	/* Allocate the fixture with 256 byte buffer */
	struct my_suite_fixture *fixture = malloc(sizeof(struct my_suite_fixture) + 255);

	zassume_not_null(fixture, NULL);
	fixture->max_size = 256;
	return fixture;
}

static void my_suite_before(void *f)
{
	struct my_suite_fixture *fixture = (struct my_suite_fixture *)f;

	memset(fixture->buff, 0, fixture->max_size);
	fixture->size = 0;
}

static void my_suite_teardown(void *f)
{
	free(f);
}

ZTEST_SUITE(my_suite, NULL, my_suite_setup, my_suite_before, NULL, my_suite_teardown);

ZTEST_F(my_suite, test_feature_x)
{
	zassert_equal(0, fixture->size);
	zassert_equal(256, fixture->max_size);
}
Using memory allocated by a test fixture in a userspace thread, such as during execution of ZTEST_USER or ZTEST_USER_F, requires that memory to be declared userspace accessible. This is because the fixture memory is owned and initialized by kernel space. The Ztest framework provides the ZTEST_DMEM and ZTEST_BMEM macros for use of such user/kernel space shared memory.
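For example (a sketch; the variable names are illustrative), globals shared with userspace tests can be declared as:

```c
#include <zephyr/ztest.h>

/* Initialized (data section) variable accessible from userspace tests. */
ZTEST_DMEM int shared_counter = 0;

/* Zero-initialized (bss section) buffer accessible from userspace tests. */
ZTEST_BMEM uint8_t shared_buf[64];
```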
Advanced features
Test result expectations
Some tests were made to be broken. In cases where the test is expected to fail or skip due to the nature of the code, it’s possible to annotate the test as such. For example:
#include <zephyr/ztest.h>

ZTEST_SUITE(my_suite, NULL, NULL, NULL, NULL, NULL);

ZTEST_EXPECT_FAIL(my_suite, test_fail);
ZTEST(my_suite, test_fail)
{
	/** This will fail the test */
	zassert_true(false, NULL);
}

ZTEST_EXPECT_SKIP(my_suite, test_skip);
ZTEST(my_suite, test_skip)
{
	/** This will skip the test */
	zassume_true(false, NULL);
}
In this example, the tests above would normally be marked as failed and skipped respectively. Because of the expectation annotations, Ztest instead marks both as passed.
Test rules
Test rules are a way to run the same logic for every test and every suite. There are a lot of cases where you might want to reset some state for every test in the binary (regardless of which suite is currently running). As an example, this could be to reset mocks, reset emulators, flush the UART, etc.:
#include <zephyr/fff.h>
#include <zephyr/ztest.h>
#include "test_mocks.h"
DEFINE_FFF_GLOBALS;
DEFINE_FAKE_VOID_FUN(my_weak_func);
static void fff_reset_rule_before(const struct ztest_unit_test *test, void *fixture)
{
	ARG_UNUSED(test);
	ARG_UNUSED(fixture);
	RESET_FAKE(my_weak_func);
}
ZTEST_RULE(fff_reset_rule, fff_reset_rule_before, NULL);
A custom test_main
While the Ztest framework provides a default test_main()
function, it’s possible that some
applications will want to provide custom behavior. This is particularly true if there’s some global
state that the tests depend on and that state either cannot be replicated or is difficult to
replicate without starting the process over. For example, one such state could be a power sequence.
Assuming there’s a board with several steps in the power-on sequence, a test suite can be written
using the predicate
to control when it would run. In that case, the test_main()
function can be written as follows:
#include <zephyr/ztest.h>
#include "my_test.h"
void test_main(void)
{
	struct power_sequence_state state;

	/* Only suites that use a predicate checking for phase == PWR_PHASE_0 will run. */
	state.phase = PWR_PHASE_0;
	ztest_run_all(&state, false, 1, 1);

	/* Only suites that use a predicate checking for phase == PWR_PHASE_1 will run. */
	state.phase = PWR_PHASE_1;
	ztest_run_all(&state, false, 1, 1);

	/* Only suites that use a predicate checking for phase == PWR_PHASE_2 will run. */
	state.phase = PWR_PHASE_2;
	ztest_run_all(&state, false, 1, 1);

	/* Check that all the suites in this binary ran at least once. */
	ztest_verify_all_test_suites_ran();
}
Quick start - Integration testing
A simple working base is located at samples/subsys/testsuite/integration.
To make a test application for the bar component of foo, you should copy the
sample folder to tests/foo/bar
and edit files there adjusting for your test
application’s purposes.
To build and execute all applicable test scenarios defined in your test application use the Twister tool, for example:
./scripts/twister -T tests/foo/bar/
To select just one of the test scenarios, run Twister with the --scenario option:
./scripts/twister --scenario tests/foo/bar/your.test.scenario.name
In the command line above, tests/foo/bar is the path to your test application and your.test.scenario.name references a test scenario defined in the testcase.yaml file, such as sample.testing.ztest in the boilerplate test suite sample.
See the Twister test project diagram for more details on how Twister deals with Ztest applications.
The sample contains the following files:
CMakeLists.txt
# SPDX-License-Identifier: Apache-2.0

cmake_minimum_required(VERSION 3.20.0)
find_package(Zephyr REQUIRED HINTS $ENV{ZEPHYR_BASE})
project(integration)

FILE(GLOB app_sources src/*.c)
target_sources(app PRIVATE ${app_sources})
testcase.yaml
tests:
  # section.subsection
  sample.testing.ztest:
    build_only: true
    platform_allow:
      - native_posix
      - native_sim
    integration_platforms:
      - native_sim
    tags: test_framework
prj.conf
CONFIG_ZTEST=y
src/main.c (see best practices)
/*
 * Copyright (c) 2016 Intel Corporation
 *
 * SPDX-License-Identifier: Apache-2.0
 */

#include <zephyr/ztest.h>

ZTEST_SUITE(framework_tests, NULL, NULL, NULL, NULL, NULL);

/**
 * @brief Test Asserts
 *
 * This test verifies various assert macros provided by ztest.
 *
 */
ZTEST(framework_tests, test_assert)
{
	zassert_true(1, "1 was false");
	zassert_false(0, "0 was true");
	zassert_is_null(NULL, "NULL was not NULL");
	zassert_not_null("foo", "\"foo\" was NULL");
	zassert_equal(1, 1, "1 was not equal to 1");
	zassert_equal_ptr(NULL, NULL, "NULL was not equal to NULL");
}
A test application may consist of multiple test suites that either can be testing functionality or APIs. Functions implementing a test case should follow the guidelines below:
Test case function names should be prefixed with test_
Test cases should be documented using doxygen
Test case function names should be unique within the section or component being tested
For example:
/**
* @brief Test Asserts
*
* This test case verifies the zassert_true macro.
*/
ZTEST(my_suite, test_assert)
{
	zassert_true(1, "1 was false");
}
Listing Tests
Tests (test applications) in the Zephyr tree consist of many test scenarios that run as
part of a project and test similar functionality, for example an API or a
feature. The twister
script can parse the test scenarios, suites and cases in all
test applications or a subset of them, and can generate reports on a granular
level, i.e. if test cases have passed or failed or if they were blocked or skipped.
Twister parses the source files looking for test case names, so you can list all kernel test cases, for example, by running:
./scripts/twister --list-tests -T tests/kernel
Skipping Tests
Special- or architecture-specific tests cannot run on all
platforms and architectures, however we still want to count those and
report them as being skipped. Because the test inventory and
the list of tests is extracted from the code, adding
conditionals inside the test suite is sub-optimal. Tests that need
to be skipped for a certain platform or feature need to explicitly
report a skip using ztest_test_skip()
or Z_TEST_SKIP_IFDEF
. If the test runs,
it needs to report either a pass or fail. For example:
#ifdef CONFIG_TEST1
ZTEST(common, test_test1)
{
	zassert_true(1, "true");
}
#else
ZTEST(common, test_test1)
{
	ztest_test_skip();
}
#endif

ZTEST(common, test_test2)
{
	Z_TEST_SKIP_IFDEF(CONFIG_BUGxxxxx);
	zassert_equal(1, 0, NULL);
}
ZTEST_SUITE(common, NULL, NULL, NULL, NULL, NULL);
Quick start - Unit testing
Ztest can be used for unit testing. This means that rather than including the entire Zephyr OS for testing a single function, you can focus the testing effort on the specific module in question. This speeds up testing since only the module has to be compiled in, and the tested functions are called directly.
Examples of unit tests can be found in the tests/unit/ folder.
In order to declare the unit tests present in a source folder, you need to add
the relevant source files to the testbinary
target from the CMake
unittest component. See a minimal
example below:
cmake_minimum_required(VERSION 3.20.0)
project(app)
find_package(Zephyr COMPONENTS unittest REQUIRED HINTS $ENV{ZEPHYR_BASE})
target_sources(testbinary PRIVATE main.c)
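A minimal main.c to match could look like the following sketch (the suite and test names are illustrative):

```c
#include <zephyr/ztest.h>

ZTEST_SUITE(unit_suite, NULL, NULL, NULL, NULL, NULL);

ZTEST(unit_suite, test_example)
{
	zassert_equal(4, 2 + 2, NULL);
}
```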
Since you won’t be including basic kernel data structures that most code depends on, you have to provide function stubs in the test. Ztest provides some helpers for mocking functions, as demonstrated below.
In a unit test, mock objects can simulate the behavior of complex real objects and are used to decide whether a test failed or passed by verifying whether an interaction with an object occurred, and if required, to assert the order of that interaction.
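As a sketch of that interaction-based style using the fff helpers shipped with Zephyr (dep_notify and the suite name are illustrative; here the fake is called directly in place of real code under test):

```c
#include <zephyr/fff.h>
#include <zephyr/ztest.h>

DEFINE_FFF_GLOBALS;

/* Fake for a hypothetical dependency of the code under test. */
FAKE_VOID_FUNC(dep_notify, int);

ZTEST_SUITE(mock_suite, NULL, NULL, NULL, NULL, NULL);

ZTEST(mock_suite, test_interaction)
{
	/* Exercise the code under test; for illustration we call the
	 * fake directly. */
	dep_notify(42);

	/* Verify that the interaction occurred with the expected argument. */
	zassert_equal(1, dep_notify_fake.call_count, NULL);
	zassert_equal(42, dep_notify_fake.arg0_val, NULL);
}
```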
Best practices for declaring the test suite
twister and other validation tools need to obtain the list of test cases that a Zephyr ztest test image will expose.
Rationale
All of this is for the purpose of traceability. It’s not enough to have only a semaphore test application; we also need to show that we have testpoints for all APIs and functionality, and that we can trace back to documentation of the API and to functional requirements.
The idea is that test reports show results for every test case as passed, failed, blocked, or skipped. Reporting on only the high-level test application, particularly when tests do too many things, is too vague.
Other questions:
Why not pre-scan with CPP and then parse? or post scan the ELF file?
If C pre-processing or building fails because of any issue, then we won’t be able to enumerate the subcases.
Why not declare them in the YAML test configuration?
A separate test case description file would be harder to maintain than just keeping the information in the test source files themselves – only one file to update when changes are made eliminates duplication.
Stress test framework
Zephyr stress test framework (Ztress) provides an environment for executing user functions in multiple priority contexts. It can be used to validate that code is resilient to preemptions. The framework tracks the number of executions and preemptions for each context. Execution can have various completion conditions like timeout, number of executions or number of preemptions.
The framework sets up the environment by creating the requested number of threads (each at a different priority) and optionally starting a timer. For each context, a user function (different for each context) is called and then the context sleeps for a randomized number of system ticks. The framework tracks CPU load and adjusts the sleeping periods to achieve higher CPU load. In order to increase the probability of preemptions, the system clock frequency should be relatively high. The default 100 Hz on QEMU x86 is much too low and it is recommended to increase it to 100 kHz.
The stress test environment is set up and executed using ZTRESS_EXECUTE, which accepts a variable number of arguments. Each argument is a context specified by the ZTRESS_TIMER or ZTRESS_THREAD macros. Contexts
are specified in priority descending order. Each context specifies completion
conditions by providing the minimum number of executions and preemptions. When all
conditions are met and the execution has completed, an execution report is printed
and the macro returns. Note that while the test is executing, a progress report is
periodically printed.
Execution can be prematurely completed by specifying a test timeout (ztress_set_timeout()) or an explicit abort (ztress_abort()).
The user function parameters contain an execution counter and a flag indicating whether it is the last execution.
The example below presents how to set up and run 3 contexts (one of which is a k_timer interrupt handler context). The completion criteria is set to at least 10000 executions of each context and 1000 preemptions of the lowest priority context. Additionally, the timeout is configured to complete after 10 seconds if those conditions are not met. The last argument of each context is the initial sleep time, which will be adjusted throughout the test to achieve the highest CPU load.
ztress_set_timeout(K_MSEC(10000));

ZTRESS_EXECUTE(ZTRESS_TIMER(foo_0, user_data_0, 10000, Z_TIMEOUT_TICKS(20)),
	       ZTRESS_THREAD(foo_1, user_data_1, 10000, 0, Z_TIMEOUT_TICKS(20)),
	       ZTRESS_THREAD(foo_2, user_data_2, 10000, 1000, Z_TIMEOUT_TICKS(20)));
Configuration
Static configuration of Ztress contains:
- CONFIG_ZTRESS_MAX_THREADS - Number of supported threads.
- CONFIG_ZTRESS_STACK_SIZE - Stack size of created threads.
- CONFIG_ZTRESS_REPORT_PROGRESS_MS - Test progress report interval.
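For instance, a test's prj.conf might tune these options as follows (the values shown are illustrative, not defaults):

```
CONFIG_ZTRESS=y
CONFIG_ZTRESS_MAX_THREADS=3
CONFIG_ZTRESS_STACK_SIZE=2048
CONFIG_ZTRESS_REPORT_PROGRESS_MS=1000
```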
API reference
Running tests
- group ztest_test
This module eases the testing process by providing helpful macros and other testing structures.
Defines
-
ZTEST_EXPECT_FAIL(_suite_name, _test_name)
Expect a test to fail (mark it passing if it failed)
Adding this macro to your logic will allow the failing test to be considered passing, example:
ZTEST_EXPECT_FAIL(my_suite, test_x);

ZTEST(my_suite, test_x)
{
	zassert_true(false, NULL);
}
- Parameters:
_suite_name – The name of the suite
_test_name – The name of the test
-
ZTEST_EXPECT_SKIP(_suite_name, _test_name)
Expect a test to skip (mark it passing if it skipped)
Adding this macro to your logic will allow the skipped test to be considered passing, example:
ZTEST_EXPECT_SKIP(my_suite, test_x);

ZTEST(my_suite, test_x)
{
	zassume_true(false, NULL);
}
- Parameters:
_suite_name – The name of the suite
_test_name – The name of the test
-
ZTEST_TEST_COUNT
Number of registered unit tests.
-
ZTEST_SUITE_COUNT
Number of registered test suites.
-
ZTEST_SUITE(SUITE_NAME, PREDICATE, setup_fn, before_fn, after_fn, teardown_fn)
Create and register a ztest suite.
Using this macro creates a new test suite. It then creates a struct ztest_suite_node in a specific linker section.
Tests can then be run by calling ztest_run_test_suites(const void *state) by passing in the current state. See the documentation for ztest_run_test_suites for more info.
- Parameters:
SUITE_NAME – The name of the suite
PREDICATE – A function to test against the state and determine if the test should run.
setup_fn – The setup function to call before running this test suite
before_fn – The function to call before each unit test in this suite
after_fn – The function to call after each unit test in this suite
teardown_fn – The function to call after running all the tests in this suite
-
ZTEST_DMEM
Make data section used by Ztest userspace accessible.
-
ZTEST_BMEM
Make bss section used by Ztest userspace accessible.
-
ZTEST_SECTION
Ztest data section for accessing data from userspace.
-
ZTEST(suite, fn)
Create and register a new unit test.
Calling this macro will create a new unit test and attach it to the declared suite. The suite does not need to be defined in the same compilation unit.
- Parameters:
suite – The name of the test suite to attach this test
fn – The test function to call.
-
ZTEST_USER(suite, fn)
Define a test function that should run as a user thread.
This macro behaves exactly the same as ZTEST, but calls the test function in user space if CONFIG_USERSPACE was enabled.
- Parameters:
suite – The name of the test suite to attach this test
fn – The test function to call.
-
ZTEST_F(suite, fn)
Define a test function.
This macro behaves exactly the same as ZTEST(), but the function takes an argument for the fixture of type struct suite##_fixture* named fixture.
- Parameters:
suite – The name of the test suite to attach this test
fn – The test function to call.
-
ZTEST_USER_F(suite, fn)
Define a test function that should run as a user thread.
If CONFIG_USERSPACE is not enabled, this is functionally identical to ZTEST_F(). The test function takes a single fixture argument of type struct suite##_fixture* named fixture.
- Parameters:
suite – The name of the test suite to attach this test
fn – The test function to call.
-
ZTEST_RULE(name, before_each_fn, after_each_fn)
Define a test rule that will run before/after each unit test.
Functions defined here will run before/after each unit test for every test suite. Along with the callback, the test functions are provided a pointer to the test being run, and the data. This provides a mechanism for tests to perform custom operations depending on the specific test or the data (for example logging may use the test’s name).
Ordering:
- A test rule’s before function will run before the suite’s before function. This is done to allow the test suite’s customization to take precedence over the rule, which is applied to all suites.
- A test rule’s after function is not guaranteed to run in any particular order.
- Parameters:
name – The name for the test rule (must be unique within the compilation unit)
before_each_fn – The callback function (ztest_rule_cb) to call before each test (may be NULL)
after_each_fn – The callback function (ztest_rule_cb) to call after each test (may be NULL)
-
ztest_run_test_suite(suite, shuffle, suite_iter, case_iter)
Run the specified test suite.
- Parameters:
suite – Test suite to run.
shuffle – Shuffle tests
suite_iter – Test suite repetitions.
case_iter – Test case repetitions.
Typedefs
-
typedef void *(*ztest_suite_setup_t)(void)
Setup function to run before running this suite.
- Return:
Pointer to the data structure that will be used throughout this test suite
-
typedef void (*ztest_suite_before_t)(void *fixture)
Function to run before each test in this suite.
- Param fixture:
The test suite’s fixture returned from setup()
-
typedef void (*ztest_suite_after_t)(void *fixture)
Function to run after each test in this suite.
- Param fixture:
The test suite’s fixture returned from setup()
-
typedef void (*ztest_suite_teardown_t)(void *fixture)
Teardown function to run after running this suite.
- Param fixture:
The test suite’s data returned from setup()
-
typedef bool (*ztest_suite_predicate_t)(const void *global_state)
An optional predicate function to determine if the test should run.
If NULL, then the test will only run once on the first attempt.
- Param global_state:
The current state of the test application.
- Return:
True if the suite should be run; false to skip.
-
typedef void (*ztest_rule_cb)(const struct ztest_unit_test *test, void *data)
Test rule callback function signature.
The function signature that can be used to register a test rule’s before/after callback. This provides access to the test and the fixture data (if provided).
- Param test:
Pointer to the unit test in context
- Param data:
Pointer to the test’s fixture data (may be NULL)
Enums
-
enum ztest_expected_result
The expected result of a test.
Values:
-
enumerator ZTEST_EXPECTED_RESULT_FAIL = 0
Expect a test to fail.
-
enumerator ZTEST_EXPECTED_RESULT_SKIP
Expect a test to skip.
-
enum ztest_result
The result of the current running test.
It’s possible that the setup function sets the result to ZTEST_RESULT_SUITE_* which will apply the failure/skip to every test in the suite.
Values:
-
enumerator ZTEST_RESULT_PENDING
-
enumerator ZTEST_RESULT_PASS
-
enumerator ZTEST_RESULT_FAIL
-
enumerator ZTEST_RESULT_SKIP
-
enumerator ZTEST_RESULT_SUITE_SKIP
-
enumerator ZTEST_RESULT_SUITE_FAIL
-
enum ztest_phase
Each enum member represents a distinct phase of execution for the test binary.
TEST_PHASE_FRAMEWORK is active when internal ztest code is executing; the rest refer to corresponding phases of user test code.
Values:
-
enumerator TEST_PHASE_SETUP
-
enumerator TEST_PHASE_BEFORE
-
enumerator TEST_PHASE_TEST
-
enumerator TEST_PHASE_AFTER
-
enumerator TEST_PHASE_TEARDOWN
-
enumerator TEST_PHASE_FRAMEWORK
Functions
-
void ztest_run_all(const void *state, bool shuffle, int suite_iter, int case_iter)
Default entry point for running or listing registered unit tests.
- Parameters:
state – The current state of the machine as it relates to the test executable.
shuffle – Shuffle tests
suite_iter – Test suite repetitions.
case_iter – Test case repetitions.
-
int ztest_run_test_suites(const void *state, bool shuffle, int suite_iter, int case_iter)
Run the registered unit tests which return true from their predicate function.
- Parameters:
state – The current state of the machine as it relates to the test executable.
shuffle – Shuffle tests
suite_iter – Test suite repetitions.
case_iter – Test case repetitions.
- Returns:
The number of tests that ran.
-
void ztest_verify_all_test_suites_ran(void)
Fails the test if any of the registered tests did not run.
When registering test suites, a predicate function can be provided to determine WHEN the test should run. It is possible that a test suite could be registered but the predicate always prevents it from running. In cases where a test should make sure that ALL suites ran at least once, this function may be called at the end of test_main(). It will cause the test to fail if any suite was registered but never ran.
-
void ztest_test_fail(void)
Fail the currently running test.
This is the function called from failed assertions and the like. You probably don’t need to call it yourself.
-
void ztest_test_pass(void)
Pass the currently running test.
Normally a test passes just by returning without an assertion failure. However, if the success case for your test involves a fatal fault, you can call this function from k_sys_fatal_error_handler to indicate that the test passed before aborting the thread.
-
void ztest_test_skip(void)
Skip the current test.
-
void ztest_skip_failed_assumption(void)
-
void ztest_simple_1cpu_before(void *data)
A ‘before’ function to use in test suites that just need to start 1cpu.
Ignores data, and calls z_test_1cpu_start()
- Parameters:
data – The test suite’s data
-
void ztest_simple_1cpu_after(void *data)
An ‘after’ function to use in test suites that just need to stop 1cpu.
Ignores data, and calls z_test_1cpu_stop()
- Parameters:
data – The test suite’s data
Variables
-
struct k_mem_partition ztest_mem_partition
-
struct ztest_expected_result_entry
- #include <ztest_test.h>
A single expectation entry allowing tests to fail/skip and be considered passing.
Public Members
-
const char *test_suite_name
The test suite’s name for the expectation.
-
const char *test_name
The test’s name for the expectation.
-
enum ztest_expected_result expected_result
The expectation.
-
struct ztest_unit_test
- #include <ztest_test.h>
Public Members
-
struct ztest_unit_test_stats *const stats
Stats.
-
struct ztest_suite_stats
- #include <ztest_test.h>
Stats about a ztest suite.
-
struct ztest_unit_test_stats
- #include <ztest_test.h>
Public Members
-
uint32_t run_count
The number of times that the test ran.
-
uint32_t skip_count
The number of times that the test was skipped.
-
uint32_t fail_count
The number of times that the test failed.
-
uint32_t pass_count
The number of times that the test passed.
-
uint32_t duration_worst_ms
The longest duration of the test across multiple times.
-
struct ztest_suite_node
- #include <ztest_test.h>
A single node of test suite.
Each node should be added to a single linker section which will allow ztest_run_test_suites() to iterate over the various nodes.
Public Members
-
const char *const name
The name of the test suite.
-
const ztest_suite_setup_t setup
Setup function.
-
const ztest_suite_before_t before
Before function.
-
const ztest_suite_after_t after
After function.
-
const ztest_suite_teardown_t teardown
Teardown function.
-
const ztest_suite_predicate_t predicate
Optional predicate filter.
-
struct ztest_suite_stats *const stats
Stats.
-
struct ztest_test_rule
-
struct ztest_arch_api
- #include <ztest_test.h>
Structure for architecture specific APIs.
Assertions
These macros will instantly fail the test if the related assertion fails.
When an assertion fails, it will print the current file, line and function,
alongside a reason for the failure and an optional message. If the config
CONFIG_ZTEST_ASSERT_VERBOSE
is 0, the assertions will only print the
file and line numbers, reducing the binary size of the test.
Example output for a failed macro from zassert_equal(buf->ref, 2, "Invalid refcount"):
Assertion failed at main.c:62: test_get_single_buffer: Invalid refcount (buf->ref not equal to 2)
Aborted at unit test function
- group ztest_assert
This module provides assertions when using Ztest.
Defines
-
zassert(cond, default_msg, ...)
-
zassume(cond, default_msg, ...)
-
zexpect(cond, default_msg, ...)
-
zassert_unreachable(...)
Assert that this function call won’t be reached.
- Parameters:
... – Optional message and variables to print if the assertion fails
-
zassert_true(cond, ...)
Assert that cond is true.
- Parameters:
cond – Condition to check
... – Optional message and variables to print if the assertion fails
-
zassert_false(cond, ...)
Assert that cond is false.
- Parameters:
cond – Condition to check
... – Optional message and variables to print if the assertion fails
-
zassert_ok(cond, ...)
Assert that cond is 0 (success)
- Parameters:
cond – Condition to check
... – Optional message and variables to print if the assertion fails
-
zassert_not_ok(cond, ...)
Assert that cond is not 0 (failure)
- Parameters:
cond – Condition to check
... – Optional message and variables to print if the assertion fails
-
zassert_is_null(ptr, ...)
Assert that ptr is NULL.
- Parameters:
ptr – Pointer to compare
... – Optional message and variables to print if the assertion fails
-
zassert_not_null(ptr, ...)
Assert that ptr is not NULL.
- Parameters:
ptr – Pointer to compare
... – Optional message and variables to print if the assertion fails
-
zassert_equal(a, b, ...)
Assert that a equals b.
a and b won’t be converted and will be compared directly.
- Parameters:
a – Value to compare
b – Value to compare
... – Optional message and variables to print if the assertion fails
-
zassert_not_equal(a, b, ...)
Assert that a does not equal b.
a and b won’t be converted and will be compared directly.
- Parameters:
a – Value to compare
b – Value to compare
... – Optional message and variables to print if the assertion fails
-
zassert_equal_ptr(a, b, ...)
Assert that a equals b.
a and b will be converted to void * before comparing.
- Parameters:
a – Value to compare
b – Value to compare
... – Optional message and variables to print if the assertion fails
-
zassert_within(a, b, d, ...)
Assert that a is within b with delta d.
- Parameters:
a – Value to compare
b – Value to compare
d – Delta
... – Optional message and variables to print if the assertion fails
-
zassert_between_inclusive(a, l, u, ...)
Assert that a is greater than or equal to l and less than or equal to u.
- Parameters:
a – Value to compare
l – Lower limit
u – Upper limit
... – Optional message and variables to print if the assertion fails
-
zassert_mem_equal(...)
Assert that 2 memory buffers have the same contents.
This macro calls the final memory comparison assertion macro. Using double expansion allows providing some arguments by macros that would expand to more than one values (ANSI-C99 defines that all the macro arguments have to be expanded before macro call).
- Parameters:
... – Arguments, see zassert_mem_equal__ for real arguments accepted.
-
zassert_mem_equal__(buf, exp, size, ...)
Internal assert that 2 memory buffers have the same contents.
Note
This is internal macro, to be used as a second expansion. See zassert_mem_equal.
- Parameters:
buf – Buffer to compare
exp – Buffer with expected contents
size – Size of buffers
... – Optional message and variables to print if the assertion fails
-
zassert_str_equal(s1, s2, ...)
Assert that 2 strings have the same contents.
- Parameters:
s1 – The first string
s2 – The second string
... – Optional message and variables to print if the expectation fails
Expectations
These macros will continue test execution if the related expectation fails and subsequently fail the
test at the end of its execution. When an expectation fails, it will print the current file, line,
and function, alongside a reason for the failure and an optional message but continue executing the
test. If the config CONFIG_ZTEST_ASSERT_VERBOSE
is 0, the expectations will only print the
file and line numbers, reducing the binary size of the test.
For example, if the following expectations fail:
zexpect_equal(buf->ref, 2, "Invalid refcount");
zexpect_equal(buf->ref, 1337, "Invalid refcount");
The output will look something like:
START - test_get_single_buffer
Expectation failed at main.c:62: test_get_single_buffer: Invalid refcount (buf->ref not equal to 2)
Expectation failed at main.c:63: test_get_single_buffer: Invalid refcount (buf->ref not equal to 1337)
FAIL - test_get_single_buffer in 0.0 seconds
- group ztest_expect
This module provides expectations when using Ztest.
Defines
-
zexpect_true(cond, ...)
Expect that cond is true, otherwise mark test as failed but continue its execution.
- Parameters:
cond – Condition to check
... – Optional message and variables to print if the expectation fails
-
zexpect_false(cond, ...)
Expect that cond is false, otherwise mark test as failed but continue its execution.
- Parameters:
cond – Condition to check
... – Optional message and variables to print if the expectation fails
-
zexpect_ok(cond, ...)
Expect that cond is 0 (success), otherwise mark test as failed but continue its execution.
- Parameters:
cond – Condition to check
... – Optional message and variables to print if the expectation fails
-
zexpect_not_ok(cond, ...)
Expect that cond is not 0 (failure), otherwise mark test as failed but continue its execution.
- Parameters:
cond – Condition to check
... – Optional message and variables to print if the expectation fails
-
zexpect_is_null(ptr, ...)
Expect that ptr is NULL, otherwise mark test as failed but continue its execution.
- Parameters:
ptr – Pointer to compare
... – Optional message and variables to print if the expectation fails
-
zexpect_not_null(ptr, ...)
Expect that ptr is not NULL, otherwise mark test as failed but continue its execution.
- Parameters:
ptr – Pointer to compare
... – Optional message and variables to print if the expectation fails
-
zexpect_equal(a, b, ...)
Expect that a equals b, otherwise mark test as failed but continue its execution.
- Parameters:
a – Value to compare
b – Value to compare
... – Optional message and variables to print if the expectation fails
-
zexpect_not_equal(a, b, ...)
Expect that a does not equal b, otherwise mark test as failed but continue its execution.
a and b won’t be converted and will be compared directly.
- Parameters:
a – Value to compare
b – Value to compare
... – Optional message and variables to print if the expectation fails
-
zexpect_equal_ptr(a, b, ...)
Expect that a equals b, otherwise mark test as failed but continue its execution.
a and b will be converted to void * before comparing.
- Parameters:
a – Value to compare
b – Value to compare
... – Optional message and variables to print if the expectation fails
-
zexpect_within(a, b, delta, ...)
Expect that a is within delta of b, otherwise mark test as failed but continue its execution.
- Parameters:
a – Value to compare
b – Value to compare
delta – Difference between a and b
... – Optional message and variables to print if the expectation fails
-
zexpect_between_inclusive(a, lower, upper, ...)
Expect that a is greater than or equal to lower and less than or equal to upper, otherwise mark test as failed but continue its execution.
- Parameters:
a – Value to compare
lower – Lower limit
upper – Upper limit
... – Optional message and variables to print if the expectation fails
-
zexpect_mem_equal(buf, exp, size, ...)
Expect that 2 memory buffers have the same contents, otherwise mark test as failed but continue its execution.
- Parameters:
buf – Buffer to compare
exp – Buffer with expected contents
size – Size of buffers
... – Optional message and variables to print if the expectation fails
-
zexpect_str_equal(s1, s2, ...)
Expect that 2 strings have the same contents, otherwise mark test as failed but continue its execution.
- Parameters:
s1 – The first string
s2 – The second string
... – Optional message and variables to print if the expectation fails
Assumptions
These macros will instantly skip the test or suite if the related assumption fails.
When an assumption fails, it will print the current file, line, and function,
alongside a reason for the failure and an optional message. If the config
CONFIG_ZTEST_ASSERT_VERBOSE
is 0, the assumptions will only print the
file and line numbers, reducing the binary size of the test.
Example output for a failed macro from zassume_equal(buf->ref, 2, "Invalid refcount"):
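As a sketch, an assumption can gate a test on a precondition so that it is skipped rather than failed when the precondition does not hold; the suite, test, and condition here are illustrative:

```c
#include <zephyr/ztest.h>

ZTEST_SUITE(driver_tests, NULL, NULL, NULL, NULL, NULL);

ZTEST(driver_tests, test_needs_smp)
{
	/* Skip (rather than fail) this test on platforms where the
	 * precondition does not hold.
	 */
	zassume_true(CONFIG_MP_MAX_NUM_CPUS > 1, "needs more than one CPU");

	/* ... test body runs only when the assumption holds ... */
}
```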
- group ztest_assume
This module provides assumptions when using Ztest.
Defines
-
zassume_true(cond, ...)
Assume that cond is true.
If the assumption fails, the test will be marked as “skipped”.
- Parameters:
cond – Condition to check
... – Optional message and variables to print if the assumption fails
-
zassume_false(cond, ...)
Assume that cond is false.
If the assumption fails, the test will be marked as “skipped”.
- Parameters:
cond – Condition to check
... – Optional message and variables to print if the assumption fails
-
zassume_ok(cond, ...)
Assume that cond is 0 (success).
If the assumption fails, the test will be marked as “skipped”.
- Parameters:
cond – Condition to check
... – Optional message and variables to print if the assumption fails
-
zassume_not_ok(cond, ...)
Assume that cond is not 0 (failure).
If the assumption fails, the test will be marked as “skipped”.
- Parameters:
cond – Condition to check
... – Optional message and variables to print if the assumption fails
-
zassume_is_null(ptr, ...)
Assume that ptr is NULL.
If the assumption fails, the test will be marked as “skipped”.
- Parameters:
ptr – Pointer to compare
... – Optional message and variables to print if the assumption fails
-
zassume_not_null(ptr, ...)
Assume that ptr is not NULL.
If the assumption fails, the test will be marked as “skipped”.
- Parameters:
ptr – Pointer to compare
... – Optional message and variables to print if the assumption fails
-
zassume_equal(a, b, ...)
Assume that a equals b.
a and b won’t be converted and will be compared directly. If the assumption fails, the test will be marked as “skipped”.
- Parameters:
a – Value to compare
b – Value to compare
... – Optional message and variables to print if the assumption fails
-
zassume_not_equal(a, b, ...)
Assume that a does not equal b.
a and b won’t be converted and will be compared directly. If the assumption fails, the test will be marked as “skipped”.
- Parameters:
a – Value to compare
b – Value to compare
... – Optional message and variables to print if the assumption fails
-
zassume_equal_ptr(a, b, ...)
Assume that a equals b.
a and b will be converted to void * before comparing. If the assumption fails, the test will be marked as “skipped”.
- Parameters:
a – Value to compare
b – Value to compare
... – Optional message and variables to print if the assumption fails
-
zassume_within(a, b, d, ...)
Assume that a is within d of b.
If the assumption fails, the test will be marked as “skipped”.
- Parameters:
a – Value to compare
b – Value to compare
d – Delta
... – Optional message and variables to print if the assumption fails
-
zassume_between_inclusive(a, l, u, ...)
Assume that a is greater than or equal to l and less than or equal to u.
If the assumption fails, the test will be marked as “skipped”.
- Parameters:
a – Value to compare
l – Lower limit
u – Upper limit
... – Optional message and variables to print if the assumption fails
-
zassume_mem_equal(...)
Assume that 2 memory buffers have the same contents.
This macro calls the final memory comparison assumption macro. Using double expansion allows providing some arguments by macros that would expand to more than one value (C99 requires that all macro arguments be expanded before the macro call).
- Parameters:
... – Arguments, see zassume_mem_equal__ for real arguments accepted.
-
zassume_mem_equal__(buf, exp, size, ...)
Internal assume that 2 memory buffers have the same contents.
If the assumption fails, the test will be marked as “skipped”.
Note
This is an internal macro, to be used as a second expansion. See zassume_mem_equal.
- Parameters:
buf – Buffer to compare
exp – Buffer with expected contents
size – Size of buffers
... – Optional message and variables to print if the assumption fails
-
zassume_str_equal(s1, s2, ...)
Assume that 2 strings have the same contents.
- Parameters:
s1 – The first string
s2 – The second string
... – Optional message and variables to print if the assumption fails
Ztress
- group ztest_ztress
This module provides test stress when using Ztest.
Defines
-
ZTRESS_TIMER(handler, user_data, exec_cnt, init_timeout)
Descriptor of a k_timer handler execution context.
The handler is executed in the k_timer handler context which typically means interrupt context. This context will preempt any other used in the set.
Note
There can be at most one k_timer context in the set, and it must be the first argument of ZTRESS_EXECUTE.
- Parameters:
handler – User handler of type ztress_handler.
user_data – User data passed to the handler.
exec_cnt – Number of handler executions to complete the test. If 0 then this is not included in completion criteria.
init_timeout – Initial backoff time base (given in k_timeout_t). It is adjusted during the test to optimize CPU load. The actual timeout used for the timer is randomized.
-
ZTRESS_THREAD(handler, user_data, exec_cnt, preempt_cnt, init_timeout)
Descriptor of a thread execution context.
The handler is executed in the thread context. The priority of the thread is determined based on the order in which contexts are listed in ZTRESS_EXECUTE.
Note
The thread sleeps for a random amount of time. Additionally, the thread busy-waits for a random length of time to further increase randomization in the test.
- Parameters:
handler – User handler of type ztress_handler.
user_data – User data passed to the handler.
exec_cnt – Number of handler executions to complete the test. If 0 then this is not included in completion criteria.
preempt_cnt – Number of preemptions of that context to complete the test. If 0 then this is not included in completion criteria.
init_timeout – Initial backoff time base (given in k_timeout_t). It is adjusted during the test to optimize CPU load. The actual timeout used for sleeping is randomized.
-
ZTRESS_CONTEXT_INITIALIZER(_handler, _user_data, _exec_cnt, _preempt_cnt, _t)
Initialize context structure.
For argument types see ztress_context_data. For more details see ZTRESS_THREAD.
- Parameters:
_handler – Handler.
_user_data – User data passed to the handler.
_exec_cnt – Execution count limit.
_preempt_cnt – Preemption count limit.
_t – Initial timeout.
-
ZTRESS_EXECUTE(...)
Setup and run stress test.
It initialises all contexts and calls ztress_execute.
- Parameters:
... – List of contexts. Contexts are configured using ZTRESS_TIMER and ZTRESS_THREAD macros. ZTRESS_TIMER must be the first argument if used. Each thread context has an assigned priority. The priority is assigned in a descending order (first listed thread context has the highest priority). The maximum number of supported thread contexts, including the timer context, is configurable in Kconfig (ZTRESS_MAX_THREADS).
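Putting the pieces together, a stress test with one timer context and one thread context might be sketched as follows; the handler names and completion counts are illustrative:

```c
#include <zephyr/ztest.h>
#include <zephyr/ztress.h>

static bool timer_handler(void *user_data, uint32_t cnt, bool last, int prio)
{
	/* Work executed in the k_timer (interrupt) context. */
	return true; /* keep running until completion criteria are met */
}

static bool thread_handler(void *user_data, uint32_t cnt, bool last, int prio)
{
	/* Work executed in a preemptible thread context. */
	return true;
}

ZTEST(ztress_suite, test_stress)
{
	/* The timer context must come first; thread contexts follow in
	 * descending priority order. Completion criteria: 1000 timer
	 * executions, and 1000 thread executions with 100 preemptions.
	 */
	ZTRESS_EXECUTE(ZTRESS_TIMER(timer_handler, NULL, 1000, Z_TIMEOUT_TICKS(20)),
		       ZTRESS_THREAD(thread_handler, NULL, 1000, 100, Z_TIMEOUT_TICKS(20)));
}
```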
Typedefs
-
typedef bool (*ztress_handler)(void *user_data, uint32_t cnt, bool last, int prio)
User handler called in one of the configured contexts.
- Param user_data:
User data provided in the context descriptor.
- Param cnt:
Current execution counter. Counted from 0.
- Param last:
Flag set to true indicates that this is the last execution, because completion criteria are met, the test timed out, or it was aborted.
- Param prio:
Context priority counting from 0 which indicates the highest priority.
- Retval true:
continue test.
- Retval false:
stop executing the current context.
Functions
-
int ztress_execute(struct ztress_context_data *timer_data, struct ztress_context_data *thread_data, size_t cnt)
Execute contexts.
The test runs until all completion requirements are met, until the test times out (use ztress_set_timeout to configure the timeout), or until the test is aborted (ztress_abort).
On test completion, a report is printed (ztress_report is called internally).
- Parameters:
timer_data – Timer context. NULL if timer context is not used.
thread_data – List of thread contexts descriptors in priority descending order.
cnt – Number of thread contexts.
- Return values:
-EINVAL – If configuration is invalid.
0 – If the test is successfully performed.
-
void ztress_abort(void)
Abort ongoing stress test.
-
void ztress_set_timeout(k_timeout_t t)
Set test timeout.
The test is terminated after the timeout, disregarding completion criteria. The setting persists between executions.
- Parameters:
t – Timeout.
-
void ztress_report(void)
Print last test report.
Report contains number of executions and preemptions for each context, initial and adjusted timeouts and CPU load during the test.
-
int ztress_exec_count(uint32_t id)
Get number of executions of a given context in the last test.
- Parameters:
id – Context id. 0 means the highest priority.
- Returns:
Number of executions.
-
int ztress_preempt_count(uint32_t id)
Get number of preemptions of a given context in the last test.
- Parameters:
id – Context id. 0 means the highest priority.
- Returns:
Number of preemptions.
-
uint32_t ztress_optimized_ticks(uint32_t id)
Get optimized timeout base of a given context in the last test.
The optimized value can be used to update the initial value. It will improve the test, since the optimal CPU load will be reached immediately.
- Parameters:
id – Context id. 0 means the highest priority.
- Returns:
Optimized timeout base.
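After a run completes, the per-context statistics described above can be queried; a minimal sketch (the logging helper is illustrative):

```c
#include <zephyr/kernel.h>
#include <zephyr/ztress.h>

static void log_stress_stats(void)
{
	/* Context id 0 is the highest-priority context. */
	ztress_report();

	int execs = ztress_exec_count(0);
	int preempts = ztress_preempt_count(0);
	uint32_t ticks = ztress_optimized_ticks(0);

	printk("execs=%d preempts=%d optimized ticks=%u\n",
	       execs, preempts, ticks);
}
```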
-
struct ztress_context_data
- #include <ztress.h>
Mocking via FFF
Zephyr has integrated with FFF for mocking. See FFF for documentation. To use it, include the relevant header:
#include <zephyr/fff.h>
Zephyr provides several FFF-based fake drivers which can be used as either stubs or mocks. Fake driver instances are configured via Devicetree and the Configuration System (Kconfig); see the corresponding devicetree bindings for more information.
Zephyr also has defined extensions to FFF for simplified declarations of fake functions. See FFF Extensions.
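As a minimal sketch, a fake for a hypothetical dependency int sensor_read(int chan) can be declared with FFF and exercised in a Ztest test like this (suite and function names are illustrative):

```c
#include <zephyr/fff.h>
#include <zephyr/ztest.h>

DEFINE_FFF_GLOBALS;

/* Declare a fake for a hypothetical dependency of the code under test. */
FAKE_VALUE_FUNC(int, sensor_read, int);

ZTEST_SUITE(fff_tests, NULL, NULL, NULL, NULL, NULL);

ZTEST(fff_tests, test_sensor_read_fake)
{
	sensor_read_fake.return_val = 42;

	zassert_equal(sensor_read(0), 42, NULL);

	/* FFF records the call count and arguments automatically. */
	zassert_equal(sensor_read_fake.call_count, 1, NULL);
	zassert_equal(sensor_read_fake.arg0_val, 0, NULL);
}
```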
Customizing Test Output
Customization is enabled by setting CONFIG_ZTEST_TC_UTIL_USER_OVERRIDE
to “y”
and adding a file tc_util_user_override.h
with your overrides.
Add the line zephyr_include_directories(my_folder)
to
your project’s CMakeLists.txt
to let Zephyr find your header file during builds.
See the file subsys/testsuite/include/zephyr/tc_util.h to see which macros and/or defines can be overridden. These will be surrounded by blocks such as:
#ifndef SOMETHING
#define SOMETHING <default implementation>
#endif /* SOMETHING */
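For example, a tc_util_user_override.h could redefine one of the guarded macros; TC_PRINT and my_log() are used here purely for illustration, so consult tc_util.h for the actual set of overridable names:

```c
/* tc_util_user_override.h -- sketch of a user override.
 * my_log() is a hypothetical application logging function.
 */
#ifndef TC_PRINT
#define TC_PRINT(fmt, ...) my_log(fmt, ##__VA_ARGS__)
#endif /* TC_PRINT */
```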
Shuffling Test Sequence
By default the tests are sorted and run in alphanumeric order. Test cases may
be dependent on this sequence. Enable CONFIG_ZTEST_SHUFFLE
to
randomize the order. The output from the test will display the seed for failed
tests. For native simulator builds you can provide the seed as an argument to
twister with --seed.
The shuffling behavior is configured statically with:
CONFIG_ZTEST_SHUFFLE_SUITE_REPEAT_COUNT
- Number of iterations the test suite will run.
CONFIG_ZTEST_SHUFFLE_TEST_REPEAT_COUNT
- Number of iterations the test will run.
Test Selection
For tests built for native simulator, use command line arguments to list
or select tests to run. The test argument expects a comma-separated list
of suite::test
. You can substitute the test name with an *
to run all
tests within a suite.
For example
$ zephyr.exe -list
$ zephyr.exe -test="fixture_tests::test_fixture_pointer,framework_tests::test_assert_mem_equal"
$ zephyr.exe -test="framework_tests::*"
FFF Extensions
- group fff_extensions
This module provides extensions to FFF for simplifying the configuration and usage of fakes.
Defines
-
RETURN_HANDLED_CONTEXT(FUNCNAME, CONTEXTTYPE, RESULTFIELD, CONTEXTPTRNAME, HANDLERBODY)
Wrap custom fake body to extract defined context struct.
Add extension macro for simplified creation of fake functions needing call-specific context data.
This macro enables a fake to be implemented as follows and requires no familiarity with the inner workings of FFF.
struct FUNCNAME##_custom_fake_context {
    struct instance * const instance;
    int result;
};

int FUNCNAME##_custom_fake(const struct instance **instance_out)
{
    RETURN_HANDLED_CONTEXT(
        FUNCNAME,
        struct FUNCNAME##_custom_fake_context,
        result,
        context,
        {
            if (context != NULL) {
                if (context->result == 0) {
                    if (instance_out != NULL) {
                        *instance_out = context->instance;
                    }
                }
                return context->result;
            }
            return FUNCNAME##_fake.return_val;
        }
    );
}
- Parameters:
FUNCNAME – Name of function being faked
CONTEXTTYPE – type of custom defined fake context struct
RESULTFIELD – name of field holding the return type & value
CONTEXTPTRNAME – expected name of pointer to custom defined fake context struct
HANDLERBODY – in-line custom fake handling logic