.. _ztest_benchmarking:

Benchmarking Framework
######################

The Zephyr benchmarking framework provides cycle-accurate performance
measurements. It automates data collection and statistical calculation,
offering a standardized way to evaluate execution metrics across the
Zephyr ecosystem.

Overview
********

This framework helps identify regressions and optimize critical paths by
providing:

* **Standardized API**: Macros that align with existing ``ztest``
  conventions.
* **Statistical Analysis**: Calculations of mean, standard deviation,
  standard error, and min/max values.
* **Overhead Compensation**: Inclusion of a control test to account for
  the benchmarking framework's own execution time.

Configuration
*************

To use the benchmarking framework, you must enable the following Kconfig
options:

.. code-block:: cfg

   CONFIG_ZTEST=y
   CONFIG_ZTEST_BENCHMARK=y

Usage
*****

A benchmark suite is defined similarly to a normal ztest test suite:
first define the suite with ``ZTEST_BENCHMARK_SUITE``, then add
individual benchmark tests to the suite using either the
``ZTEST_BENCHMARK`` or ``ZTEST_BENCHMARK_TIMED`` macro.

.. code-block:: c

   #include <zephyr/ztest.h>

   ZTEST_BENCHMARK_SUITE(suite_name, setup_fn, teardown_fn);

Standard Benchmarks
===================

Standard benchmarks are sample-based, meaning they execute the test a
specified number of times and measure the total cycles taken. This is
useful for benchmarking critical paths where you want to understand raw
CPU performance in terms of cycles. It provides insight into the
efficiency of the code and helps identify bottlenecks in terms of CPU
usage. This benchmarking method is suitable for code that has consistent
execution times and is not heavily influenced by external factors such
as I/O operations or context switches.

.. code-block:: c

   #include <zephyr/ztest.h>

   ZTEST_BENCHMARK_SUITE(suite_name, NULL, NULL);

   ZTEST_BENCHMARK(suite_name, test_name, num_samples)
   {
           /* Code to benchmark */
   }

A standard benchmark follows a flow where the setup function is called
before each sample, the test function is executed for the specified
number of samples, and the teardown function is called after each
sample.

Timed Benchmarks
================

Timed benchmarks, in contrast to standard benchmarks, measure the
execution time of the code instead of cycles. This is useful for
benchmarking code that may have variable execution times, or when you
want to measure the actual time taken rather than just the CPU cycles of
a critical path. It provides a broader view of performance
characteristics, especially for code that involves I/O operations,
context switches, or other factors that can influence execution time
beyond raw CPU performance.

.. code-block:: c

   ZTEST_BENCHMARK_TIMED(suite_name, test_name, ...)
   {
           /* Code to benchmark */
   }