
Conversation

@furszy (Member) commented Sep 4, 2025

Early Note:
Don't be scared by the PR's line-change count; most of it is just docs or part of the test framework API.

Context:
Currently, all tests run single-threaded and sequentially, and the library lacks the ability to specify which test (or group of tests) to run. This is not only inconvenient as more tests are added but also time-consuming during development, and it affects downstream projects that may want to parallelize the workload (such as the Bitcoin Core CI).

PR Goal:
Introduce a lightweight, extensible C89 unit test framework with no dynamic memory allocations, providing a structured way to register, execute, and report tests. The framework supports named command-line arguments in -key=value form, parallel test execution across multiple worker processes, granular test selection (selecting tests either by name or by module name), and time accumulation reports.

The introduced framework supports:

  • -help or -h: display list of available commands along with their descriptions.
  • -jobs=<num>: distribute tests across multiple worker processes (default: sequential if 0).
  • -target=<name> or -t=<name>: run only specific tests by name; can be repeated to select multiple tests.
  • -target=<module name> or -t=<module name>: run all tests within a specific module (can be provided multiple times).
  • -seed=<hex>: set a specific RNG seed (defaults to random if unspecified).
  • -iterations=<n>: specify the number of iterations.
  • -list_tests: display list of available tests and modules you can run.
  • -log=<0|1>: enable or disable test execution logging (default: 0 = disabled).

Beyond these features, the idea is also to make future development smoother: adding a new test requires only a single entry in the central test registry, and new command-line options can be introduced easily by extending the framework's parse_arg() function.
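
As a rough sketch of the idea (the struct layout and names below are hypothetical, not the PR's actual API), adding a new test would then come down to one extra line in a table that maps a printable name to its runner:

    /* Hypothetical sketch of a central test registry entry. */
    typedef struct {
        const char *name;   /* matched against -target=<name> */
        void (*run)(void);  /* function that executes the test */
    } test_entry;

    static void run_my_new_test(void) { /* test body goes here */ }

    static const test_entry TEST_REGISTRY[] = {
        /* ...existing tests... */
        { "my_new_test", run_my_new_test }  /* the only new line needed */
    };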

Compatibility Note:
The framework continues to accept the two positional arguments previously supported (iterations and seed), ensuring existing workflows remain intact.

Testing Notes:
Have fun. You can quickly try it through ./tests -j=<workers_num> for parallel execution or ./tests -t=<test_name> to run a specific test (call ./tests -print_tests to display all available tests and modules).

Extra Note:
I haven't checked the exhaustive tests file so far, but I will soon. For now, this only covers the tests declared in the tests binary.

Testing Results: (Current master branch vs PR in seconds)

  • Raspberry Pi 5: master ~100 s → PR ~38 s (5 jobs)
  • MacBook Pro M1: master ~30 s → PR ~10 s (6 jobs)

@furszy furszy changed the title Introduce (mini) unit test framework WIP: Introduce (mini) unit test framework Sep 4, 2025
@furszy furszy force-pushed the 2025_unit_test_framework branch 5 times, most recently from 16822f5 to 5fcb69c Compare September 5, 2025 01:05
@real-or-random real-or-random linked an issue Sep 5, 2025 that may be closed by this pull request
@real-or-random (Contributor):

Nice!

I looked at existing unit test frameworks in the past, but nothing seemed appropriate for us. There are not that many for C, and they were either overkill or too simple (just a handful of ifdefs), so they didn't add any functionality. I thought writing our own would be too annoying (or I was just lazy). But this framework is ~300 lines, which seems fine to me.

@real-or-random (Contributor):

see also #1211

@furszy furszy force-pushed the 2025_unit_test_framework branch 2 times, most recently from b1de641 to 7b184a1 Compare September 5, 2025 18:36
@furszy furszy force-pushed the 2025_unit_test_framework branch 3 times, most recently from 9389623 to f49e570 Compare September 6, 2025 14:37
@jonasnick (Contributor) left a comment

Thanks @furszy. I played around with this a little bit. It reduces the execution time on my machine from 26 seconds to 10 seconds (-jobs=16). Very nice! Some observations:

  • It would be helpful if a help text were output when tests is run with -h or --help.
  • I think showing all the tests that have passed is a bit overkill. I'm already assuming that the tests pass if they do not show up in the output. I only need to see tests that don't pass.
  • Maybe a future PR can autodetect the number of cores and set -jobs automatically by default?
  • There's an -iter command line flag, but the test output shows "test count". It would be better if we were consistent.

@jonasnick (Contributor):

  • The available "targets" seem to be a bit arbitrary. Maybe we can try to put them into groups (perhaps similar to the grouping in #1211)?
    ./tests -print_tests
    Available tests (58):
        --------------------------------------------------
        [  1] selftest_tests
        [  2] all_proper_context_tests
        [  3] all_static_context_tests
        [  4] deprecated_context_flags_test
        [  5] scratch_tests
        [  6] int128_tests
        [  7] ctz_tests
        [  8] modinv_tests
        [  9] inverse_tests
        [ 10] hsort_tests
        [ 11] sha256_known_output_tests
        [ 12] sha256_counter_tests
        [ 13] hmac_sha256_tests
        [ 14] rfc6979_hmac_sha256_tests
        [ 15] tagged_sha256_tests
        [ 16] scalar_tests
        [ 17] field_half
        [ 18] field_misc
        [ 19] field_convert
        [ 20] field_be32_overflow
        [ 21] fe_mul
        [ 22] sqr
        [ 23] sqrt
        [ 24] ge
        [ 25] gej
        [ 26] group_decompress
        [ 27] ecmult_pre_g
        [ 28] wnaf
        [ 29] point_times_order
        [ 30] ecmult_near_split_bound
        [ 31] ecmult_chain
        [ 32] ecmult_constants
        [ 33] ecmult_gen_blind
        [ 34] ecmult_const_tests
        [ 35] ecmult_multi_tests
        [ 36] ec_combine
        [ 37] endomorphism_tests
        [ 38] ec_pubkey_parse_test
        [ 39] eckey_edge_case_test
        [ 40] eckey_negate_test
        [ 41] ecdh_tests
        [ 42] ec_illegal_argument_tests
        [ 43] pubkey_comparison
        [ 44] pubkey_sort
        [ 45] random_pubkeys
        [ 46] ecdsa_der_parse
        [ 47] ecdsa_sign_verify
        [ 48] ecdsa_end_to_end
        [ 49] ecdsa_edge_cases
        [ 50] ecdsa_wycheproof
        [ 51] extrakeys_tests
        [ 52] schnorrsig_tests
        [ 53] musig_tests
        [ 54] ellswift_tests
        [ 55] secp256k1_memczero_test
        [ 56] secp256k1_is_zero_array_test
        [ 57] secp256k1_byteorder_tests
        [ 58] cmov_tests
    

@furszy furszy force-pushed the 2025_unit_test_framework branch 3 times, most recently from af43348 to 0b3c74f Compare September 8, 2025 19:29
@furszy (Member, Author) commented Sep 9, 2025

Thanks for the review, jonasnick!

Thanks @furszy. I played around with this a little bit. It reduces the execution time on my machine from 26 seconds to 10 seconds (-jobs=16). Very nice!

Awesome :). I think we can actually do even better; I'll make some changes.

  • It would be helpful if a helptext would be output when tests is run with -h or --help.

The help message was actually already there, but for -help only (with a single dash; I forgot to add it to the PR description).
I just pushed support for -h as well.

  • I think showing all the tests that have passed is a bit overkill. I'm already assuming that the tests pass if they do not show up in the output. I only need to see tests that don't pass.

Sure. I'll hide the logging behind a -log option (or -silent if we want the opposite behavior).
From my experience, logging sometimes helps spot regressions or areas that can be improved (like when a test suddenly takes longer than usual), and it's also reassuring to see tests run and pass when you have a large number of them.
I've also been thinking about adding per-test execution time logging; this might be a good opportunity to include it too.

  • Maybe a future PR can autodetect the number of cores and set -jobs automatically by default?

I'm not sure we want that. Sequential execution is usually the "standard" on any system because we don't know what else the user might be running. Picking a number of parallel tasks automatically (even a low number) could overload the CPU or even make the run slower than sequential execution if the system is already busy.

  • There's an -iter command line flag, but the test output shows "test count". It would be better if we were consistent.

Sure 👍🏼. That was carried over from the previous code; will improve it.

@furszy (Member, Author) commented Sep 9, 2025

The available "targets" seem to be a bit arbitrary. Maybe we can try to put them into groups (perhaps similar to the grouping in #1211)?

Yeah. Just reworked the framework to support registering and running groups of tests in a generic manner. This means we can now run specific tests and/or specific groups of tests via the -target/-t arg.

On top of that, made the framework reusable across binaries and improved the overall API (we can now easily connect the tests_exhaustive binary to it too — probably something for a follow-up), along with improvements to the consumers’ structure, enforcing consistency and a specific pattern that all consumers will follow.

Other than that, the -print_tests option was improved to display the available modules and tests.

A simple usage example:
./tests -print_tests → pick any module (like field) and run: ./tests -t=field
You can also combine this with -j=<num_workers>, plus specify multiple targets as needed.

@furszy furszy force-pushed the 2025_unit_test_framework branch from 0b3c74f to 4900aee Compare September 9, 2025 15:16
@furszy furszy force-pushed the 2025_unit_test_framework branch from 4900aee to aa5f041 Compare September 9, 2025 19:41
@furszy furszy changed the title WIP: Introduce (mini) unit test framework Introduce (mini) unit test framework Sep 9, 2025
@furszy furszy force-pushed the 2025_unit_test_framework branch from 0ddcd09 to fed7e38 Compare September 26, 2025 17:14
@furszy (Member, Author) commented Sep 26, 2025

Thanks for the feedback, hebasto and theStack.
Merged the -help/print_tests options parsing with the general parsing procedure to fix the missing double-dash case and removed the UTF-8 arrow.

@theStack (Contributor) left a comment

ACK fed7e38

Tested a bit more and verified that all existing tests have been adapted to the new framework. Didn't review the autotools and CMake changes in depth as I'm not too familiar with build systems. Left only a few non-blocking nits below. Also dumping some nice-to-have follow-up ideas that came to my mind (not sure the additional complexity is worth it, though):

  • show summary at the end (number of executed/passed/failed tests)
  • if a single test fails, still run the others (would need some deeper changes though, as right now we just crash on a failed condition)
  • allow running tests specified by the index numbers shown in --print_tests

nit: unit_test.c is still in UTF-8 according to the file utility.

@furszy furszy force-pushed the 2025_unit_test_framework branch from fed7e38 to 10eee5d Compare September 28, 2025 19:52
@furszy (Member, Author) left a comment

nit: unit_test.c is still in UTF-8 according to the file utility.

Fixed now. Thanks.

#define CASE(name) { #name, run_##name }
#define CASE1(name) { #name, name }
Contributor:

Agreed, having this in a follow-up PR is fine. (But also here is fine.)
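
For context, the two macros quoted above build registry entries out of a test's name: CASE(name) pairs the stringified name with a run_<name> wrapper, while CASE1(name) pairs it with a function of the same name. A hypothetical expansion (the surrounding struct and array names are illustrative, not necessarily what the PR uses):

    /* Illustrative only; struct and array names are made up. */
    struct test_case { const char *name; void (*fn)(void); };

    static void run_scalar_tests(void);  /* wrapped by CASE(scalar_tests) */
    static void cmov_tests(void);        /* used directly by CASE1(cmov_tests) */

    static const struct test_case cases[] = {
        CASE(scalar_tests), /* expands to { "scalar_tests", run_scalar_tests } */
        CASE1(cmov_tests)   /* expands to { "cmov_tests", cmov_tests } */
    };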

@furszy furszy force-pushed the 2025_unit_test_framework branch 3 times, most recently from eb30dfa to 9a0214b Compare September 29, 2025 18:54
@hebasto (Member) commented Sep 29, 2025

To fix Autotools:

diff --git a/Makefile.am b/Makefile.am
index d379c3f..dc79857 100644
--- a/Makefile.am
+++ b/Makefile.am
@@ -123,7 +123,7 @@ if USE_TESTS
 TESTS += noverify_tests
 noinst_PROGRAMS += noverify_tests
 noverify_tests_SOURCES = src/tests.c
-noverify_tests_CPPFLAGS = $(SECP_CONFIG_DEFINES) -DSUPPORTS_CONCURRENCY=$(SUPPORTS_CONCURRENCY)
+noverify_tests_CPPFLAGS = $(SECP_CONFIG_DEFINES) $(TEST_DEFINES)
 noverify_tests_LDADD = $(COMMON_LIB) $(PRECOMPUTED_LIB)
 noverify_tests_LDFLAGS = -static
 if !ENABLE_COVERAGE
diff --git a/configure.ac b/configure.ac
index 55eebdf..6028ee2 100644
--- a/configure.ac
+++ b/configure.ac
@@ -447,8 +447,8 @@ fi
 if test "x$enable_tests" != x"no"; then
   AC_CHECK_HEADERS([sys/types.h sys/wait.h unistd.h])
   AS_IF([test "x$ac_cv_header_sys_types_h" = xyes && test "x$ac_cv_header_sys_wait_h" = xyes &&
-         test "x$ac_cv_header_unistd_h" = xyes], [SUPPORTS_CONCURRENCY=1])
-  AC_SUBST(SUPPORTS_CONCURRENCY)
+         test "x$ac_cv_header_unistd_h" = xyes], [TEST_DEFINES="-DSUPPORTS_CONCURRENCY=1"], TEST_DEFINES="")
+  AC_SUBST(TEST_DEFINES)
 fi
 
 ###

@furszy furszy force-pushed the 2025_unit_test_framework branch from 9a0214b to 6aebdb4 Compare September 29, 2025 20:01
@furszy (Member, Author) commented Sep 29, 2025

Thanks @hebasto! Updated with autotools patch.

@hebasto (Member) left a comment

ACK modulo #1734 (comment) (sorry for overlooking that earlier).

This discussion can be continued in #1724.

This comment can be addressed in a follow-up PR.

@furszy furszy force-pushed the 2025_unit_test_framework branch from 6aebdb4 to b209c65 Compare September 30, 2025 13:45
@hebasto (Member) left a comment

ACK b209c65.

UPD. See #1734 (comment)

@sipa (Contributor) left a comment

Concept ACK

Two overall comments:

  • I saw some discussion about it, but I'm unconvinced about ignoring unknown arguments. I think the risk of silently ignoring a misspelled option is worse than the complexity of dealing with testing across versions.
  • A big feature I'm missing is integration into the build framework, such that e.g. test invocations get automatically split by module, and ctest --test_dir=build -j16 runs the tests binary once for each module. This would automatically give well-balanced parallelism that is automatically integrated into CI, even for projects that include it (Bitcoin Core). Is something like that planned?

src/unit_test.c Outdated
}
}

/* Now that we have all sub-processes, distribute workload in round-robin */
Contributor:

In commit "test: introduce (mini) unit test framework"

As long as there are no more than 256 tests (or some other metric to break them up by), you can use the make parallelism trick.

Have a single pipe which the master process writes to and all child workers read from. The master just writes single-byte test identifiers to the pipe, and the workers read from it. Every byte will be read by exactly one worker process, which executes the test and then reads another byte, until done.

This gives you a super cheap and portable way of actually distributing the jobs equitably (round robin risks giving all long-running tests to the same process for example).
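
A minimal sketch of the trick (assuming POSIX pipe/fork/wait; run_test, the test ids, and the counts below are placeholders, not the PR's actual code):

    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    static void run_test(unsigned char id) {
        /* placeholder: dispatch into the real test registry here */
        printf("worker %d running test %u\n", (int)getpid(), (unsigned)id);
    }

    int main(void) {
        int fds[2];
        unsigned char id;
        int num_tests = 42;    /* assumed test count, for illustration */
        int num_workers = 4;
        int i;

        if (pipe(fds) != 0) return 1;

        /* Queue every test id up front; the pipe buffer easily holds a
         * few hundred single-byte ids. */
        for (i = 0; i < num_tests; i++) {
            id = (unsigned char)i;
            if (write(fds[1], &id, 1) != 1) return 1;
        }
        close(fds[1]); /* workers see EOF once all ids are consumed */

        for (i = 0; i < num_workers; i++) {
            pid_t pid = fork();
            if (pid == 0) {
                /* Each 1-byte read is atomic, so every id goes to exactly
                 * one worker; an idle worker picks up the next pending test. */
                while (read(fds[0], &id, 1) == 1) run_test(id);
                _exit(0);
            }
        }
        close(fds[0]);
        while (wait(NULL) > 0) { /* reap all workers */ }
        return 0;
    }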

Member (Author):

Super neat! Thanks for the suggestion. It’s pretty amazing that the kernel can handle the pipe synchronization for us.

@real-or-random (Contributor) commented Oct 1, 2025

I see you've implemented it already. It's indeed neat. We currently have 89 tests [1], so there's some room for the foreseeable future (unless we start splitting the cases up too granularly).

[1] Probably a tribute to the wonderful C89, which has popped up here multiple times. 😄

Contributor:

We currently have 89 tests [1] ...

At the high risk of totally ruining the C joke, I'm only seeing 88 tests on the latest push (2f4546c):

$ ./build/bin/tests -l | head -n 5

Available tests (16 modules):
========================================
Module: general (5 tests)
        [  1] selftest_tests
$ ./build/bin/tests -l | tail -n 7
        [ 87] secp256k1_byteorder_tests
        [ 88] cmov_tests
----------------------------------------

Run specific module: ./tests -t=<module_name>
Run specific test: ./tests -t=<test_name>

Did you indeed have 89 earlier (or, my best guess, did you have the same output but assume the test list starts counting at zero)? Just checking that all tests are covered, though I'm pretty certain they are, based on manual verification from an earlier review round.

Contributor:

I probably just miscounted. I had counted manually using grep [-c] CASE( and grep [-c] CASE1(, trying to ignore false positives.

src/tests.c Outdated
free(STATIC_CTX);
secp256k1_context_destroy(CTX);

testrand_finish();
Contributor:

In commit "test: introduce (mini) unit test framework"

The testrand_finish() call here, when in parallel mode, isn't really useful, because the randomness that is actually used lives in the child processes and is never communicated back to the parent, where this runs.

The point of this "random run =" output line is verifying whether two test runs are actually identical (not just that the seed was the same, but that all output of the RNG was the same). I don't think it's all that important, so if it breaks with this, it might be worth just removing.

Contributor:

That's a great catch. Concept ACK on removing the line.

But this reminds me of the purpose of outputting the initial seed, namely reproducibility. With that in mind, the output should also contain the target parameter. I think seed, target, and jobs together should make it possible to reproduce the run exactly. Perhaps it's nicer to simply output an entire command line like To reproduce, run ./tests --jobs ... --target ... --seed ... or something like that.

@hebasto (Member) commented Sep 30, 2025

  • A big feature I'm missing is integration into the build framework, such that e.g. test invocations get automatically split by module, and ctest --test_dir=build -j16 runs the tests binary once for each module. This would automatically give well-balanced parallelism that is automatically integrated into CI, even for projects that include it (Bitcoin Core). Is something like that planned?

See #1734 (comment).

@furszy furszy force-pushed the 2025_unit_test_framework branch from b209c65 to eed1390 Compare September 30, 2025 20:01
furszy added 6 commits October 1, 2025 10:17
Lightweight unit testing framework, providing a structured way to define,
execute, and report tests. It includes a central test registry, a flexible
command-line argument parser of the form "--key=value" / "-k=value" /
"-key=value" (facilitating future framework extensions), ability to run
tests in parallel and accumulated test time logging reports.

So far the supported command-line args are:
- "--jobs=<num>" or "-j=<num>" to specify the number of parallel workers.
- "--seed=<hex>" to specify the RNG seed (random if not set).
- "--iterations=<num>" or "-i=<num>" to specify the number of iterations.

Compatibility Note:
To stay compatible with previous versions, the framework also supports
the two original positional arguments: the iterations count and the
RNG seed (in that order).

This not only provides a structural improvement but also
allows us to (1) specify individual tests to run and (2)
execute each of them concurrently.

Add a help message for the test suite, documenting available options,
defaults, and backward-compatible positional arguments.

Add support for specifying single tests or modules to run via the
"--target" or "-t" command-line option. Multiple targets can be
provided; only the specified tests or all tests in the specified
module(s) will run instead of the full suite.

Examples:
-t=<test name> runs a specific test.
-t=<module name> runs all tests within the specified module.

Both options can be provided multiple times.

Useful option to avoid opening the large tests.c file just to find
the test case you want to run.

When enabled (--log=1), shows test start, completion, and execution time.
@furszy furszy force-pushed the 2025_unit_test_framework branch from eed1390 to 2f4546c Compare October 1, 2025 14:20
@furszy (Member, Author) commented Oct 1, 2025

Updated per feedback. Thanks!

  1. Removed testrand_finish() call.
  2. Added a more descriptive error message for the current 255-test cap.

@theStack (Contributor) left a comment

re-ACK 2f4546c

Sequential vs. parallel test runs on my arm64 workstation with 12 cores, seeing a nice ~2.7x speedup (I guess much more is possible in the future if the longest-running tests are split up further):

$ uname -m -o -s -r -v
Linux 6.17.0-8-qcom-x1e #8-Ubuntu SMP PREEMPT_DYNAMIC Sun Aug 31 21:03:54 UTC 2025 aarch64 GNU/Linux
$ ./build/bin/tests --jobs=0
Tests running silently. Use '-log=1' to enable detailed logging
iterations = 16
jobs = 0. Sequential execution.
random seed = 09e628902f7e42bc5be35e96ebf5edee
Total execution time: 24.329 seconds
$ ./build/bin/tests --jobs=$(nproc)
Tests running silently. Use '-log=1' to enable detailed logging
iterations = 16
jobs = 12. Parallel execution.
random seed = 08636e4354f2ffa33260a6506827c00d
Total execution time: 9.002 seconds

(EDIT: noted after posting that I didn't compile in the recovery module, i.e. its 4 tests were not executed. But this doesn't change the numbers significantly.)


Successfully merging this pull request may close these issues.

  • Running a single test file
  • More parallel tests
9 participants