
std.testing: fail and fatal formatted messaging #17497

Closed
wants to merge 4 commits into from

Conversation

pascaldekloe
Contributor

Two new functions in std.testing:

/// Fail the test with a formatted message but continue execution, unlike fatal.
/// The common pattern is as follows.
///
/// ```zig
///     if (got != want)
///         testing.fail("got {}, want {}", .{ got, want });
/// ```
pub fn fail(comptime fmt: []const u8, args: anytype) void {
/// A fatal test scenario must return immediately, unlike fail.
///
/// ```zig
///     const got = foo(x) catch |err|
///         return testing.fatal("foo {} got error {}", .{ x, err });
/// ```
pub fn fatal(comptime fmt: []const u8, args: anytype) error{TestFatal} {

The purpose is more readable test failures, each with a single line of context.
The fail format strings also have a documenting effect on their respective test scenarios.

The commit includes some assertion message normalization and a demonstration in net/net.zig (arbitrarily chosen).
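For context, a minimal sketch of what such a pair of functions might look like is below. This is not the PR's actual implementation; the output routine (`std.debug.print`) and the `FAIL:` message prefix are assumptions:

```zig
const std = @import("std");

/// Report a formatted failure but keep the test running.
/// Sketch only: the real PR would also record the failure so the
/// runner can mark the test as failed afterwards.
pub fn fail(comptime fmt: []const u8, args: anytype) void {
    std.debug.print("FAIL: " ++ fmt ++ "\n", args);
}

/// Report a formatted failure and hand back an error so the
/// caller can `return` immediately.
pub fn fatal(comptime fmt: []const u8, args: anytype) error{TestFatal} {
    fail(fmt, args);
    return error.TestFatal;
}
```

A caller would then write `return testing.fatal(...)` to abort, mirroring the doc comments above.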

Contributor

@rootbeer rootbeer left a comment


Generally looks fine to me (not that I'm qualified to pass that judgement). I've got some suggestions; feel free to ignore them if they don't resonate with you.

Contributor

@matu3ba matu3ba left a comment


Thanks for the PR. Unfortunately, I don't think adding a global for the failure count will work, due to eventual test parallelization.

Furthermore, and this is my own interpretation: Zig prefers local scope where possible, to keep code simple to reason about (see the Zig zen), so counting errors globally (unless necessary) does not fit; it feels more like an integration test. Also, you can get the same result by counting the errors within the unit test, so I do not see the point.

Regarding the formatted output, I do not have a strong opinion, other than that the change should apply to a user-provided error count within the unit-test block.

@pascaldekloe
Contributor Author

Given that Zig puts high expectations on its coders, we might be better off with a separate fn failAsync(comptime fmt: []const u8, args: anytype) !void. When reporting from other threads, we must check whether the originating test is still running. Instead of locking print, we could buffer strings in a concurrent queue and go from there. That would have the least impact, in terms of performance and errors, on tests without concurrency. 🤔

@pascaldekloe
Contributor Author

What are the options for unit test context “within the unit test block” @matu3ba? Right now testing.allocator bypasses the test scope with some behind-the-scenes magic. The options I see are either a test argument or a thread-local.
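The thread-local option mentioned here could look roughly like the following (a sketch, not code from the PR; the `fail_count` variable and the `reset`/`failed` helpers are invented for illustration):

```zig
const std = @import("std");

// One counter per test-running thread, so parallel tests do not race.
threadlocal var fail_count: usize = 0;

pub fn fail(comptime fmt: []const u8, args: anytype) void {
    fail_count += 1;
    std.debug.print("FAIL: " ++ fmt ++ "\n", args);
}

// A runner could reset the counter before each test and
// check it afterwards to decide pass/fail.
pub fn reset() void {
    fail_count = 0;
}

pub fn failed() bool {
    return fail_count > 0;
}
```

This keeps the counting invisible to the test body, at the cost of the runner having to know about the thread-local.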

@matu3ba
Contributor

matu3ba commented Oct 16, 2023

What are the options for unit test context “within the unit test block” @matu3ba?

Well, my idea was to have the user provide a counter for the number of failures, and the failure function would be

pub fn fail(fail_cnt: *usize, comptime fmt: []const u8, args: anytype)

The user would then write something like

test "example" {
    var fail_cnt: usize = 0;
    fail(&fail_cnt, "somefailure reason {d}", .{100});
    fail(&fail_cnt, "somefailure reason {d}", .{200});
    if (fail_cnt > 0) return error.FailedExample;
}

but I'm not sure whether that would work with mainTerminal stderr and the server/client setup.

The test-runner API not being observable from within the test is a problem I also ran into while implementing a potential panic-recovery test API in #15991.

@pascaldekloe
Contributor Author

test "example" {
    const t = testing.track();
    t.mustPanic();
    t.fail("somefailure reason {d}", .{100});
    t.fail("somefailure reason {d}", .{200});
    try t.pass();
}

Either way, the inline pattern adds an initializer and a finalizer line to every test. Even worse, you can't defer the finalizer error. Failing tests will pass just fine without the final try.

Tests are initialized before the test block runs, and they are judged afterwards. We need some sort of context struct (with the test Allocator, and more to come) to pass both borders.

test(t: *testing.T) "example" {
    t.mustPanic();
    t.fail("some reason {d}", .{100});
    t.fail("some reason {d}", .{200});
}

Personally I enjoy using packages as structs (with their globals as fields) in a singleton context. Testing benefits from a singleton guarantee if we want to support checks on exit codes, signals, or IPC in the future. We can pay the process overhead when compared to threading for reliability and simplicity I think.

@matu3ba
Contributor

matu3ba commented Oct 16, 2023

We can pay the process overhead when compared to threading for reliability and simplicity I think.

I would prefer the process solution, because it feels cleaner to let the user provide the context for what to do, like collecting messages and dumping them on panic, non-zero return code, etc.
However, until now, Andrew has not elaborated on how user-provided runtime context/functionality (like comparisons) should work, and/or whether it should be attachable to the unit-test runner. See also #11472.

@andrewrk
Member

No thanks.

Related: #5738

@andrewrk andrewrk closed this Mar 14, 2024