Integrate std.log with std.testing #5738
Comments
This is called white box testing. I implemented this once in C (inspired by http://akkartik.name/post/tracing-tests). Here are my recommendations:
By default the test runner will only print logs with "warning" or higher severity. This can be configured via the std.testing API. See #5738 for future plans.
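For reference, at the time of writing this knob is exposed as `std.testing.log_level` (a `std.log.Level` variable defaulting to `.warn`). A minimal sketch of raising verbosity for a single test:

```zig
const std = @import("std");

test "see debug output from this test" {
    // Lower the threshold so debug-level messages are printed
    // by the default test runner for the rest of this test.
    std.testing.log_level = .debug;

    std.log.scoped(.example).debug("now visible under `zig test`", .{});
}
```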
I'm unclear on what you mean by number 2 on your list. Are you saying the string in the test and the string in the log needn't match exactly? That seems like a misfeature. I think number 3 is probably better done by code coverage tools than by looking at logs in tests.
I would propose not using a special testing scope, since the log statements to test for might also be useful outside of testing. I have several log.debug statements in my own projects that I could write tests for with this proposal, but which are also still useful for general debugging; with a special testing scope, I'd have to duplicate them. E.g. in
A subset of @PavelVozenilek's number 3 could perhaps be achieved by having the tracy integration perform some logging as well. It's not automatic, but if you already have the tracy statements added anyway, they could let you test whether these are called in the way you expect them to be.
Under this proposal, will it still be possible to use
It might be valuable to introduce a scope specific to testing – or a different function to log via – that gets ignored by expectations/assertions, so that it does not interfere with the test expectations but is still output when the test runs. At least one other language has had success with this: Go, where test logs get slurped up and accumulated with the test, and where it is trivial as a test author to add extra details that help with debugging or understanding what is happening in a test.
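To make that concrete, a sketch of what such a testing-only scope could look like in Zig — note that the `.testing` scope name, and the rule that expectation checks would skip it, are assumptions of this comment, not existing std behavior:

```zig
const std = @import("std");

// A scope reserved for test diagnostics: under this idea, log
// expectations would ignore it, so these lines help with debugging
// without tripping any assertions on logged output.
const tlog = std.log.scoped(.testing);

test "diagnostics alongside assertions" {
    tlog.debug("about to check the invariant", .{});
    try std.testing.expect(1 + 1 == 2);
}
```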
This is underspecified, and the memory costs can become nontrivial. Should there be a dedicated API for bounded and unbounded memory-backed files, or user-space/kernel buffers (pipes)?
Another subfeature here would be:
Failing on a logged error is probably the right default, but one also sometimes wants to write tests for error conditions! After all, the error handling paths contain most of the bugs. Today, I use the following workaround for that:

```zig
// Downgrade `err` to `warn` for tests.
// Zig fails any test that does `log.err`, but we want to test those code paths here.
const log = if (builtin.is_test)
    struct {
        const base = std.log.scoped(.clock);
        const err = warn;
        const warn = base.warn;
        const info = base.info;
        const debug = base.debug;
    }
else
    std.log.scoped(.clock);
```
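With that shim in place, a test can exercise an error path without the runner failing it. A minimal illustrative sketch:

```zig
test "unhappy path survives logging" {
    // In test builds, `log.err` above resolves to `base.warn`,
    // so this logged error does not fail the test.
    log.err("simulated failure: {s}", .{"disk full"});
}
```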
I just ran into this issue myself. The use case is indeed failure testing, i.e. I want to test the unhappy path to be sure that it's handled correctly. Thanks @matklad for the workaround, I may go that route for now. The alternatives seem to be: just don't log (not great), or take some kind of
Side note: this behavior of the test runner seems contrary to the documented meaning of the err log level. If it may be recoverable, then it should not imply automatic failure. The current behavior of the test runner also prevents any testing of code that performs such error recovery, because, despite the recovery, the fact that something invoked
Error recovery sounds very much like a component test (kill and restart a component) rather than a unit test, similar to what a panic test would do.
The current test runner, including server and client, is designed as a unit test runner. If you intend to change it, then you also have to take into account how the unhappy path (crashing), with user-expected parsing of the failure message/logs, should work inside test blocks.
Inspired by #5734 (comment)
The goal of the `std.log` API is to allow developers to leave log statements in finished code, without causing any performance issues or unwanted stderr/stdout output for API consumers. This harmonizes with testing! Here's an example:
Much like the test runner clears the testing allocator and checks for leaks between tests, it can also specify a custom log handler, which buffers the log statements during tests, and provides the API to make sure log messages did or did not occur.
It could provide an API for exact matches, substring matches, or checking for the absence of a log message.
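None of these checking functions exist in std today; purely as an illustration of the three shapes just listed, such a hypothetical API might read:

```zig
test "log expectations (hypothetical API)" {
    // `capturedLogs`, `expectContains`, `expectExact`, and `expectAbsent`
    // are all invented names for this sketch, as is `doWork`.
    var logs = std.testing.capturedLogs();
    defer logs.deinit();

    doWork(); // hypothetical code under test

    try logs.expectContains(.warn, "cache miss");       // substring match
    try logs.expectExact(.info, "finished in 3 steps"); // exact match
    try logs.expectAbsent(.err);                        // absence of any error log
}
```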
An open question would be how the "scope" parameter fits into this. It could be part of the API for checking, or maybe there would be a `testing` scope, which by default would never get printed and would only be activated for tests.