Add support for data driven tests #102

Open

csoltenborn opened this issue Feb 7, 2017 · 5 comments

@csoltenborn (Owner)

Data-driven tests appear to be Visual Studio's notion of Google Test's parameterized tests. We have to figure out:

  • Can instances of data-driven tests still be executed on selection, or is it only possible to run the base test with all data? (If the latter is true, we need to make support optional, which might be a good idea anyway.)
  • How do we attach the parameter values to the TestResult objects via the properties bag? What does a TestProperty object have to look like such that it is displayed properly in the test explorer?
@csoltenborn csoltenborn added this to the 0.10 milestone Feb 12, 2017
@csoltenborn csoltenborn removed this from the 0.10 milestone Apr 18, 2017
@frboyer (Contributor)

frboyer commented Jul 10, 2018

Using online examples and doing some tests myself, I see that there are at least two ways to present data-driven tests in Test Explorer: what I got with MSTest, and what I got with xUnit. See screenshot:
[Screenshot: datadriventests_xunit_vs_mstest, data-driven tests as shown by xUnit vs. MSTest]

  • In MSTest, the data-driven test is shown as a single test, so it is not possible to select only part of the data when running the test. The values used are not displayed in the results, only the "row number" (maybe a test adapter could put the values there instead of the row number; I have not tested this).
  • In xUnit, which is closer to what we are used to in Google Test, each tested value is shown separately, so we can select for which values to run the test, and the values are grouped together under the method name (giving the 5th hierarchy level I was hoping for); see the gtest sketch below for comparison.
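
For reference, here is a minimal sketch of the Google Test side being compared against, a value-parameterized test (the EvenTest/Evens names are illustrative). gtest registers one test per parameter value, which the adapter currently lists as individual entries:

#include <gtest/gtest.h>

// Illustrative value-parameterized test. gtest registers one test per value,
// e.g. Evens/EvenTest.IsEven/0, Evens/EvenTest.IsEven/1, Evens/EvenTest.IsEven/2.
class EvenTest : public ::testing::TestWithParam<int> {};

TEST_P(EvenTest, IsEven) {
    EXPECT_EQ(GetParam() % 2, 0);
}

INSTANTIATE_TEST_SUITE_P(Evens, EvenTest, ::testing::Values(0, 2, 4));

(INSTANTIATE_TEST_SUITE_P is the current macro name; gtest releases from around 2018 used INSTANTIATE_TEST_CASE_P.)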

@tapika

tapika commented Mar 6, 2020

Theoretically this could be solved by combining this pull request:

#274
Google test adapter, support for dynamically named tests.

with this pull request:

google/googletest#2253
Google test, mechanism to dynamically create new tests at run-time.

Everything new is well-forgotten old. ;-)

@csoltenborn (Owner, Author)

The idea behind this issue is slightly different: currently, parameterized gtest tests are "flattened" and registered with VS as "normal" tests. However, the VS test framework has its own notion of parameterized tests called data-driven tests (they are linked at the very top of the issue). This issue is only about mapping the gtest notion onto the VsTest notion.

I have never worked on this issue because, as @frboyer pointed out above, it would no longer be possible to run single tests (in contrast to the current approach). Maybe one should double-check whether this is still the case, because VS's test explorer has advanced quite a bit since 2018...

@tapika

tapika commented Mar 8, 2020

OK. This echoes what I'm currently trying to create on my own.

With the two changes above, you can assign a free-form name and source-code location to any test you write.

But concerning data-driven unit tests, I think the requirements go beyond normal Google Test usage. Google Test has mechanisms to verify one value against another, e.g. EXPECT_TRUE(true), but if you want to run that in a for loop, it might be tricky:

for (int i = 0; i < 10; i++)
{
    EXPECT_TRUE(i == i);
}

or even: 

for (int i = 0; i < 10; i++)
{
    switch (i)
    {
        case 0:
            EXPECT_TRUE(i == 0);
            break;
        ...
    }
}

And this gets out of hand once the input is no longer just i and a direct mapping from input to expected value is not possible, or is too complex.

I had a simpler idea: the application itself has a built-in logging mechanism, the test application controls the log level, log writing is performed on the first run, and log verification is performed on subsequent runs.

As a base logging facility I'm using spdlog, see https://github.com/gabime/spdlog.

I selected spdlog over other loggers because of its speed, and because it is itself covered by unit tests (which is important to me).

See also:
English: https://weekly-geekly.github.io/articles/313686/index.html
Russian: https://habr.com/ru/post/313686/

My idea is that the application has a built-in global field for log activation, for example:

int g_traceLevel = 0;

which is disabled by default.

The code is instrumented using a bit-field definition, for example:

typedef enum
{
    traceObjectMovement = 1,
    traceObjectConstruction = 2,
    traceObjectDestruction = 4,
    ...
} ETraceLevel;

and the code is instrumented like this:

using namespace spdlog;

if (g_traceLevel & traceObjectMovement)
{
    info("Moving object 1 to 1.2, 4.5, 5.6"); // example; would actually contain the relevant movement coordinates and object id
}

This in turn means that release/production code has no tracing/logging active.

The test application in turn would look something like this:

TEST(ObjectMovement, Test1)
{
    g_traceLevel = traceObjectMovement;
    bool recordExecutionOutput = !exists("verification.log");
    // activate either spdlog's logging or its verification functionality,
    // based on the existence of a previously recorded file

    normalApplicationFunctionality();
}

As for the verification functionality, I have made a sink similar to the basic file sink, except that instead of writing the log, it also reads the existing log file and compares against it.

https://github.com/gabime/spdlog/blob/v1.x/include/spdlog/sinks/basic_file_sink.h
https://github.com/gabime/spdlog/blob/v1.x/include/spdlog/sinks/basic_file_sink-inl.h
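
A minimal sketch of such a verifying sink, assuming spdlog v1.x's base_sink interface (the class name, the line comparison, and the exception are illustrative, not taken from an actual patch):

#include <fstream>
#include <mutex>
#include <stdexcept>
#include <string>

#include <spdlog/sinks/base_sink.h>

template<typename Mutex>
class verification_file_sink : public spdlog::sinks::base_sink<Mutex>
{
public:
    explicit verification_file_sink(const std::string& path) : expected_(path) {}

protected:
    void sink_it_(const spdlog::details::log_msg& msg) override
    {
        // Format the message exactly as a file sink would have written it.
        spdlog::memory_buf_t formatted;
        spdlog::sinks::base_sink<Mutex>::formatter_->format(msg, formatted);
        std::string actual = fmt::to_string(formatted);
        while (!actual.empty() && (actual.back() == '\n' || actual.back() == '\r'))
            actual.pop_back();

        // Compare against the next line of the previously recorded log.
        std::string expectedLine;
        std::getline(expected_, expectedLine);

        // A breakpoint on this throw halts exactly where execution diverges.
        if (actual != expectedLine)
            throw std::runtime_error("log mismatch: expected \"" + expectedLine +
                                     "\", got \"" + actual + "\"");
    }

    void flush_() override {}

private:
    std::ifstream expected_;
};

using verification_file_sink_mt = verification_file_sink<std::mutex>;

For the comparison to be stable across runs, the logger's pattern would need to exclude timestamps, e.g. logger->set_pattern("%v"); on the first run, a plain basic_file_sink would be attached instead to record verification.log.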

The first test application run would produce one or more logs; the logs would be stored in the version control system. In the CI environment, logs would no longer be recorded, but verification against the existing log files would be performed.

There are still more open questions, which I want to tackle later on: for example, how to deal with multi-threading, and what to log, so that the logging always contains only relevant information and can also be used for testing. Also, making sure that a log level stays more or less "atomic", meaning that you cannot add or remove log traces later on without good reason.

I guess one more thing needs to be mentioned: I'm no longer using any Google Test EXPECT_* macros. The exception is thrown in only one place, namely in verification_file_logger_sink, which can be treated as a test failure directly.

The good thing about this is that if you place a breakpoint where the exception is about to be thrown, you will halt at the very point where execution differs. With good luck, you can inspect the call stack and all local variables to identify what the problem is; with less luck, you will still be close to the problematic point.

If you're dealing with a relatively complex application, reaching the correct problematic point of execution can be very refreshing, especially if you're not familiar with the code, or even if you are familiar with it but cannot say what is wrong.

@tapika

tapika commented Mar 8, 2020

Logging binary data instead of textual

One more thing: sometimes you might want to gather a more complex log line with more relevant information, for example like this:

string traceLine;

if (g_traceLevel & traceObjectCoordinates)
{
    traceLine = fmt::format("{:.2f}, {:.2f}, {:.2f}", x, y, z);
}
...
if (g_traceLevel & traceObjectCoordinates)
{
    info("object: {} coordinates: {}", name, traceLine);
}

In theory, at some point it might make sense to switch to binary formatting instead of textual: log files would consume less disk space, and they would be faster to produce. But binary log files would require a separate binary log viewer, which in turn is more complex to support. Also, when checking version control history (e.g. git history), it's more difficult to inspect binary diffs than ASCII diffs.

I suspect it's possible to achieve this, based on the examples here:
fmtlib/fmt#1271

But then one needs to think more deeply about how the "collecting" of arguments would happen in the application, instead of building a string; see the code above.

So I suspect binary "collecting" is possible to support, but it needs a separate binary viewer and a mechanism to collect traces instead of strings.

Logging floating points, precision

My idea is that even if you log coordinates, you lose some precision (e.g. two decimal places after the point), but in most cases that is not relevant.

Logging binary data

If you trace binary data, logging the first 10 bytes is mostly sufficient to identify whether the binary data is the same or not. If that is insufficient, you can use one of the binary hashing algorithms and log only the hash instead of the binary data.

uint64_t hash = xxh::xxhash<64>(buf);

For more info, see:
https://aras-p.info/blog/2016/08/09/More-Hash-Function-Tests/
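
A rough sketch of such hash-based tracing, assuming the xxhash_cpp wrapper from the snippet above (traceBuffer and the flag check are illustrative):

#include <cstdint>
#include <vector>

#include <spdlog/spdlog.h>
#include <xxhash.hpp>  // xxhash_cpp, as in the snippet above

extern int g_traceLevel;  // the global trace flags introduced earlier

// Log a fixed-width 64-bit hash instead of the raw buffer contents.
void traceBuffer(const char* name, const std::vector<std::uint8_t>& buf)
{
    if (g_traceLevel != 0)
    {
        std::uint64_t hash = xxh::xxhash<64>(buf);
        spdlog::info("{}: size={} xxh64={:016x}", name, buf.size(), hash);
    }
}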

Logging floats with full precision consumes more disk space, which in most cases is not needed.
