
Implemented TestNG-compatible XML formatter. #621

Closed
wants to merge 1 commit into from

Conversation

deadmoto
Contributor

@deadmoto deadmoto commented Nov 4, 2013

Once again :)

Hi there!
This class produces TestNG-compatible XML report, including date\time, duration and stacktraces if possible.
I used JUnitFormatter as an example with some modifications and simplifications.
All tests call formatter directly without using CucumberRuntime.
As for wrapping Exceptions with Mockito: the latter does not allow stubbing the getClass() method, so I had to use a real Exception object...

@ffbit
Contributor

ffbit commented Nov 4, 2013

It's a great starting point.

@ghost ghost assigned ffbit Nov 5, 2013
@brasmusson
Contributor

@deadmoto I would like to discuss not using the CucumberRuntime in the tests. Of course I am biased, since I was the one that extracted the support for testing formatters using the CucumberRuntime out to TestHelper. I would appreciate any feedback that helps make these methods more obvious and easy to use for other developers.

One drawback of not using the CucumberRuntime when testing the formatters is that they might stop working without you noticing it. That is not as far-fetched as it sounds: in v1.1.3 the PrettyFormatter stopped working and threw an IndexOutOfBoundsException (since the PrettyFormatter is part of the Gherkin library, the tests there cannot verify that the formatter will actually work when executed in Cucumber-JVM, which is an extra complication in that case).

Personally I really like the explicitness of a test like JUnitFormatterTest.should_format_passed_scenario, with the structure "Given this feature, and some information about which steps pass, fail, and so on, the formatter should produce this output". The formatter output is right there, not in another file which must be inspected to see the expected output (it is, however, a little annoying with all the '"' in the expected XML output).

WDYT?
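(For illustration, the inline-expectation style discussed above might look like the following minimal sketch. The formatter stand-in and all names here are invented for demonstration, not the actual JUnitFormatterTest code; a real test would run the feature through the runtime rather than call a toy method.)

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical stand-in illustrating the test style under discussion:
// the inputs and the expected XML sit side by side in the test itself.
public class InlineExpectationDemo {
    // Toy "formatter": emits one <step> element per step/status pair.
    static String format(Map<String, String> stepsToStatus) {
        StringBuilder sb = new StringBuilder("<testcase>");
        for (Map.Entry<String, String> e : stepsToStatus.entrySet()) {
            sb.append("<step name=\"").append(e.getKey())
              .append("\" status=\"").append(e.getValue()).append("\"/>");
        }
        return sb.append("</testcase>").toString();
    }

    public static void main(String[] args) {
        Map<String, String> steps = new LinkedHashMap<>();
        steps.put("first step", "passed");
        steps.put("second step", "passed");

        // Expected output right here in the test -- note the many escaped
        // '"' characters, the annoyance mentioned above.
        String expected = "<testcase>"
                + "<step name=\"first step\" status=\"passed\"/>"
                + "<step name=\"second step\" status=\"passed\"/>"
                + "</testcase>";
        if (!format(steps).equals(expected)) throw new AssertionError();
        System.out.println("ok");
    }
}
```

The trade-off is exactly as described: everything needed to judge the test is in one place, at the cost of escaping noise in the expected string.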

@deadmoto
Contributor Author

@brasmusson, sorry for such long delay.
The most significant reason for me not to use TestHelper was Exception wrapping.
Since Mockito uses CGLIB to generate proxy classes, and CGLIB generates new classes on the fly every time, the Exception class name cannot be determined.

As for me, there is no difference between invoking formatter directly or through CucumberRuntime.
And the first way gives me more control in the code.
At the same time I agree with you that it will be great to have some kind of integration test.
It may be a feature containing a mix of different cases.
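(The CGLIB point above can be illustrated without Mockito itself: any dynamically generated subclass, which is what a CGLIB proxy is, reports a synthetic name from getClass(), and getClass() is final so it cannot be stubbed. A stdlib-only Java analogue, using an anonymous subclass in place of a proxy:)

```java
// Demonstrates why a generated subclass breaks tests that assert on the
// exception class name: getClass() is final, so a proxy or subclass
// cannot pretend to be java.lang.Exception.
public class ExceptionClassNameDemo {
    public static void main(String[] args) {
        Exception real = new Exception("boom");
        // Anonymous subclass, analogous to a CGLIB-generated proxy class.
        Exception generated = new Exception("boom") {};

        System.out.println(real.getClass().getName());      // java.lang.Exception
        System.out.println(generated.getClass().getName()); // ExceptionClassNameDemo$1

        // A formatter that extracts the class name would emit the
        // synthetic name, not the expected one:
        if (!generated.getClass().getName().equals("java.lang.Exception")) {
            System.out.println("class name mismatch");
        }
    }
}
```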

@brasmusson
Contributor

@deadmoto The extra control from calling the formatter directly comes with a cost. The startOfScenarioLifeCycle() and endOfScenarioLifeCycle() methods are quite new to the formatter interface. If the TestNGFormatter had been implemented before that, and then updated to make use of these new interface methods, there would have been 10 test cases to update. That is a lot of test changes for a change intended to produce exactly the same output files. But I do see that the extraction of the exception class name that the TestNGFormatter does is not supported by the TestHelper methods.

Did you consider inlining the expected result data in the test cases? To me, reading assertXmlEqual("cucumber/runtime/formatter/TestNGFormatterTest_testScenarioWithUndefinedSteps.xml", tempFile); is not very helpful for understanding what the formatter does/should do. I tend to look at a test case with the perspective: is all the information right here, so that I can say "yes, that's the functionality we want"?

@deadmoto
Contributor Author

@brasmusson I used the JUnitFormatter as a base, but implemented the TestNGFormatter after the startOfScenarioLifeCycle() and endOfScenarioLifeCycle() methods were added. So these tests rely on current CucumberRuntime behavior.
If you don't like such an approach, I can rewrite the test cases to use TestHelper (except those which need the exact exception class name).
As for the explicitness of the test cases, I wrote them the way the JUnitFormatter ones were written.
Could you point me to more self-explanatory test cases as an example?

@brasmusson
Contributor

@deadmoto I used one of the very few changes in the formatter interface to make the point overly clear. It was a bit unnecessary of me; I am confident that you see the pros and cons just as clearly as I do. I am not saying that you absolutely need to change the direct calls to the formatter and use TestHelper instead; given the exception class name extraction in the TestNGFormatter, that would need some work in TestHelper to support it (and BTW, it is good that you used startOfScenarioLifeCycle() and endOfScenarioLifeCycle() now that they exist).

I do think you should have a look at for instance JUnitFormatterTest.should_format_passed_scenario, and see if you think it would be an improvement to have the expected formatter output right there in the test case, instead of in a separate file.

When you used the JUnitFormatter as a base, you got some handling of scenario outlines from the JUnitFormatter that is not really working well (I have just realized). An extract from the expected data for testScenarioOutlineWithExamples:

  <test-method name="scenario_2" ... />
  <test-method name="scenario_1" ... />

To me it looks like a simple solution for making the name attributes unique: start at the number of example rows and count down. First, it feels a bit unnatural to count down instead of up. Secondly, a scenario outline can have several sets of examples (several example tables), and in that case you would get "scenario_2", "scenario_1", "scenario_2", "scenario_1", so the name attributes would not be unique after all. Thirdly, the scenario outline header row can now contain argument tokens (#580), in which case the example scenario names would be unique by themselves. And finally (which may not be an issue for the TestNGFormatter), when executing feature files with the JUnit runner, the scenarioOutline() and examples() methods will not be called on the formatter before the example scenarios are executed, so at least for the JUnitFormatter, relying on calls to the examples() method is not a good idea. I have just started to work on a better solution for the JUnitFormatter.

@deadmoto
Contributor Author

deadmoto commented Dec 6, 2013

I was on vacation these two weeks, so I had to spend some time reading this thread, looking into the code and rethinking it.

I took a look at JUnitFormatterTest.should_format_passed_scenario and found it very convenient.
But I think that stepsToResult should contain Result objects instead of String ones.
By doing so we can achieve more control over the produced output, including durations and exceptions.
The runFeatureWithJUnitFormatter() method's parameters could then be reduced to the feature and the steps-to-results mapping.

As for scenario outlines and unique execution names, I think the only proper way to prevent scenario name duplication while keeping their natural order is to track the names of the scenarios being executed and their execution counts in some kind of Map<String, Integer>.

WDYT?
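(The counting idea above can be sketched as follows. The class and method names are hypothetical, not the PR's actual code: keep a per-name counter and suffix repeats, which stays unique even across several example tables, unlike the count-down scheme criticized earlier.)

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the Map<String, Integer> approach: count executions per
// scenario name and suffix repeats, so generated names stay unique
// and appear in their natural execution order.
public class ScenarioNameCounter {
    private final Map<String, Integer> executions = new HashMap<>();

    String uniqueName(String scenarioName) {
        int count = executions.merge(scenarioName, 1, Integer::sum);
        return count == 1 ? scenarioName : scenarioName + "_" + count;
    }

    public static void main(String[] args) {
        ScenarioNameCounter counter = new ScenarioNameCounter();
        // Three rows reusing the same outline name (e.g. two example tables):
        System.out.println(counter.uniqueName("scenario")); // scenario
        System.out.println(counter.uniqueName("scenario")); // scenario_2
        System.out.println(counter.uniqueName("scenario")); // scenario_3
        System.out.println(counter.uniqueName("other"));    // other
    }
}
```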

@brasmusson
Contributor

Yes, using Result objects in stepsToResult is an option, and it would be a way to specify the exception to be thrown to fail a step. But with the current setup, mocking the glue and injecting it into the Runtime, the Result objects in stepsToResult will not be the same Result objects that are passed to the formatter. The Runtime creates the Result objects based on the behavior of the (mocked) glue and the (mocked) step definitions returned from the glue (for instance, if executing the step definition does not result in an exception, then a passed Result object is created and sent to the formatter). The execution time is measured through a StopWatch instance (separate from the glue); currently the only StopWatch stub always returns the same execution time. There would definitely be more work needed to make all of this controllable from the Result objects in the stepsToResult map.

When using scenario outlines to test the Rerun formatter, the fact that the different examples had different results was handled by the resulting step strings, after expanding the parameterized steps, being different and therefore mappable to different results. I do not know how often it will actually be necessary to have the exact same step give different results in a single test case. Maybe a simple map from steps to results should be kept until the need arises in actual test cases.

@deadmoto
Contributor Author

deadmoto commented Jan 7, 2014

It's a shame, I forgot about this issue!
Some useful changes have been made since the 1.1.5 release, and we are going to switch to 1.1.6 as soon as it is released. So I returned to this issue again :)
Could you return to the point we started this discussion from and clarify what needs to be done to have this feature integrated into upstream?

@lock

lock bot commented Oct 25, 2018

This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.

@lock lock bot locked as resolved and limited conversation to collaborators Oct 25, 2018