
added Test stats to execution context #2078

Closed
wants to merge 2 commits

Conversation

@pierreh commented Mar 22, 2019

#1485
#1692

I would like to propose this pull request. I built a framework on top of AVA that lets you execute individual steps within a test. However, I wanted more control, so that steps stop executing once a previous step has failed, and so that I can give debug feedback about which steps ran and which did not. To do this I had to hack Object.bind to get at the Test object; this patch would instead expose the necessary information so that tests can inspect the current state of testing.

@novemberborn (Member) commented

Hi @pierreh, thanks for the PR. It's an interesting approach; could you share more about your framework? I want to be careful about how much internal state we expose to the test implementations.

t.try() (#1692) is how we're looking to address use cases like yours. Would that work for you?

(I'm closing this issue for housekeeping purposes, but let's keep the conversation going. Either in this PR or in a new issue.)

@pierreh (Author) commented Mar 25, 2019

Hi @novemberborn, thanks for your reply. I think the t.try() solution can work for my use case. Is there any implementation for it yet?
What I have created uses ava-spec to create features and scenarios, and on top of that each scenario is built up from individual steps, in order to create a BDD-style spec file. The steps are implemented in a separate file and are reusable; they are a collection of given/when/then actions. So for my particular case I would need to run t.try() on each step, which means the implementation will need to allow me to execute it multiple times during a test. It also integrates supertest/superagent, because we use those to test our APIs.

This is what a spec file looks like:

feature("Log in ›", scenario => {
	scenario("a bureau service user", t => steps(t)
		.step(given.aBureauServiceUser)
		.step(when.theUserLogsIn)
		.step(expect.loginIsSuccessful)
		.step(when.theUserLogsOut)
		.step(expect.userIsLoggedOut));
});

This is what the results look like (with my patch):

✖ Log out › a bureau service user Unexpected response code (401): Username or password incorrect
    ℹ step: aBureauServiceUser
    ℹ step: theUserLogsIn
    ℹ failed: loginIsSuccessful
    ℹ skipped: theUserLogsOut
    ℹ skipped: userIsLoggedOut
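The skip-after-failure behaviour visible in that output can be sketched without any AVA internals. Here is a hypothetical `steps()` helper mirroring the chaining style of the spec file above; the implementation is a guess at the shape, not pierreh's actual framework:

```javascript
// Hypothetical chaining helper mirroring the spec file's steps(t).step(...)
// style. Each step runs only while no earlier step has failed; outcomes are
// recorded so a reporter could print the step/failed/skipped lines above.
function steps(t) {
  const outcomes = [];
  let failed = false;
  const chain = {
    step(fn) {
      if (failed) {
        outcomes.push(`skipped: ${fn.name}`);
      } else {
        try {
          fn(t);                       // run the given/when/expect action
          outcomes.push(`step: ${fn.name}`);
        } catch (err) {
          failed = true;               // later steps will be skipped
          outcomes.push(`failed: ${fn.name}`);
        }
      }
      return chain;                    // allow .step(...).step(...) chaining
    },
    outcomes
  };
  return chain;
}
```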

@pierreh (Author) commented Mar 25, 2019

The reason I like my solution is that you can query the results at any time during the test. The t.try() solution, by contrast, would have to spawn a nested test context, which seems somewhat illogical. The stats don't necessarily need to expose that much; perhaps just assertCount and assertError.
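As a sketch, a step runner consuming such stats could look like this. `assertCount`/`assertError` are the names proposed in this thread; `t.stats` is not a shipped AVA API, and the stub here only imitates the idea:

```javascript
// Sketch of a step runner consulting the proposed execution-context stats
// (assumption: `t.stats.assertError` holds the first assertion failure,
// per the names suggested in this PR -- not a shipped AVA API).
function runStep(t, fn, outcomes) {
  if (t.stats.assertError) {
    // An earlier step already failed an assertion: skip this one.
    outcomes.push(`skipped: ${fn.name}`);
    return;
  }
  try {
    fn(t);
    outcomes.push(`step: ${fn.name}`);
  } catch (err) {
    // Stand-in for AVA itself recording the failure on the context.
    t.stats.assertError = err;
    outcomes.push(`failed: ${fn.name}`);
  }
}
```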

@novemberborn (Member) commented

@pierreh there's an open PR; it's just taking a while to land: #1947

I like your use case, and I think t.try() is a good fit for it especially since it comes with its own DSL. I'd like to try that first before seeing if we need to expose other details within the main test implementation.
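For comparison, the same skip-after-failure flow can be expressed against a t.try()-style attempt API, assuming the attempt exposes `passed`, `commit()` and `discard()` as discussed in #1692. The context below is a hand-rolled stub so the sketch is self-contained; it is not AVA itself:

```javascript
// Stub of a test context with a t.try()-like method (assumption: the
// attempt result exposes `passed`, `commit()` and `discard()`, per the
// design discussed in #1692). A self-contained sketch, not AVA.
function makeContext() {
  const log = [];
  return {
    log: msg => log.push(msg),
    logged: log,
    async try(fn) {
      let passed = true;
      const tt = {
        is(actual, expected) {
          if (actual !== expected) passed = false;
        }
      };
      try {
        await fn(tt);
      } catch (err) {
        passed = false;
      }
      return {passed, commit() {}, discard() {}};
    }
  };
}

// Run each step as its own attempt; once one fails, skip the rest.
async function runSteps(t, stepFns) {
  let failed = false;
  for (const fn of stepFns) {
    if (failed) {
      t.log(`skipped: ${fn.name}`);
      continue;
    }
    const attempt = await t.try(fn);
    if (attempt.passed) {
      attempt.commit();
      t.log(`step: ${fn.name}`);
    } else {
      attempt.discard();
      failed = true;
      t.log(`failed: ${fn.name}`);
    }
  }
}
```

Each step becomes a discardable attempt, so a failed step need not fail the surrounding test until the runner decides how to report it.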

@pierreh (Author) commented Apr 1, 2019

Thank you, I look forward to it landing!
