Add gherkin tests to verify evaluation details in finally hooks #290

Open

chrfwow wants to merge 4 commits into main
Conversation

@chrfwow commented Jan 14, 2025

This PR

Adds gherkin tests to verify the evaluation details passed to the finally hooks

Related Issues

Part of open-feature/.github#65, open-feature/php-sdk#140, open-feature/python-sdk#403, open-feature/dotnet-sdk#328, open-feature/java-sdk#1246, and open-feature/js-sdk#1109

Follow-up Tasks

Implement the test steps in the repos

@chrfwow requested a review from toddbaert as a code owner, January 14, 2025 12:00
@@ -57,11 +70,31 @@ Feature: Flag evaluation

# errors
Scenario: Flag not found
- When a non-existent string flag with key "missing-flag" is evaluated with details and a default value "uh-oh"
+ When a hook is added to the client
Member:

should this maybe be a "Given"?

Given a client hook

@@ -48,6 +48,19 @@ Feature: Flag evaluation
Then the resolved object details value should be contain fields "showImages", "title", and "imagesPerPage", with values "true", "Check out these pics!" and 100, respectively
And the variant should be "template", and the reason should be "STATIC"

Scenario: Passes evaluation details to after and finally hooks
When a hook is added to the client
And a boolean flag with key "boolean-flag" is evaluated with details and default value "false"
Member:

This clashes with the wording in your other PR; there are differences:

 Given a Boolean-flag with key "metadata-flag" and a default value "true"

Hence we should rework this to follow the same structure as the metadata PR.
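A possible rework along those lines might look roughly like this (an illustrative sketch only, assembled from the step wording that appears in the hunks further down; the exact phrasing was still under discussion):

 Scenario: Passes evaluation details to after and finally hooks
   Given a client with added hook
   And a boolean-flag with key "boolean-flag" and a default value "false"
   When the flag was evaluated with details
   Then "before" hooks should be called
   And "after, finally after" hooks should be called with evaluation details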

@@ -57,11 +70,31 @@ Feature: Flag evaluation

# errors
Scenario: Flag not found
- When a non-existent string flag with key "missing-flag" is evaluated with details and a default value "uh-oh"
+ When a hook is added to the client
+ And a non-existent string flag with key "missing-flag" is evaluated with details and a default value "uh-oh"
Member:

I do not think we need to deviate from the format

 Given a Boolean-flag with key "metadata-flag" and a default value "true"

If the flag is missing, we know that from the key, so I do not think we need to add the "non-existent" part.
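In that format, the step might simply read something like (illustrative wording only, keeping the existing key and default):

 Given a string-flag with key "missing-flag" and a default value "uh-oh"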

Then the default string value should be returned
And the reason should indicate an error and the error code should indicate a missing flag with "FLAG_NOT_FOUND"
Then "before, error" hooks should be called
Member:

Each hook should be its own step, as that makes writing and reusing steps easier.

Then the before hook should be called
And the error hook should be called

@@ -48,6 +48,19 @@ Feature: Flag evaluation
Then the resolved object details value should be contain fields "showImages", "title", and "imagesPerPage", with values "true", "Check out these pics!" and 100, respectively
And the variant should be "template", and the reason should be "STATIC"

Scenario: Passes evaluation details to after and finally hooks
Member:

Is this part of the evaluation, or should those new hooks be part of a hooks.feature annotated with @hooks? I think this is a little bit separate from the default evaluation and more its own thing in the spec.

@chrfwow (Author) commented Jan 16, 2025

I extracted the hook tests into their own file hooks.feature, and left the evaluation.feature file mostly as-is
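For reference, the skeleton of the extracted file might look roughly like this (a reconstruction from the hunks quoted further down; the @hooks tag and the feature name are assumptions based on the review comment above):

 @hooks
 Feature: Evaluation details in hooks

   # This test suite contains scenarios to test the functionality of hooks.

   Background:
     Given a provider is registered with cache disabled

   # ... followed by the "Passes evaluation details to after and finally hooks" scenario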

@chrfwow requested a review from aepfli, January 16, 2025 12:56
@aepfli (Member) left a comment

I still have some suggestions, and most of them are related to my limited knowledge ;) but maybe some are worth investigating

@@ -3,7 +3,7 @@ Feature: Flag evaluation
# This test suite contains scenarios to test the flag evaluation API.

Background:
- Given a provider is registered
+ Given a provider is registered with cache disabled
Member:

Suggested change:
- Given a provider is registered with cache disabled
+ Given a stable provider

# This test suite contains scenarios to test the functionality of hooks.

Background:
Given a provider is registered with cache disabled
Member:

Suggested change:
- Given a provider is registered with cache disabled
+ Given a stable provider

Given a client with added hook
And a boolean-flag with key "boolean-flag" and a default value "false"
When the flag was evaluated with details
Then "before" hooks should be called
Member:

Is it one hook or multiple ones? For me it looks more like

 Then the "before" hook should be called

or, to be more in sync with the already existing flagd feature files (https://github.com/open-feature/flagd-testbed/blob/main/gherkin/events.feature#L27),

 Then the "before" hook should have been executed

but I am not sure about this naming.

And a boolean-flag with key "boolean-flag" and a default value "false"
When the flag was evaluated with details
Then "before" hooks should be called
And "after, finally after" hooks should be called with evaluation details
Member:

Sorry, maybe I am missing insight, but what are "after" and "finally after"? Should this maybe be something like

 And the "after" hook should be called
 And the hook should receive evaluation details

The reason is that this is somehow two verification steps mixed into one. It would make sense if this happened more often, but the first part is actually already covered by the line above.

Author (chrfwow):

We have after hooks that are called after every successful evaluation, and we have finally after hooks that are called after every evaluation. Since this is a positive test, both of these hooks should be executed with the same evaluation details.
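To make the two shapes under discussion concrete, the combined step (as currently written) and a split alternative would look roughly like this; both are illustrative sketches, and the expected-details table is elided:

 # combined, as in the current PR
 Then "before" hooks should be called
 And "after, finally after" hooks should be called with evaluation details
   | ... expected evaluation details ... |

 # split into dedicated steps, as suggested above
 Then the "before" hook should be called
 And the "after" hook should be called
 And the "finally after" hook should be called
 And the hooks should receive evaluation details
   | ... expected evaluation details ... |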

Member:

The general recommendation for Gherkin files is to do less in one step so that steps are easier to reuse. This is a tricky one, as I can see the benefit of both sides, and most likely we will see during the implementation phase of the Gherkin tests whether this works as is or whether it would be easier to split it into dedicated steps per hook. I am fine with it as is currently.

Author (chrfwow):

I did not want to duplicate this quite large table

@chrfwow requested a review from aepfli, January 16, 2025 14:34
| string | variant | None |
| string | reason | ERROR |
| string | error_code | ErrorCode.TYPE_MISMATCH |
| string | error_message | Expected type <class 'int'> but got <class 'str'> |
Member:

Do we want to define the error message?

Author (chrfwow):

Hmm, the error message might be very implementation specific, so I agree with you. @aepfli @toddbaert what do you think?

Member:

I'm open to both approaches. I do think that a normalized error message could be handy, but we should remove any language-specific syntax, e.g. "Expected type Integer but got String", so we can more easily map this to a desired message in other languages. But I like the idea of standardized messages throughout the SDKs.
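In the table above, that would mean replacing the Python-specific message with a language-neutral one, for example (illustrative only):

 | string | error_message | Expected type Integer but got String |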

Author (chrfwow):

I agree with the standardized error messages. However, I do think this would be a nontrivial change. Should we do this in a follow-up issue? Then I would remove the error message from this PR.
