number validation #277
Conversation
Change prompt when number checker fails
Thanks for the PR. Please see the comments below.
```python
@pytest.mark.parametrize(
    "objective, input_data, expected_numbers",
    [
        # (
```
Are these commented inputs active? If not, they should be removed. I think they make the test case very hard to understand.
They are active, but running them all at the same time causes them to fail, @20001LastOrder. They each cover a different aspect of the test, so if you have a suggestion for a way to keep them, that would be nice.
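One way to keep the extra cases in the file without letting them break the run is to wrap each entry in `pytest.param` and mark the known-problematic ones as expected failures instead of commenting them out. This is only a sketch of the pattern, not code from this PR; `extract_numbers` and the test data below are hypothetical stand-ins for the number checker under discussion:

```python
import pytest


def extract_numbers(text):
    # Hypothetical stand-in for the number checker being tested.
    return [int(tok) for tok in text.split() if tok.isdigit()]


@pytest.mark.parametrize(
    "objective, input_data, expected_numbers",
    [
        pytest.param("count apples", "I have 3 apples", [3]),
        # Keep the failing case visible and documented rather than commented out:
        pytest.param(
            "count words",
            "three apples",
            [3],
            marks=pytest.mark.xfail(reason="word-form numbers are not extracted yet"),
        ),
    ],
)
def test_extract_numbers(objective, input_data, expected_numbers):
    assert extract_numbers(input_data) == expected_numbers
```

With `xfail`, the case still runs on every invocation and shows up in the report, so the different aspects of the test are preserved without turning the suite red.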
Hmm, this is quite strange, because each test case is meant to be isolated. I'll try to allocate some time to see if I can figure out why this happens.
It seems the failed tests are related to #287, and the commits I pushed could probably fix this issue as well. However, there is still one test failing for
It seems to be related to the way numbers are extracted. I got the following from the QA agent's response:
And these numbers were extracted:

The error was that

There were other errors I had to fix. Most of them came up because the number validation changed the flow of other tests. I think that, in general, for new features we should parameterize whether they run, and set the default to not run, to avoid too much influence on other test cases. It is always good to test everything with
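The "parameterize whether it should be run, default to not run" idea can be expressed as an opt-in flag, so existing tests keep the old flow unless they explicitly enable the new feature. A minimal sketch under assumed names (`run_agent` and `validate_numbers` are hypothetical, not the actual project API):

```python
def run_agent(query, validate_numbers=False):
    # Hypothetical agent wrapper: number validation only runs when a test
    # opts in, so pre-existing tests keep their original flow by default.
    answer = f"answer to: {query}"
    if validate_numbers:
        numbers = [int(tok) for tok in query.split() if tok.isdigit()]
        answer += f" (validated numbers: {numbers})"
    return answer


def test_default_flow_unchanged():
    # Existing tests do not opt in, so their expected output is unaffected.
    assert run_agent("add 1 and 2") == "answer to: add 1 and 2"


def test_number_validation_opt_in():
    # Only tests that set the flag exercise the new validation path.
    assert "validated numbers: [1, 2]" in run_agent("add 1 and 2", validate_numbers=True)
```

Defaulting the flag to off keeps the blast radius of a new feature small: only the tests written for it change behavior, which avoids the kind of cross-test breakage seen in this PR.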
LGTM now. Let's get this merged as it blocks the hydra configuration and contains some other fixes.