Describe known inputs when input type not found in TestingRunnableNode #257
Conversation
Signed-off-by: Adam Brennan <438217+acet@users.noreply.github.com>
Thank you for opening this pull request! 🙌
      // Not matching inputs and there is nothing to run
      throw new IllegalArgumentException(message);
    }

    private String knownInputsString() {
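For context, a minimal sketch of what such a helper could look like, assuming fixedOutputs is the node's map of mocked input/output pairs (which the review discussion below confirms); the body is illustrative, not the exact code from this PR:

    import java.util.Map;
    import java.util.Objects;
    import java.util.stream.Collectors;

    // Illustrative sketch only: renders the keys of the mocked
    // input/output map so they can be appended to the exception message.
    static String knownInputsString(Map<?, ?> fixedOutputs) {
      return fixedOutputs.keySet().stream()
          .map(Objects::toString)
          .collect(Collectors.joining(", ", "[", "]"));
    }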
Just being nosy because I want this improvement as well, but I wonder if we should filter the fixedOutputs for keys of type InputT, like the input above, to reduce noise? Especially for tasks with many inputs: if the mock is wrong, you'll potentially have to skim through many inputs in the error message to see whether one is missing or the mocked parameters are wrong.
Hi @AndersonReyes, thanks for the comment. I'm afraid my understanding of flytekit is limited to my own use of it, so I'm not quite following what you mean here. Could you perhaps outline an example? I will try to create a test for it when I have a little time.
Actually, ignore me: you are already doing it. For some reason I thought fixedOutputs here was all the mocked inputs for the entire workflow (which is why I said we might need to filter), but it's not. fixedOutputs is just for this runnable node, which makes more sense, so please disregard my comment above.
LGTM. Thank you for your contribution; I think it will be appreciated by users during debugging :)
Congrats on merging your first pull request! 🎉
TL;DR
When setting up a workflow test, failures can occur if incorrect parameters are used, and it is not immediately obvious from the current exception message where the expected and actual inputs differ.
Here I have included the known inputs that are checked when setting up the test, so that the user has a better idea of what is wrong.
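As a purely hypothetical illustration (the wording below is a sketch, not the verbatim message produced by this change; SumInput is an invented name), listing the known inputs turns a dead end into a comparison the user can act on:

    IllegalArgumentException: Can't find input SumInput{a=1, b=2}.
    Known inputs: [SumInput{a=1, b=3}]

Here it is immediately visible that the mocked input used b=3 while the workflow passed b=2.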
Type
Are all requirements met?
Complete description
I had to create workflow tests with several LaunchPlanOutput/TaskOutput mocks for BigQuery, GCS, Dataflow jobs, etc. Because of the large number of parameters involved, it was helpful when debugging TestingRunnableNode to compare my input with the known inputs after hitting the initial error message. I thought a more detailed error message could be of use to someone facing the same problem.
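A minimal sketch of the kind of test described above, assuming the SdkTestingExecutor API from flytekit-java's testing module; SumWorkflow, SumTask, SumInput, and SumOutput are hypothetical names, not from this PR:

    import org.flyte.flytekit.testing.SdkTestingExecutor;

    // Hypothetical workflow test. If the input the workflow passes to
    // SumTask at execution time does not equal the mocked input below, the
    // testing node throws IllegalArgumentException; with this change the
    // message also lists the known (mocked) inputs.
    SdkTestingExecutor.of(new SumWorkflow())
        .withTaskOutput(
            new SumTask(),
            SumInput.create(1, 2), // mocked input the task is expected to receive
            SumOutput.create(3))   // canned output returned for that input
        .execute();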
Tracking Issue
fixes https://github.com/flyteorg/flyte/issues/
Follow-up issue
NA