Make connector acceptance test gradle plugin work for destinations #22609
Conversation
/test connector=connectors/source-s3
Build Failed. Test summary info:
/test connector=connectors/source-github
/test connector=connectors/source-postgres
Build Failed. Test summary info:
Airbyte Code Coverage
/test connector=connectors/source-github
Build Passed. Test summary info:
It looks like a reasonable change to me, but it's not very clear why it solves the import error problem for destinations.
@@ -19,7 +19,7 @@ class AirbyteConnectorAcceptanceTestPlugin implements Plugin<Project> {
         '-w', "$targetMountDirectory",
         '-e', "AIRBYTE_SAT_CONNECTOR_DIR=${project.projectDir.absolutePath}",
         'airbyte/connector-acceptance-test:dev',
-        '-p', 'integration_tests.acceptance',
+        '--acceptance-test-config', targetMountDirectory,
are you sure this isn't a red herring? I don't follow why this fixes the issue
For example with #22393 the acceptance tests run correctly when called this way but fail with “could not find module integration_tests.acceptance” with the old call.
I’m not sure why though, that’s why I pinged you on this.
Ah, I think I see the problem.

-p integration_tests.acceptance tells the running pytest process (CAT in this case) to look at a file integration_tests/acceptance.py to configure tests.

In this situation the working directory is always set to the connector's directory. Python connectors always have an integration_tests/acceptance.py file, so this works for them. Even the Java connectors which implement CAT have that directory with an acceptance.py file (e.g. see postgres). That file is important because it allows the user to configure setup/teardown actions, e.g. create a database and tear it down.

I think the correct way to handle this would be to create an integration_tests/acceptance.py file for the connector in question, or maybe to conditionally add this flag depending on whether that file exists.

I think the --acceptance-test-config arg is unnecessary. By default SAT looks for an acceptance-test-config.yaml in the working directory (which is always set to the connector dir), so I think we can remove this arg.
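For illustration, here is a minimal sketch of what such an integration_tests/acceptance.py typically looks like. The pytest plugin module name (connector_acceptance_test.plugin) and the fixture body are assumptions for illustration, not copied from any specific connector:

```python
# integration_tests/acceptance.py -- minimal illustrative sketch (not from a real connector).
import pytest

# Assumption: the acceptance test suite is exposed as a pytest plugin under this name;
# the exact module name may differ between SAT/CAT versions.
pytest_plugins = ("connector_acceptance_test.plugin",)


@pytest.fixture(scope="session", autouse=True)
def connector_setup():
    """Set up external resources before the acceptance tests and tear them down afterwards."""
    # e.g. create and seed a test database here
    yield
    # e.g. drop the test database here
```

Because the -p flag only tells pytest to import this module, a connector that does not ship such a file fails at startup with the "could not find module integration_tests.acceptance" error mentioned above, which is why either creating the file or making the flag conditional resolves it.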
I went with this approach for now, running a test here #22393 (comment) to see whether it works.

@sherifnada OK, this seems to work. On the other PR, where I configured the s3 destination to run acceptance tests, I made the same changes as on this PR and the acceptance tests run correctly (showing the real problem with oneOf-usage): #22393 (comment)

Running tests on a source still works correctly: #22393 (comment)

To make sure the module is still picked up correctly when it's available, I broke the setup method here: 31d9c9b and, as expected, the tests failed with this error message: #22393 (comment)
Oh yes, you're right, this is what we had to do to make the acceptance tests run on Java sources.
FYI @clnoll in case this overlaps with your changes
…irbytehq#22609) Fix connector acceptance test gradle plugin
When trying to use the connector acceptance test plugin for destinations, it didn't execute correctly (see #22317). This change makes the gradle plugin call the acceptance test in the same way the acceptance-test-docker.sh file from the connector template does. I don't notice any difference in what's actually run as part of the test, but maybe I'm missing something here.