Make DataprocDeleteClusterOperator idempotent #60083
@shivannakarthik, @VladaZakharova and @MaksYermak Can you review now?
dataproc_delete_cluster_test.html System Test Logs - Updated
Looks good from reading, but it would really be good to have a review from one of the Google team members as well.
cc: @VladaZakharova
Requesting review: @MaksYermak @VladaZakharova
Can anyone help / suggest why the tests are failing?
…pache#60259)
* Initial plan
* Add portForward section to _build_skaffold_config for API server
* Add kubectl section instead of portForward section
* Fix: hooks should be placed under helm instead of kubectl
Co-authored-by: jason810496 <68415893+jason810496@users.noreply.github.com>
Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>

* refactor: Implemented BaseAsyncOperator in task-sdk
* refactor: PythonOperator now extends BaseAsyncOperator
* refactor: Also implement BaseAsyncOperator in the common-compat provider to support older Airflow versions
Co-authored-by: Jason(Zhe-You) Liu <68415893+jason810496@users.noreply.github.com>

…pache#59883) Extract the listeners infrastructure to a `shared/listeners/` library to eliminate cross dependencies between airflow-core and task-sdk.
- ListenerManager and the hookimpl marker now live in the shared library
- Hook specs are split by caller:
  - shared: lifecycle, taskinstance (called from both sdk and core)
  - core: dagrun, asset, importerrors (called only from core)
- sdk registers only the specs it actually uses (lifecycle, taskinstance)
- core registers all specs for full listener support

* Refactor airflow-core/tests cli commands to use SQLA2

…)" (apache#60266) This reverts commit 9cab6fb.

…pache#60264) When installing Airflow 2 in Breeze, we need to add pydantic as an extra, because pydantic was not a required dependency in Airflow 2, and installing Airflow even with constraints will not downgrade pydantic to the version that was supported in Airflow 2. When we detect that Airflow 2 is installed (either from the specified version number or by retrieving the version from the dist package), we simply extend the extras with pydantic, which causes the Airflow installation to downgrade pydantic to the version specified in the constraints of the selected Airflow version.

…tests. (apache#60027) Co-authored-by: Sameer Mesiah <smesiah971@gmail.com>
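The Breeze pydantic-extra logic described in the apache#60264 entry above can be sketched roughly as follows. The function name and the exact way the major version is parsed are assumptions for illustration, not the actual Breeze code:

```python
# Sketch of the Breeze logic from the apache#60264 commit message:
# when an Airflow 2 install is detected, extend the extras with
# "pydantic" so that constraints can downgrade it to a supported version.
# Function name and version parsing are assumptions, not the real API.

def extend_extras_for_airflow2(airflow_version: str, extras: list[str]) -> list[str]:
    """Return extras, adding 'pydantic' when Airflow 2 is detected."""
    major = int(airflow_version.split(".")[0])
    if major < 3 and "pydantic" not in extras:
        return [*extras, "pydantic"]
    return list(extras)
```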
Kindly approve workflows.
It seems that workflows were approved.
Can we have a review, @VladaZakharova @shivannakarthik @shahar1?
My approval is a bit hesitant - IMO we should have an explicit flag for that (something like
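The explicit-flag idea from the comment above could look roughly like this. The flag name `skip_if_not_exists`, the stubbed hook, and the local `NotFound` class are all assumptions for illustration; the real operator would use `DataprocHook` and `google.api_core.exceptions.NotFound`:

```python
# Hypothetical sketch of an explicit opt-in/opt-out flag for the
# idempotent-delete behavior. Flag name and stubs are assumptions.

class NotFound(Exception):
    """Stand-in for google.api_core.exceptions.NotFound."""

class FakeHook:
    """Hook stub whose delete always reports a missing cluster."""
    def delete_cluster(self, cluster_name):
        raise NotFound(f"Cluster {cluster_name!r} not found")

class DeleteClusterSketch:
    def __init__(self, cluster_name, skip_if_not_exists=True):
        self.cluster_name = cluster_name
        self.skip_if_not_exists = skip_if_not_exists

    def execute(self, hook):
        try:
            hook.delete_cluster(self.cluster_name)
            return "deleted"
        except NotFound:
            if not self.skip_if_not_exists:
                raise  # old behavior: a missing cluster fails the task
            return "skipped"  # flagged behavior: idempotent no-op
```

With such a flag, users who rely on the old fail-fast behavior could keep it by passing `skip_if_not_exists=False`.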
Awesome work, congrats on your first merged pull request! You are invited to check our Issue Tracker for additional contributions.
Let's create an issue to update the documentation accordingly.
Feel free, but please keep in mind that specifically for documentation it's quicker just to create the PR directly instead of creating an issue for that :) |
Fixes: #59812 (comment)

If the cluster does not exist, DataprocDeleteClusterOperator will simply log a meaningful message and exit.
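The described behavior can be sketched as the following pattern. The hook and the `NotFound` exception are stubbed here for illustration; the real operator presumably catches `google.api_core.exceptions.NotFound` raised by the Dataproc hook:

```python
# Minimal sketch of the idempotent-delete pattern described above.
# Stubs stand in for DataprocHook and google.api_core.exceptions.NotFound.
import logging

log = logging.getLogger(__name__)

class NotFound(Exception):
    """Stand-in for google.api_core.exceptions.NotFound."""

class StubHook:
    """Hook stub tracking which clusters currently exist."""
    def __init__(self, existing_clusters):
        self.existing = set(existing_clusters)

    def delete_cluster(self, cluster_name):
        if cluster_name not in self.existing:
            raise NotFound(f"Cluster {cluster_name!r} not found")
        self.existing.remove(cluster_name)

def delete_cluster_idempotent(hook, cluster_name):
    """Delete the cluster; succeed quietly if it is already gone."""
    try:
        hook.delete_cluster(cluster_name)
        log.info("Deleted cluster %s", cluster_name)
        return "deleted"
    except NotFound:
        log.info("Cluster %s not found; nothing to delete.", cluster_name)
        return "skipped"
```

Running the delete twice against the same stub shows the idempotent behavior: the first call deletes, the second logs and exits without raising.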