Add pdbs for triggerer and workers #59068
Conversation
Congratulations on your first Pull Request and welcome to the Apache Airflow community! If you have any issues or are unsure about anything, please check our Contributors' Guide (https://github.com/apache/airflow/blob/main/contributing-docs/README.rst)
Force-pushed from e7c54a9 to f9cdf54
Hi @mrrsm, could you add test cases?
jscheffl
left a comment
Thanks for the contribution. I think this makes sense, and by default it is disabled, which is fine. I am not sure that the defaults are also fine when enabled, but that depends very much on the deployment and scope.
I approve and will keep it open for 1-2 days for another pair of eyes; otherwise it can be merged, in my view.
Miretpl
left a comment
In general looks good, small nits.
Also, one question to @jscheffl and/or @jedcunningham. PR #52357 introduced the workers.kubernetes and workers.celery sections. Changes in this PR will only affect the workers created by CeleryExecutor. So maybe this and similar new values should be placed under the proper sections (maybe it is worth discussing on the devlist)? What do you think?
@Miretpl Thank you for the review/changes. In an effort not to add a different way to do PDBs, should I also update the PDBs and tests that already exist for other components (scheduler, api-server, etc.) to match the style in this PR, and also update the tests to remove the unneeded Kubernetes version test?
@mrrsm, consistency across the entire Helm Chart would be something nice to have, but I believe that combining everything in one PR can become a little harder to merge (based on my experience). Personally, I would probably do 2 additional PRs after this one:
@Miretpl now that this is updated: you previously requested changes, is it OK to merge? (In my view it is good.)
Miretpl
left a comment
@jscheffl in general it looks good to me. I have just placed one comment that should resolve the tests. If they are green, we can merge it.
Co-authored-by: Przemysław Mirowski <miretpl@gmail.com>
Awesome work, congrats on your first merged pull request! You are invited to check our Issue Tracker for additional contributions.
* fix(chart): Add pdbs for triggerer and workers
* fix(chart): Add pdbs for triggerer and workers
* Update chart/templates/triggerer/triggerer-poddisruptionbudget.yaml
* Update chart/templates/triggerer/triggerer-poddisruptionbudget.yaml
* Update chart/templates/workers/worker-poddisruptionbudget.yaml
* fix: Remove unneeded tests
* Update chart/templates/triggerer/triggerer-poddisruptionbudget.yaml

Co-authored-by: Przemysław Mirowski <miretpl@gmail.com>
Description
This adds the ability to set PodDisruptionBudget resources for the triggerer and/or worker pods. These are disabled by default so as not to cause a breaking change in the Helm chart.
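Since the budgets are opt-in, enabling them would look something like the following values sketch. The key names here are illustrative, modeled on the pattern other components' PDBs in the chart use; verify them against the merged values.yaml:

```yaml
# Hypothetical values.yaml snippet -- key names follow the style used by
# other components' PDBs in the chart, but should be checked against the
# actual merged chart values.
triggerer:
  podDisruptionBudget:
    enabled: true          # disabled by default, so this is opt-in
    config:
      maxUnavailable: 1    # allow at most one triggerer pod down at a time

workers:
  podDisruptionBudget:
    enabled: true
    config:
      maxUnavailable: 1    # allow at most one worker pod down at a time
```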
Why
We are running multiple instances of the triggerer pods as well as the worker pods. When a cluster event may take out a node, having the PDB ensures that multiple instances of the same component do not go down at the same time.
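For context, a chart template like this would render a standard Kubernetes `policy/v1` PodDisruptionBudget per component. The sketch below shows the general shape of such an object for the workers; the names and labels are illustrative, not taken from the actual template:

```yaml
# Sketch of a rendered PodDisruptionBudget for the workers (illustrative
# names/labels). During a voluntary disruption such as a node drain, the
# eviction API refuses to evict a pod if doing so would exceed maxUnavailable.
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: airflow-worker-pdb
spec:
  maxUnavailable: 1        # at most one worker may be voluntarily evicted at once
  selector:
    matchLabels:
      component: worker    # must match the worker pods' labels
```

Note that PDBs only guard against voluntary disruptions (drains, evictions), not involuntary ones such as hardware failure.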