Add ability to create custom trigger rules through plugins #10758
Comments
Thanks for opening your first issue here! Be sure to follow the issue template!
@maiorBoltach You can define custom trigger rules. See: https://airflow.readthedocs.io/en/latest/_api/airflow/models/index.html#airflow.models.BaseOperator.deps. It's not easy, but it's possible.
@mik-laj
You can use a cluster policy and monkey-patching to change the behavior of all operators at run time. I agree that this is not the best solution, but developing a final solution that changes a key concept can take a long time. If no one is interested in preparing the proposed changes and then discussing them on the mailing list, there is little chance that it will happen. Filing a ticket only signals that a need exists; without a contribution, no ticket will be solved.
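The cluster-policy workaround mentioned above can be sketched as follows. This is a minimal illustration, not Airflow's documented solution to this issue: it assumes a `task_policy` hook in `airflow_local_settings.py` (the hook Airflow looks up at DAG parse time), and the task-id pattern and rule chosen here are invented for the example.

```python
# Hypothetical airflow_local_settings.py sketch: a cluster policy that
# forces a particular trigger_rule on tasks matching a naming convention.
# Airflow calls task_policy(task) for every task when the DAG is parsed,
# so mutating the task here changes its behavior cluster-wide.
# The "join_" prefix and the "all_done" rule are illustrative assumptions.

def task_policy(task):
    # Override the trigger rule only for tasks following the convention.
    if task.task_id.startswith("join_"):
        task.trigger_rule = "all_done"
```

This changes every matching operator without touching individual DAG files, which is what makes it a (blunt) substitute for a pluggable trigger-rule API.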
closing in favor of #17010 |
Description
Create an API for defining new types of trigger_rules, as is already possible for macros or views.
Use case / motivation
Currently, there are 8 different types of rules for triggering subsequent tasks, but they do not allow you to flexibly set more complex rules for the entire pipeline (not from the point of view of business requirements for a DAG, but technical ones).
For example, at the moment there is only one rule (all_done) that waits for all upstream tasks to finish. It is impossible to implement, for example, the following rule without complications:
[task1, task2, task3] >> task4
task4 should be triggered when all upstream tasks are done and at least one of them succeeded.
We have to create more complicated DAGs instead:
[task1, task2, task3] >> check_task (all_done trigger_rule) >> task_4
check_task is a PythonOperator that checks the statuses of all upstream tasks. If all upstream tasks failed, it raises an AirflowException so that task_4 is marked upstream_failed.
As you can see, this is a very simple example, but it requires a complicated solution.
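The check_task logic described above could be sketched roughly like this. The sketch is self-contained for illustration: `AirflowException` here is a stand-in for `airflow.exceptions.AirflowException`, and `upstream_states` stands in for the states a real PythonOperator callable would read from its task-instance context; both names are illustrative, not the project's API.

```python
class AirflowException(Exception):
    """Stand-in for airflow.exceptions.AirflowException."""


def check_upstream(upstream_states):
    """Succeed only when every upstream task has finished and at
    least one succeeded; otherwise raise so the downstream task_4
    would be marked upstream_failed."""
    done = {"success", "failed", "skipped"}
    if not all(state in done for state in upstream_states):
        raise AirflowException("some upstream tasks have not finished")
    if "success" not in upstream_states:
        raise AirflowException("no upstream task succeeded")
    return True
```

Wrapping this in a PythonOperator with trigger_rule="all_done" reproduces the workaround: the extra task exists only to encode a rule ("all done and one success") that the built-in trigger rules cannot express directly.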
There are more cases like this, such as a 'two_success' rule and various combinations with 'all_done'.
Related Issues
#1432