Reduce "start-up" time for tasks in CeleryExecutor #11372
Conversation
This is similar to apache#11327, but for Celery this time. The impact is not quite as pronounced here (for simple dags at least), but it takes the average queued-to-start delay from 1.5s to 0.4s.
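For context, the change lives in the Celery worker's task entry point. Here is a simplified sketch of the dispatch (not the exact PR code; it assumes the config flag from #6905 is exposed as `settings.EXECUTE_TASKS_NEW_PYTHON_INTERPRETER`, and `_execute_in_fork` is sketched further down in the review thread):

```python
import subprocess

from celery import Celery

from airflow import settings

app = Celery()  # simplified; the real app is configured from airflow.cfg


def _execute_in_subprocess(command_to_exec):
    # Old path: spawn a whole new "airflow tasks run ..." interpreter,
    # paying interpreter start-up and import cost for every task.
    subprocess.check_call(command_to_exec, close_fds=True)


@app.task
def execute_command(command_to_exec):
    # Default path after this PR: fork the already-initialised worker
    # process instead of exec-ing a fresh interpreter.
    if settings.EXECUTE_TASKS_NEW_PYTHON_INTERPRETER:
        _execute_in_subprocess(command_to_exec)
    else:
        _execute_in_fork(command_to_exec)
```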
The review thread below is attached to this hunk:

```python
def _execute_in_fork(command_to_exec: CommandType) -> None:
    pid = os.fork()
```
Reviewer: Do we need to fork it? Shouldn't we just execute it in the current process (the Celery worker process)?
Author: Can't, because of the `logging.shutdown()` at the end of `task_run` (which we need to keep, as that's when remote logs are uploaded). #11327 (comment)
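To make that trade-off concrete, here is a simplified sketch of the fork path (assumed shape, not the exact PR code): the child executes the parsed CLI command in-process, then calls `logging.shutdown()` on its own copy of the logging state before exiting, leaving the worker's logging untouched.

```python
import logging
import os


def _execute_in_fork(command_to_exec):
    pid = os.fork()
    if pid:
        # Parent (the Celery worker process): block until the child exits so
        # the Celery slot isn't freed before the task has actually finished.
        _, exit_status = os.waitpid(pid, 0)
        if exit_status != 0:
            raise Exception(f"Command {command_to_exec} failed")
        return

    ret = 1
    try:
        # Child: re-use the already-imported Airflow rather than paying the
        # start-up cost of a brand new "airflow tasks run" interpreter.
        from airflow.cli.cli_parser import get_parser  # path as of Airflow 2.0

        parser = get_parser()
        args = parser.parse_args(command_to_exec[1:])  # strip leading "airflow"
        args.func(args)
        ret = 0
    finally:
        # The reason for forking at all: logging.shutdown() flushes handlers
        # (this is when remote logs get uploaded) and must only tear down the
        # child's logging, never the long-lived worker's.
        logging.shutdown()
        os._exit(ret)
```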
Spend less time waiting for LocalTaskJob's subprocess to finish (#11373)

This is about a 20% speed up for short-running tasks! This change doesn't affect the "duration" reported in the TI table, but it does affect the time before the slot is freed up from the executor, which does affect overall task/dag throughput. (All these tests are with the same BashOperator tasks, just running `echo 1`.)

**Before**

```
Task airflow.executors.celery_executor.execute_command[5e0bb50c-de6b-4c78-980d-f8d535bbd2aa] succeeded in 6.597011625010055s: None
Task airflow.executors.celery_executor.execute_command[0a39ec21-2b69-414c-a11b-05466204bcb3] succeeded in 6.604327297012787s: None
```

**After**

```
Task airflow.executors.celery_executor.execute_command[57077539-e7ea-452c-af03-6393278a2c34] succeeded in 1.7728257849812508s: None
Task airflow.executors.celery_executor.execute_command[9aa4a0c5-e310-49ba-a1aa-b0760adfce08] succeeded in 1.7124666879535653s: None
```

**After, including the change from #11372**

```
Task airflow.executors.celery_executor.execute_command[35822fc6-932d-4a8a-b1d5-43a8b35c52a5] succeeded in 0.5421732050017454s: None
Task airflow.executors.celery_executor.execute_command[2ba46c47-c868-4c3a-80f8-40adaf03b720] succeeded in 0.5469810889917426s: None
```
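The numbers above measure the point at which the supervising job notices the task subprocess has exited. A generic sketch of the kind of change involved (illustrative only, not the actual LocalTaskJob code): wait on the child with a short timeout instead of sleeping a full interval between return-code checks.

```python
import subprocess

# Illustrative sketch: notice the child's exit almost immediately by waiting
# with a short timeout, rather than sleeping a fixed interval between checks.
proc = subprocess.Popen(["echo", "1"])
while True:
    try:
        return_code = proc.wait(timeout=0.05)  # hypothetical polling timeout
        break
    except subprocess.TimeoutExpired:
        pass  # do periodic heartbeat bookkeeping here, then keep waiting
print(f"task exited with {return_code}")
```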
Closes #6905 - the config option added for LocalExecutor is used here too.
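If the final naming held, that option shipped in Airflow 2.0 as `[core] execute_tasks_new_python_interpreter`; a minimal sketch of reading it (the option name is the only assumption here):

```python
from airflow.configuration import conf

# False (the default) means LocalExecutor and CeleryExecutor fork the
# pre-initialised worker process; True restores the old behaviour of
# spawning a fresh "airflow tasks run" interpreter per task.
use_new_interpreter = conf.getboolean("core", "execute_tasks_new_python_interpreter")
```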
Data on this for a simple 10-task sequential DAG: [timing chart]
This was discovered in my general benchmarking and profiling of the scheduler for AIP-15, but it's not tied to any of that work. There are more of these kinds of improvements coming; each is unrelated, but they all add up.