@ashb ashb commented Oct 9, 2020

This is similar to #11327, but for Celery this time.

The impact is not quite as pronounced here (for simple DAGs, at least),
but it takes the average queued-to-start delay from 1.5s to 0.4s.

Closes #6905 - the config option added for LocalExecutor is used here too.

Data on this for a simple 10-task sequential DAG:

```sql
SELECT execution_date,
    min(start_date - queued_dttm) AS min_queued_delay,
    max(start_date - queued_dttm) AS max_queued_delay,
    avg(start_date - queued_dttm) AS avg
FROM task_instance
WHERE dag_id = 'scenario1_case2_03_1'
GROUP BY execution_date;
```

| execution_date         | min_queued_delay | max_queued_delay | avg             | with change? |
|------------------------|------------------|------------------|-----------------|--------------|
| 2020-10-08 01:00:00+01 | 00:00:00.348837  | 00:00:00.473693  | 00:00:00.396751 | Yes          |
| 2020-10-08 02:00:00+01 | 00:00:01.432304  | 00:00:01.574801  | 00:00:01.478422 | No           |

This was discovered in my general benchmarking and profiling of the scheduler for AIP-15, but it isn't tied to any of that work. There are more of these kinds of improvements coming; each is unrelated, but they all add up.


@ashb ashb added area:Scheduler including HA (high availability) scheduler area:performance labels Oct 9, 2020
@ashb ashb requested review from kaxil, mik-laj, potiuk and turbaszek October 9, 2020 09:54


```python
def _execute_in_fork(command_to_exec: CommandType) -> None:
    pid = os.fork()
```
Member:
Do we need to fork it? Shouldn't we just execute it in the current process (the Celery worker process)?

ashb (Member, Author):

Can't, because of the `logging.shutdown()` at the end of `task_run`, which we need to keep, as that's when remote logs are uploaded. See #11327 (comment).

@ashb ashb merged commit fe0bf6e into apache:master Oct 9, 2020
@ashb ashb deleted the speedup-celery-executor branch October 9, 2020 12:18
ashb added a commit to astronomer/airflow that referenced this pull request Oct 12, 2020
This is about a 20% speed-up for short-running tasks!

This change doesn't affect the "duration" reported in the TI table, but
it does affect the time before the slot is freed up in the executor,
which affects overall task/DAG throughput.

(All these tests are with the same BashOperator tasks, just running `echo 1`.)
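The "spend less time waiting" idea can be illustrated with a minimal sketch. This is not Airflow's actual `LocalTaskJob` loop; the function name and parameters are hypothetical. The point is that polling the subprocess on a short interval, decoupled from the heartbeat interval, frees the executor slot almost as soon as the task exits:

```python
import subprocess
import time


def wait_for_task(proc: subprocess.Popen,
                  heartbeat_interval: float = 5.0,
                  poll_interval: float = 0.05) -> int:
    """Poll the task subprocess frequently, but heartbeat only on the
    configured interval, so the slot is released within ~poll_interval
    of the task finishing rather than up to heartbeat_interval later."""
    last_heartbeat = time.monotonic()
    while True:
        ret = proc.poll()
        if ret is not None:
            return ret  # task finished; the slot can be freed now
        if time.monotonic() - last_heartbeat >= heartbeat_interval:
            last_heartbeat = time.monotonic()
            # a heartbeat callback would run here in the real job
        time.sleep(poll_interval)
```

With the old behaviour (sleeping a full heartbeat interval between checks), a task finishing just after a check would sit "done but not collected" for seconds, which is exactly the per-task overhead the benchmarks below measure.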

**Before**

```
Task airflow.executors.celery_executor.execute_command[5e0bb50c-de6b-4c78-980d-f8d535bbd2aa] succeeded in 6.597011625010055s: None
Task airflow.executors.celery_executor.execute_command[0a39ec21-2b69-414c-a11b-05466204bcb3] succeeded in 6.604327297012787s: None

```

**After**

```
Task airflow.executors.celery_executor.execute_command[57077539-e7ea-452c-af03-6393278a2c34] succeeded in 1.7728257849812508s: None
Task airflow.executors.celery_executor.execute_command[9aa4a0c5-e310-49ba-a1aa-b0760adfce08] succeeded in 1.7124666879535653s: None
```

**After, including change from apache#11372**

```
Task airflow.executors.celery_executor.execute_command[35822fc6-932d-4a8a-b1d5-43a8b35c52a5] succeeded in 0.5421732050017454s: None
Task airflow.executors.celery_executor.execute_command[2ba46c47-c868-4c3a-80f8-40adaf03b720] succeeded in 0.5469810889917426s: None
```
ashb added a commit that referenced this pull request Oct 13, 2020
* Spend less time waiting for LocalTaskJob's subprocess to finish (#11373)

