
[DOC] Composability of different threading runtimes #26950

Merged

Conversation

peterchen-intel (Contributor) commented Oct 8, 2024

Details:

  • Document composability of different threading runtimes when running inferences and other application logic on CPU device
  • Document threading impact for LLM with Optimum Intel API

Tickets:

@github-actions github-actions bot added the category: docs OpenVINO documentation label Oct 8, 2024
Signed-off-by: Chen, Peter <peter.chen@intel.com>

.. _Inference_threads_wait_actively:

Inference threads wait actively
Contributor:

Since this is a generic threading tips page, I would ask you to formalize the description in a more formal way:

  1. The case you are describing is an example of serial composability of different threading runtimes, so that should be the name of the section.
  2. We need to define the application we are focusing on: a pipeline with multiple OV inferences interleaved with some other application logic (maybe calls to another library), executed sequentially.
  3. We need to describe the reason for the performance issues in that scenario. You already shared some info about active searching for work, which takes CPU resources. It is worth explicitly mentioning that this is true for both TBB and OMP, so thread migration between areas will happen twice per pipeline iteration.
  4. 1ms is very specific to a particular library - I would avoid detailed numbers.
  5. Let's describe all possible solutions:
    5.1. The most effective is to use oneTBB for all computations made in the pipeline.
    5.2. Rebuilding OV with OMP from source is another option.
    5.3. Limit the number of threads / disable pinning for OV or other parts of the pipeline to let the OS do better scheduling.
    5.4. In case the second runtime is OMP, the user can set OMP_WAIT_POLICY=PASSIVE to minimize the performance gap on the OMP->TBB runtime switch.

@wangleis Do you have anything to add?
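
Point 5.4 above can be sketched as a minimal Python setup. This is a hedged illustration: the `OMP_NUM_THREADS` cap and the value 4 are illustrative additions, not part of the original suggestion, and these variables only take effect if set before the OpenMP runtime initializes.

```python
import os

# Sketch for point 5.4: when the non-OpenVINO part of the pipeline runs
# on an OpenMP runtime, ask idle OMP workers to sleep instead of spinning,
# which shrinks the performance gap on the OMP -> TBB runtime switch.
# These variables are read when the OpenMP runtime initializes, so set
# them before importing any library that links against OpenMP.
os.environ["OMP_WAIT_POLICY"] = "PASSIVE"

# Illustrative extra step: also cap the OMP pool so it does not contend
# with OpenVINO's TBB workers for the same cores (the count 4 is arbitrary).
os.environ["OMP_NUM_THREADS"] = "4"
```

With libgomp, PASSIVE makes threads block rather than busy-wait after their parallel region ends, which is exactly the behavior the serial OV-then-library pipeline wants.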

Contributor:

LGTM, Thanks.

##########################

As mentioned in :ref:`Inference threads wait actively <Inference_threads_wait_actively>`, OpenVINO's default threading library,
oneTBB, keeps CPU cores active for 1ms after inference is done. When using the Optimum Intel Python API,
Contributor:

Same here: 1ms is very specific to a particular library - I would avoid detailed numbers.

@peterchen-intel peterchen-intel changed the title [DOC] CPU inference threads [DOC] Composability of different threading runtimes Oct 16, 2024
OpenVINO is by default built with the `oneTBB <https://github.com/oneapi-src/oneTBB/>`__ threading library.
oneTBB has a worker_wait feature, similar to `OpenMP <https://www.openmp.org/>`__ `busy-wait <https://gcc.gnu.org/onlinedocs/libgomp/GOMP_005fSPINCOUNT.html>`__, which makes OpenVINO inference
threads wait actively for a while after a task is done. The intention is to avoid leaving the CPU inactive in the
tranaction time between inference tasks.
Contributor:

tranaction?

Contributor Author:

Changed to transition

- The most effective way is to use oneTBB for all computations made in the pipeline.
- Rebuild OpenVINO with OpenMP, with the other application logic using OpenMP as well.
- Limit the number of threads of OpenVINO and the other parts to let the OS do better scheduling.
- Set the environment variable `OMP_WAIT_POLICY <https://gcc.gnu.org/onlinedocs/libgomp/OMP_005fWAIT_005fPOLICY.html>`__ to PASSIVE, which disables OpenMP `busy-wait <https://gcc.gnu.org/onlinedocs/libgomp/GOMP_005fSPINCOUNT.html>`__
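
The thread-limiting option above can be sketched as a small Python helper. This is a minimal sketch, assuming the OpenVINO Python API: the property string names mirror ``ov::inference_num_threads`` and ``ov::hint::enable_cpu_pinning``, and the helper itself plus the thread count 4 are illustrative, not part of the documented API.

```python
# Build a CPU plugin config that limits OpenVINO's thread count and
# disables core pinning, so the OS can schedule OpenVINO and the other
# threading runtime side by side on the same cores.
def cpu_threading_config(num_threads: int) -> dict:
    return {
        "INFERENCE_NUM_THREADS": num_threads,  # cap OpenVINO's worker pool
        "ENABLE_CPU_PINNING": False,           # let the OS migrate threads
    }

# Usage with OpenVINO would look like (not executed here):
#   import openvino as ov
#   core = ov.Core()
#   compiled = core.compile_model(model, "CPU", cpu_threading_config(4))
config = cpu_threading_config(4)
```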
Contributor:

Still need to mention that the other part of the application should use OMP underneath.

Contributor Author:

Updated. Please help to review again.

peterchen-intel and others added 3 commits October 20, 2024 20:18
Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>
@peterchen-intel peterchen-intel added this pull request to the merge queue Oct 26, 2024
Merged via the queue into openvinotoolkit:master with commit 0c07136 Oct 26, 2024
126 checks passed
@peterchen-intel peterchen-intel deleted the docs/cpu/threading branch October 26, 2024 07:13
CuriousPanCake pushed a commit to CuriousPanCake/openvino that referenced this pull request Nov 6, 2024
…26950)

### Details:
- *Document composability of different threading runtimes when running inferences and other application logic on CPU device*
- *Document threading impact for LLM with Optimum Intel API*

### Tickets:
 - *CVS-150542, CVS-145996*

---------

Signed-off-by: Chen, Peter <peter.chen@intel.com>
Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>