2.1.30 docs update (#2849)
jingxu10 authored May 7, 2024
1 parent 819b0cb commit 3ab86c2
Showing 49 changed files with 101 additions and 178 deletions.
2 changes: 1 addition & 1 deletion xpu/2.1.30+xpu/_sources/tutorials/contribution.md.txt
@@ -16,7 +16,7 @@ Once you implement and test your feature or bug-fix, submit a Pull Request to ht

## Developing Intel® Extension for PyTorch\* on XPU

A full set of instructions on installing Intel® Extension for PyTorch\* from source is in the [Installation document](../../../index.html#installation?platform=gpu&version=v2.1.30%2Bxpu).
A full set of instructions on installing Intel® Extension for PyTorch\* from source is in the [Installation document](https://intel.github.io/intel-extension-for-pytorch/index.html#installation?platform=gpu&version=v2.1.30%2bxpu).

To develop on your machine, here are some tips:

73 changes: 15 additions & 58 deletions xpu/2.1.30+xpu/_sources/tutorials/features/DDP.md.txt
@@ -3,7 +3,7 @@ DistributedDataParallel (DDP)

## Introduction

`DistributedDataParallel (DDP)` is a PyTorch\* module that implements multi-process data parallelism across multiple GPUs and machines. With DDP, the model is replicated on every process, and each model replica is fed a different set of input data samples. DDP enables overlapping between gradient communication and gradient computations to speed up training. Please refer to [DDP Tutorial](https://pytorch.org/tutorials/intermediate/ddp_tutorial.html) for an introduction to DDP.
`DistributedDataParallel (DDP)` is a PyTorch\* module that implements multi-process data parallelism across multiple GPUs and machines. With DDP, the model is replicated on every process, and each model replica is fed a different set of input data samples. Please refer to [DDP Tutorial](https://pytorch.org/tutorials/intermediate/ddp_tutorial.html) for an introduction to DDP.

The PyTorch `Collective Communication (c10d)` library supports communication across processes. To run DDP on GPU, we use Intel® oneCCL Bindings for Pytorch\* (formerly known as torch-ccl) to implement the PyTorch c10d ProcessGroup API (https://github.com/intel/torch-ccl). It holds PyTorch bindings maintained by Intel for the Intel® oneAPI Collective Communications Library\* (oneCCL), a library for efficient distributed deep learning training implementing such collectives as `allreduce`, `allgather`, and `alltoall`. Refer to [oneCCL Github page](https://github.com/oneapi-src/oneCCL) for more information about oneCCL.

@@ -14,63 +14,25 @@ To use PyTorch DDP on GPU, install Intel® oneCCL Bindings for Pytorch\* as desc
### Install PyTorch and Intel® Extension for PyTorch\*

Make sure you have installed PyTorch and Intel® Extension for PyTorch\* successfully.
For more detailed information, check [installation guide](../../../../index.html#installation).
For more detailed information, check [Installation Guide](https://intel.github.io/intel-extension-for-pytorch/index.html#installation?platform=gpu).

### Install Intel® oneCCL Bindings for Pytorch\*

#### Install from source:
#### [Recommended] Install from prebuilt wheels

Installation for CPU:
1. Install oneCCL package:

```bash
git clone https://github.com/intel/torch-ccl.git -b v2.1.0+cpu
cd torch-ccl
git submodule sync
git submodule update --init --recursive
python setup.py install
```

Installation for GPU:

- Clone the `oneccl_bindings_for_pytorch`

```bash
git clone https://github.com/intel/torch-ccl.git -b v2.1.300+xpu
cd torch-ccl
git submodule sync
git submodule update --init --recursive
```

- Install `oneccl_bindings_for_pytorch`

Option 1: build with oneCCL from third party

```bash
COMPUTE_BACKEND=dpcpp python setup.py install
```

Option 2: build without oneCCL and use oneCCL in system (Recommend)

We recommend to use apt/yum/dnf to install the oneCCL package. Refer to [Base Toolkit Installation](https://www.intel.com/content/www/us/en/developer/tools/oneapi/base-toolkit-download.html) for adding the APT/YUM/DNF key and sources for first-time users.
We recommend using apt/yum/dnf to install the oneCCL package. Refer to [Base Toolkit Installation](https://www.intel.com/content/www/us/en/developer/tools/oneapi/base-toolkit-download.html) for adding the APT/YUM/DNF key and sources for first-time users.

Reference commands:

```bash
sudo apt install intel-oneapi-ccl-devel=2021.11.1-6
sudo yum install intel-oneapi-ccl-devel=2021.11.1-6
sudo dnf install intel-oneapi-ccl-devel=2021.11.1-6
sudo apt install intel-oneapi-ccl-devel=2021.12.0-309
sudo yum install intel-oneapi-ccl-devel-2021.12.0-309
sudo dnf install intel-oneapi-ccl-devel-2021.12.0-309
```

Compile with commands below.

```bash
export INTELONEAPIROOT=/opt/intel/oneapi
USE_SYSTEM_ONECCL=ON COMPUTE_BACKEND=dpcpp python setup.py install
```

#### Install from prebuilt wheel:

Prebuilt wheel files for CPU, GPU with generic Python\* and GPU with Intel® Distribution for Python\* are released in separate repositories.
2. Install `oneccl_bindings_for_pytorch`

```
# Generic Python* for CPU
@@ -85,25 +47,19 @@ Installation from either repository shares the command below. Replace the place
python -m pip install oneccl_bind_pt --extra-index-url <REPO_URL>
```
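
As a quick sanity check (a suggestion, not part of the original instructions), verify that the binding imports cleanly; if the import fails with missing libraries, source the environment scripts described under Runtime Dynamic Linking below.

```python
# Quick sanity check that the binding imports after installation.
import torch
import oneccl_bindings_for_pytorch  # noqa: F401 -- registers the "ccl" backend

print(torch.__version__)
```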

### Runtime Dynamic Linking

- If torch-ccl is built with oneCCL from third party or installed from prebuilt wheel:
Dynamic link oneCCL and Intel MPI libraries:
#### Install from source

```bash
source $(python -c "import oneccl_bindings_for_pytorch as torch_ccl;print(torch_ccl.cwd)")/env/setvars.sh
```
Refer to [Installation Guide](https://github.com/intel/torch-ccl/tree/ccl_torch2.1.300+xpu?tab=readme-ov-file#install-from-source) to install Intel® oneCCL Bindings for Pytorch\* from source.

Dynamic link oneCCL only (not including Intel MPI):
### Runtime Dynamic Linking

```bash
source $(python -c "import oneccl_bindings_for_pytorch as torch_ccl;print(torch_ccl.cwd)")/env/vars.sh
```

- If torch-ccl is built without oneCCL and use oneCCL in system, dynamic link oneCCl from oneAPI basekit:
- Dynamic link oneCCL from the oneAPI basekit:

```bash
source <ONEAPI_ROOT>/ccl/latest/env/vars.sh
source <ONEAPI_ROOT>/mpi/latest/env/vars.sh
```

Note: Make sure you have installed [basekit](https://www.intel.com/content/www/us/en/developer/tools/oneapi/toolkits.html#base-kit) when using Intel® oneCCL Bindings for Pytorch\* on Intel® GPUs. If the basekit is installed with a package manager, <ONEAPI_ROOT> is `/opt/intel/oneapi`.
@@ -148,6 +104,7 @@ Dynamic link oneCCL and Intel MPI libraries:
source $(python -c "import oneccl_bindings_for_pytorch as torch_ccl;print(torch_ccl.cwd)")/env/setvars.sh
# Or
source <ONEAPI_ROOT>/ccl/latest/env/vars.sh
source <ONEAPI_ROOT>/mpi/latest/env/vars.sh
```

`Example_DDP.py`
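
The body of `Example_DDP.py` is not rendered in this view. Below is a minimal sketch of what such a script typically looks like — the toy model, the single-node environment defaults, and the one-device-per-process mapping are illustrative assumptions, not the file's actual content.

```python
import os
import torch
import torch.nn as nn
import torch.distributed as dist
import intel_extension_for_pytorch  # noqa: F401 -- registers the XPU device
import oneccl_bindings_for_pytorch  # noqa: F401 -- registers the "ccl" backend

# Illustrative single-node defaults; a real launcher (e.g. mpirun) provides these.
os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
os.environ.setdefault("MASTER_PORT", "29500")
os.environ.setdefault("RANK", os.environ.get("PMI_RANK", "0"))
os.environ.setdefault("WORLD_SIZE", os.environ.get("PMI_SIZE", "1"))

dist.init_process_group(backend="ccl")
rank = dist.get_rank()

device = f"xpu:{rank}"               # assumption: one XPU device per process
model = nn.Linear(16, 4).to(device)  # toy stand-in for a real model
ddp_model = nn.parallel.DistributedDataParallel(model)

loss = ddp_model(torch.randn(8, 16, device=device)).sum()
loss.backward()                      # gradients are allreduced through oneCCL
dist.destroy_process_group()
```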
10 changes: 5 additions & 5 deletions xpu/2.1.30+xpu/_sources/tutorials/features/ipex_log.md.txt
@@ -37,8 +37,8 @@ All the usage are defined in `utils/LogUtils.h`. Currently Intel® Extension for
You can use `IPEX_XXX_LOG`, where XXX represents the log level mentioned above. There are four parameters defined for the simple log:
- Log component, representing which part of Intel® Extension for PyTorch\* this log belongs to.
- Log sub component; input an empty string ("") for general usage. For `SYNGRAPH` you can add any log sub component.
- Log message template format string.
- Log name.
- Log message template format string, the same as a format string in the fmt library; `{}` is used as a placeholder for format arguments.
- Log arguments for the template format string; the number of arguments should match the number of `{}` placeholders.

Below is an example of using the simple log inside the abs kernel:

@@ -48,14 +48,14 @@ IPEX_INFO_LOG("OPS", "", "Add a log for inside ops {}", "abs");

```
### Event Log
Event log is used for recording a whole event, such as an operator calculation. The whole event is identified by an unique `event_id`. You can also mark each step by using `step_id`. Use `IPEX_XXX_EVENT_END()` to complete the logging of the whole event.
Event log is used for recording a whole event, such as an operator calculation. The whole event is identified by a unique `event_id`. You can also mark each step by using a `step_id`. Use `IPEX_XXX_EVENT_END()` to complete the logging of the whole event. `XXX` represents the log level mentioned above and is used as the log level for all logs within a single log event.

Below is an example of using the event log:

```c++
IPEX_EVENT_END("OPS", "", "record_avg_pool", "start", "Here record the time start with arg:{}", arg);
IPEX_EVENT_LOG("OPS", "", "record_avg_pool", "start", "Here record the time start with arg:{}", arg);
prepare_data();
IPEX_EVENT_END("OPS", "", "record_avg_pool", "data_prepare_finish", "Here record the data_prepare_finish with arg:{}", arg);
IPEX_EVENT_LOG("OPS", "", "record_avg_pool", "data_prepare_finish", "Here record the data_prepare_finish with arg:{}", arg);
avg_pool();
IPEX_INFO_EVENT_END("OPS", "", "record_avg_pool", "finish conv", "Here record the end");
```
@@ -14,7 +14,7 @@ Intel® Extension for PyTorch\* now empowers users to seamlessly harness graph c
- `intel_extension_for_pytorch` : > v2.1.10
- `triton` : [v2.1.0](https://github.com/intel/intel-xpu-backend-for-triton/releases/tag/v2.1.0) with Intel® XPU Backend for Triton* backend enabled.

Follow [Intel® Extension for PyTorch\* Installation](https://intel.github.io/intel-extension-for-pytorch/xpu/2.1.30+xpu/tutorials/installation.html) to install `torch` and `intel_extension_for_pytorch` firstly.
Follow [Intel® Extension for PyTorch\* Installation](https://intel.github.io/intel-extension-for-pytorch/index.html#installation?platform=gpu&version=v2.1.30%2bxpu) to install `torch` and `intel_extension_for_pytorch` first.

Then install the [Intel® XPU Backend for Triton\* backend](https://github.com/intel/intel-xpu-backend-for-triton) for the `triton` package. You may install it via a prebuilt wheel package or build it from source. We recommend installing via the prebuilt package:

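(The exact prebuilt-package command is truncated in this view.) Once `triton` with the XPU backend is installed, graph compilation goes through the standard `torch.compile` entry point. A minimal sketch, assuming a toy model on an `xpu` device:

```python
import torch
import intel_extension_for_pytorch  # noqa: F401 -- enables the XPU device

# Toy model for illustration only.
model = torch.nn.Sequential(torch.nn.Linear(64, 64), torch.nn.ReLU()).to("xpu")
compiled_model = torch.compile(model)  # inductor is the default backend

x = torch.randn(8, 64, device="xpu")
with torch.no_grad():
    y = compiled_model(x)  # first call triggers Triton kernel compilation
```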
2 changes: 1 addition & 1 deletion xpu/2.1.30+xpu/_sources/tutorials/getting_started.md.txt
@@ -1,6 +1,6 @@
# Quick Start

The following instructions assume you have installed the Intel® Extension for PyTorch\*. For installation instructions, refer to [Installation](../../../index.html#installation?platform=gpu&version=v2.1.30%2Bxpu).
The following instructions assume you have installed the Intel® Extension for PyTorch\*. For installation instructions, refer to [Installation](https://intel.github.io/intel-extension-for-pytorch/index.html#installation?platform=gpu&version=v2.1.30%2bxpu).

To start using the Intel® Extension for PyTorch\* in your code, you need to make the following changes:

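The list of changes is truncated in this view; as an illustration, the typical inference pattern looks like the following minimal sketch (the model and input are placeholders):

```python
import torch
import intel_extension_for_pytorch as ipex

model = torch.nn.Linear(128, 10)  # placeholder model
data = torch.rand(1, 128)         # placeholder input

model.eval()
model = model.to("xpu")
data = data.to("xpu")
model = ipex.optimize(model, dtype=torch.float32)

with torch.no_grad():
    output = model(data)
```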
2 changes: 1 addition & 1 deletion xpu/2.1.30+xpu/_sources/tutorials/installation.rst.txt
@@ -1,7 +1,7 @@
Installation
============

Select your preferences and follow the installation instructions provided on the `Installation page <../../../index.html#installation?platform=gpu&version=v2.1.30%2Bxpu>`_.
Select your preferences and follow the installation instructions provided on the `Installation page <https://intel.github.io/intel-extension-for-pytorch/index.html#installation?platform=gpu&version=v2.1.30%2bxpu>`_.

After successful installation, refer to the `Quick Start <getting_started.md>`_ and `Examples <examples.md>`_ sections to start using the extension in your code.

2 changes: 1 addition & 1 deletion xpu/2.1.30+xpu/_sources/tutorials/introduction.rst.txt
@@ -9,7 +9,7 @@ For the detailed list of supported features and usage instructions, refer to `Fe

Get Started
-----------
- `Installation <../../../index.html#installation?platform=gpu&version=v2.1.30%2Bxpu>`_
- `Installation <https://intel.github.io/intel-extension-for-pytorch/index.html#installation?platform=gpu&version=v2.1.30%2bxpu>`_
- `Quick Start <getting_started.md>`_
- `Examples <examples.md>`_

2 changes: 1 addition & 1 deletion xpu/2.1.30+xpu/genindex.html
@@ -350,7 +350,7 @@ <h2 id="X">X</h2>
Built with <a href="https://www.sphinx-doc.org/">Sphinx</a> using a
<a href="https://github.com/readthedocs/sphinx_rtd_theme">theme</a>
provided by <a href="https://readthedocs.org">Read the Docs</a>.
<jinja2.runtime.BlockReference object at 0x7f5a22a1afa0>
<jinja2.runtime.BlockReference object at 0x7f644cc8d2e0>


2 changes: 1 addition & 1 deletion xpu/2.1.30+xpu/index.html
@@ -175,7 +175,7 @@ <h2>Support<a class="headerlink" href="#support" title="Permalink to this headin
Built with <a href="https://www.sphinx-doc.org/">Sphinx</a> using a
<a href="https://github.com/readthedocs/sphinx_rtd_theme">theme</a>
provided by <a href="https://readthedocs.org">Read the Docs</a>.
<jinja2.runtime.BlockReference object at 0x7f5a207313d0>
<jinja2.runtime.BlockReference object at 0x7f644f1f47c0>


2 changes: 1 addition & 1 deletion xpu/2.1.30+xpu/search.html
@@ -127,7 +127,7 @@
Built with <a href="https://www.sphinx-doc.org/">Sphinx</a> using a
<a href="https://github.com/readthedocs/sphinx_rtd_theme">theme</a>
provided by <a href="https://readthedocs.org">Read the Docs</a>.
<jinja2.runtime.BlockReference object at 0x7f5a24fe2d60>
<jinja2.runtime.BlockReference object at 0x7f644cd3f6a0>


2 changes: 1 addition & 1 deletion xpu/2.1.30+xpu/searchindex.js

Large diffs are not rendered by default.

2 changes: 1 addition & 1 deletion xpu/2.1.30+xpu/tutorials/api_doc.html
@@ -1181,7 +1181,7 @@ <h2>C++ API<a class="headerlink" href="#c-api" title="Permalink to this heading"
Built with <a href="https://www.sphinx-doc.org/">Sphinx</a> using a
<a href="https://github.com/readthedocs/sphinx_rtd_theme">theme</a>
provided by <a href="https://readthedocs.org">Read the Docs</a>.
<jinja2.runtime.BlockReference object at 0x7f5a207d2070>
<jinja2.runtime.BlockReference object at 0x7f644f1f9eb0>


2 changes: 1 addition & 1 deletion xpu/2.1.30+xpu/tutorials/blogs_publications.html
@@ -162,7 +162,7 @@ <h1>Blogs &amp; Publications<a class="headerlink" href="#blogs-publications" tit
Built with <a href="https://www.sphinx-doc.org/">Sphinx</a> using a
<a href="https://github.com/readthedocs/sphinx_rtd_theme">theme</a>
provided by <a href="https://readthedocs.org">Read the Docs</a>.
<jinja2.runtime.BlockReference object at 0x7f5a207d2b50>
<jinja2.runtime.BlockReference object at 0x7f644f1f9d30>

