Jingxu10/example restructure main (#2958)
* restructure example directories

* add jupyter notebook of IntelPytorch Inference AMX BF16 and INT8

* mv 2 examples from onesample (#2787)

* mv 2 examples from onesample

* fix license format

* add jupyter notebook readme

* move oneAPI IPEX inference sample optimize (#2798)

* clear output of notebook

* Update example.

Add example 'complete flag'

* update readme, remove aikit and refer ipex installation guide

* remove installation part in jupyter notebook

* remove installation part in jupyter notebook and add kernel select

* each sample uses a conda env separately

* Update cpu example jupyter notebook README

* rm install jupyter and refer to readme, fix table format

* Create IPEX_Getting_Started.ipynb

* Create IntelPytorch_Quantization.ipynb

* remove training examples

* fix lint issues

---------

Co-authored-by: Zheng, Zhaoqiong <zhaoqiong.zheng@intel.com>
Co-authored-by: Neo Zhang Jianyu <jianyu.zhang@intel.com>
Co-authored-by: xiguiw <111278656+xiguiw@users.noreply.github.com>
Co-authored-by: Wang, Xigui <xigui.wang@intel.com>
Co-authored-by: yqiu-intel <113460727+YuningQiu@users.noreply.github.com>
6 people authored Jun 4, 2024
1 parent 57321eb commit 27f9974
Showing 33 changed files with 3,059 additions and 199 deletions.
45 changes: 0 additions & 45 deletions docs/tutorials/examples.md
@@ -25,51 +25,6 @@ Before running these examples, please note the following:

### Training

#### Single-instance Training

To use Intel® Extension for PyTorch\* for training, you need to make the following changes in your code:

1. Import `intel_extension_for_pytorch` as `ipex`.
2. Invoke the `ipex.optimize` function to apply optimizations to the model and optimizer objects, as shown below:


```python
...
import torch
import intel_extension_for_pytorch as ipex
...
model = Model()
criterion = ...
optimizer = ...
model.train()
# Choose one of the two ipex.optimize calls below.
# For Float32
model, optimizer = ipex.optimize(model, optimizer=optimizer)
# For BFloat16
model, optimizer = ipex.optimize(model, optimizer=optimizer, dtype=torch.bfloat16)
# Optionally invoke the line below to enable the beta torch.compile feature
model = torch.compile(model, backend="ipex")
...
optimizer.zero_grad()
output = model(data)
...
```

Below you can find complete code examples demonstrating how to use the extension for training with different data types:

##### Float32

**Note:** You need to install the `torchvision` Python package to run the following example.

[//]: # (marker_train_single_fp32_complete)
[//]: # (marker_train_single_fp32_complete)

##### BFloat16

**Note:** You need to install the `torchvision` Python package to run the following example.

[//]: # (marker_train_single_bf16_complete)
[//]: # (marker_train_single_bf16_complete)
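The `[//]: #` marker pairs above are build-time placeholders that the docs tooling replaces with the full `torchvision`-based examples. As a minimal, self-contained sketch of the same single-instance training flow (the tiny linear model and synthetic data here are illustrative stand-ins, not the official example; `ipex.optimize` is applied exactly as in the snippet above, and the import is guarded so the sketch also runs where the extension is not installed):

```python
# Single-instance FP32 training sketch. The toy model and synthetic data
# below are illustrative assumptions, not the official torchvision example.
import torch

try:
    import intel_extension_for_pytorch as ipex
except ImportError:
    ipex = None  # sketch still runs without the extension installed

torch.manual_seed(0)
X = torch.randn(64, 4)                           # synthetic inputs
y = X @ torch.tensor([[1.0], [2.0], [3.0], [4.0]])  # synthetic targets

model = torch.nn.Linear(4, 1)
criterion = torch.nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.05)
model.train()

if ipex is not None:
    # For Float32: apply the extension's optimizations to model and optimizer
    model, optimizer = ipex.optimize(model, optimizer=optimizer)

losses = []
for _ in range(50):
    optimizer.zero_grad()
    loss = criterion(model(X), y)
    loss.backward()
    optimizer.step()
    losses.append(loss.item())
```

Passing `dtype=torch.bfloat16` to `ipex.optimize` gives the BFloat16 variant of the same loop.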

#### Distributed Training

Distributed training with PyTorch DDP is accelerated by oneAPI Collective Communications Library Bindings for PyTorch\* (oneCCL Bindings for PyTorch\*). The extension supports FP32 and BF16 data types. More detailed information and examples are available at the [GitHub repo](https://github.com/intel/torch-ccl).
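A minimal sketch of that setup is below. The environment-variable names follow Intel MPI's `PMI_*` and torchrun's `RANK`/`WORLD_SIZE` conventions, and the heavyweight imports are deliberately placed behind a launcher check (an assumption of this sketch, not a requirement of the bindings) so the file is also loadable where `oneccl_bindings_for_pytorch` is not installed:

```python
# DDP initialization sketch for the oneCCL "ccl" backend.
# Launch with, e.g.: mpirun -n 2 python ddp_sketch.py
# Without a distributed launcher, only the helper below is defined.
import os


def ccl_rank_and_world_size(env):
    """Read rank/world size from MPI (PMI_*) or torchrun (RANK/WORLD_SIZE) vars."""
    rank = int(env.get("PMI_RANK", env.get("RANK", "0")))
    world_size = int(env.get("PMI_SIZE", env.get("WORLD_SIZE", "1")))
    return rank, world_size


if "PMI_RANK" in os.environ or "RANK" in os.environ:
    import torch
    import torch.distributed as dist
    import intel_extension_for_pytorch as ipex
    import oneccl_bindings_for_pytorch  # noqa: F401  registers the "ccl" backend

    os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
    os.environ.setdefault("MASTER_PORT", "29500")
    rank, world_size = ccl_rank_and_world_size(os.environ)
    dist.init_process_group("ccl", rank=rank, world_size=world_size)

    model = torch.nn.Linear(8, 1)  # stand-in for the real model
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    model.train()
    # BF16 optimization, as in the single-instance example
    model, optimizer = ipex.optimize(model, optimizer=optimizer, dtype=torch.bfloat16)
    model = torch.nn.parallel.DistributedDataParallel(model)
```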
