Merged

Changes from all commits (38 commits)
b1a6c94
Reuse hf_model list among tests to avoid slow loading
delock Oct 7, 2023
e68ba13
try to debug test skip
delock Oct 7, 2023
55e0e2a
another attempt to print test failure
delock Oct 7, 2023
d65d713
another attempt
delock Oct 7, 2023
951a363
more attempts to print skip reason
delock Oct 7, 2023
6ad85a2
revert changes that are temporary
delock Oct 7, 2023
0968632
remove extra flag for pytest
delock Oct 7, 2023
324dfb5
add a dummy test to test pytest
delock Oct 8, 2023
6ab32f8
test skip message
delock Oct 8, 2023
b2c0092
put old test and temp test together to compare
delock Oct 8, 2023
d995070
try to find out the reason skip messages are not printed
delock Oct 8, 2023
4ab569c
comment all skips
delock Oct 8, 2023
f522aa7
check skip in common.py
delock Oct 8, 2023
7873048
revert last commits
delock Oct 8, 2023
ec33d1b
shorten name to show skip message
delock Oct 8, 2023
893aaf4
change test name
delock Oct 8, 2023
247870a
expand number of columns to 120 when running pytest
delock Oct 8, 2023
16dfe71
detect deepspeed installation
delock Oct 8, 2023
32a57bf
add test code for environment
delock Oct 8, 2023
707536b
change pytorch version 2.1.0==>2.0.1
delock Oct 8, 2023
33344b5
add py-cpuinfo as requirements to dev
delock Oct 8, 2023
887656e
install py-cpuinfo manually
delock Oct 8, 2023
6963190
Change COLUMNS to 140 to allow display of pytest skip message
delock Oct 8, 2023
06fb34f
Merge branch 'gma/fix_cpu_inference' into gma/fix_cpu_inference_local
delock Oct 12, 2023
ffc8475
pin pytorch to 2.0.1
delock Oct 12, 2023
8fdd1f6
add pip list before install deepspeed
delock Oct 12, 2023
69707b4
install cpuinfo before install deepspeed
delock Oct 12, 2023
4f4f316
change workflow to work with pytorch 2.1
delock Oct 20, 2023
83bd562
add torch install to CI workflow
delock Oct 20, 2023
275bb65
install py-cpuinfo
delock Oct 20, 2023
fe600c0
enforce autotp test on single socket instance
delock Oct 20, 2023
858330c
enforce 2 ranks in cpu autotp tests
delock Oct 23, 2023
c05b6e5
enable tests that can only run on torch 2.1 or above
delock Oct 23, 2023
af20c6a
make build faster
delock Oct 23, 2023
643d99c
remove -j make option
delock Oct 23, 2023
1e01c67
add back skip for codegen
delock Oct 23, 2023
c9fc498
check UT result
delock Oct 24, 2023
666adf0
update tutorial
delock Oct 24, 2023
15 changes: 11 additions & 4 deletions .github/workflows/cpu-inference.yml
@@ -39,8 +39,14 @@ jobs:

       - name: Install oneCCL Bindings for PyTorch
         run: |
+          pip install torch
           python -m pip install intel_extension_for_pytorch
-          python -m pip install oneccl_bind_pt==2.0 -f https://developer.intel.com/ipex-whl-stable-cpu
+          python -m pip install oneccl_bind_pt -f https://developer.intel.com/ipex-whl-stable-cpu
+          pip install py-cpuinfo
+          # check installed version
+          pip list |grep \\\<torch\\\>
+          pip list |grep intel-extension-for-pytorch
+          pip list |grep oneccl-bind-pt

       - name: Install oneCCL
         run: |
@@ -79,6 +85,7 @@ jobs:
python -c "import torch;import intel_extension_for_pytorch as ipex;import oneccl_bindings_for_pytorch;print('done')"
python -c "import deepspeed;from deepspeed.accelerator import get_accelerator;print(get_accelerator().device_name());print(get_accelerator().is_available())"
unset TORCH_CUDA_ARCH_LIST # only jit compile for current arch
cd tests
COLUMNS=140 TRANSFORMERS_CACHE=~/tmp/transformers_cache/ TORCH_EXTENSIONS_DIR=./torch-extensions pytest -m 'seq_inference' unit/
COLUMNS=140 TRANSFORMERS_CACHE=~/tmp/transformers_cache/ TORCH_EXTENSIONS_DIR=./torch-extensions pytest -m 'inference_ops' -m 'inference' unit/
cd tests
# LOCAL_SIZE=2 enforce CPU to report 2 devices, this helps run the test on github default runner
LOCAL_SIZE=2 COLUMNS=240 TRANSFORMERS_CACHE=~/tmp/transformers_cache/ TORCH_EXTENSIONS_DIR=./torch-extensions pytest -m 'seq_inference' unit/
LOCAL_SIZE=2 COLUMNS=240 TRANSFORMERS_CACHE=~/tmp/transformers_cache/ TORCH_EXTENSIONS_DIR=./torch-extensions pytest -m 'inference_ops' -m 'inference' unit/
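The LOCAL_SIZE=2 override is what lets the two-rank autotp tests fit on a single-socket GitHub-hosted runner. As a quick local check, a minimal sketch (assuming a DeepSpeed install with the CPU accelerator active, which honors LOCAL_SIZE as the comment in the hunk describes):

# LOCAL_SIZE caps how many devices the CPU accelerator reports,
# so two ranks can be scheduled on one socket.
LOCAL_SIZE=2 python -c "from deepspeed.accelerator import get_accelerator; print(get_accelerator().device_name(), get_accelerator().device_count())"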
2 changes: 1 addition & 1 deletion docs/_tutorials/accelerator-abstraction-interface.md
@@ -96,7 +96,7 @@ To run DeepSpeed model on CPU, use the following steps to prepare environment:

 ```
 python -m pip install intel_extension_for_pytorch
-python -m pip install oneccl_bind_pt==2.0 -f https://developer.intel.com/ipex-whl-stable-cpu
+python -m pip install oneccl_bind_pt -f https://developer.intel.com/ipex-whl-stable-cpu
 git clone https://github.com/oneapi-src/oneCCL
 cd oneCCL
 mkdir build
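The hunk is truncated at this point. For orientation, a sketch of the remaining oneCCL build steps under the project's standard CMake flow (an assumption about the elided tutorial text, not part of this diff):

cd build
cmake ..
make -j install                   # build and install oneCCL into ./_install (assumed default prefix)
source ./_install/env/setvars.sh  # expose the oneCCL runtime to oneccl_bind_pt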