temp. resume github premerge tests #2853

Merged · 3 commits · Aug 26, 2021
130 changes: 130 additions & 0 deletions .github/workflows/pythonapp-gpu.yml
@@ -0,0 +1,130 @@
name: build-gpu

on:
  # quick tests for pull requests and the releasing branches
  push:
    branches:
      - dev
      - main
      - releasing/*
  pull_request:

concurrency:
  # automatically cancel the previously triggered workflows when there's a newer version
  group: build-gpu-${{ github.event.pull_request.number || github.ref }}
  cancel-in-progress: true

jobs:
  GPU-quick-py3:  # GPU with full dependencies
    if: github.repository == 'Project-MONAI/MONAI'
    strategy:
      matrix:
        environment:
          - "PT16+CUDA110"
          - "PT17+CUDA102"
          - "PT17+CUDA110"
          - "PT18+CUDA102"
          - "PT19+CUDA113"
          - "PT19+CUDA102"
        include:
          - environment: PT16+CUDA110
            # we explicitly set pytorch to -h to avoid pip install error
            pytorch: "-h"
            base: "nvcr.io/nvidia/pytorch:20.07-py3"
          - environment: PT17+CUDA102
            pytorch: "torch==1.7.1 torchvision==0.8.2"
            base: "nvcr.io/nvidia/cuda:10.2-devel-ubuntu18.04"
          - environment: PT17+CUDA110
            # we explicitly set pytorch to -h to avoid pip install error
            pytorch: "-h"
            base: "nvcr.io/nvidia/pytorch:20.09-py3"
          - environment: PT18+CUDA102
            pytorch: "torch==1.8.1 torchvision==0.9.1"
            base: "nvcr.io/nvidia/cuda:10.2-devel-ubuntu18.04"
          - environment: PT19+CUDA113
            # we explicitly set pytorch to -h to avoid pip install error
            pytorch: "-h"
            base: "nvcr.io/nvidia/pytorch:21.06-py3"
          - environment: PT19+CUDA102
            pytorch: "torch==1.9.0 torchvision==0.10.0"
            base: "nvcr.io/nvidia/cuda:10.2-devel-ubuntu18.04"
    container:
      image: ${{ matrix.base }}
      options: --gpus all
    runs-on: [self-hosted, linux, x64, common]
    steps:
      - uses: actions/checkout@v2
      - name: apt install
        run: |
          if [ ${{ matrix.environment }} = "PT17+CUDA102" ] || \
            [ ${{ matrix.environment }} = "PT18+CUDA102" ] || \
            [ ${{ matrix.environment }} = "PT19+CUDA102" ]
          then
            PYVER=3.6 PYSFX=3 DISTUTILS=python3-distutils && \
            apt-get update && apt-get install -y --no-install-recommends \
              curl \
              pkg-config \
              python$PYVER \
              python$PYVER-dev \
              python$PYSFX-pip \
              $DISTUTILS \
              rsync \
              swig \
              unzip \
              zip \
              zlib1g-dev \
              libboost-locale-dev \
              libboost-program-options-dev \
              libboost-system-dev \
              libboost-thread-dev \
              libboost-test-dev \
              libgoogle-glog-dev \
              libjsoncpp-dev \
              cmake \
              git && \
            rm -rf /var/lib/apt/lists/* && \
            export PYTHONIOENCODING=utf-8 LC_ALL=C.UTF-8 && \
            rm -f /usr/bin/python && \
            rm -f /usr/bin/python`echo $PYVER | cut -c1-1` && \
            ln -s /usr/bin/python$PYVER /usr/bin/python && \
            ln -s /usr/bin/python$PYVER /usr/bin/python`echo $PYVER | cut -c1-1` &&
            curl -O https://bootstrap.pypa.io/get-pip.py && \
            python get-pip.py && \
            rm get-pip.py;
          fi
      - name: Install dependencies
        run: |
          which python
          python -m pip install --upgrade pip wheel
          python -m pip install ${{ matrix.pytorch }}
          python -m pip install -r requirements-dev.txt
          python -m pip list
      - name: Run quick tests (GPU)
        run: |
          git clone --depth 1 \
            https://github.com/Project-MONAI/MONAI-extra-test-data.git /MONAI-extra-test-data
          export MONAI_EXTRA_TEST_DATA="/MONAI-extra-test-data"
          nvidia-smi
          export LAUNCH_DELAY=$(python -c "import numpy; print(numpy.random.randint(30) * 10)")
          echo "Sleep $LAUNCH_DELAY"
          sleep $LAUNCH_DELAY
          export CUDA_VISIBLE_DEVICES=$(coverage run -m tests.utils)
          echo $CUDA_VISIBLE_DEVICES
          trap 'if pgrep python; then pkill python; fi;' ERR
          python -c $'import torch\na,b=torch.zeros(1,device="cuda:0"),torch.zeros(1,device="cuda:1");\nwhile True:print(a,b)' > /dev/null &
          python -c "import torch; print(torch.__version__); print('{} of GPUs available'.format(torch.cuda.device_count()))"
          python -c 'import torch; print(torch.rand(5, 3, device=torch.device("cuda:0")))'
          python -c "import monai; monai.config.print_config()"
          # build for the current self-hosted CI Tesla V100
          BUILD_MONAI=1 TORCH_CUDA_ARCH_LIST="7.0" ./runtests.sh --quick --unittests
          if [ ${{ matrix.environment }} = "PT19+CUDA102" ]; then
            # test the clang-format tool downloading once
            coverage run -m tests.clang_format_utils
          fi
          coverage xml
          if pgrep python; then pkill python; fi
        shell: bash
      - name: Upload coverage
        uses: codecov/codecov-action@v1
        with:
          file: ./coverage.xml
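
Note on the "Run quick tests (GPU)" step above: before the unit tests start, the step backgrounds a small Python process that allocates one tensor on each of the two visible GPUs and prints them in a loop, with output discarded via /dev/null; the ERR trap and the final pgrep/pkill line clean this process up again. Expanded from the python -c one-liner into readable form, it is equivalent to the following sketch (not a separate script in the repository):

    # readable expansion of the keep-alive one-liner in the workflow step
    import torch

    a = torch.zeros(1, device="cuda:0")
    b = torch.zeros(1, device="cuda:1")
    while True:
        print(a, b)  # stdout is redirected to /dev/null by the workflow

The randomized LAUNCH_DELAY (numpy.random.randint(30) * 10, i.e. 0 to 290 seconds in 10-second steps) presumably staggers the start-up of concurrent jobs on the shared self-hosted runners, and CUDA_VISIBLE_DEVICES is set from the output of coverage run -m tests.utils before the tests run.
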
3 changes: 2 additions & 1 deletion tests/test_gaussian_filter.py
@@ -16,7 +16,7 @@
 from parameterized import parameterized

 from monai.networks.layers import GaussianFilter
-from tests.utils import skip_if_quick
+from tests.utils import SkipIfBeforePyTorchVersion, skip_if_quick

 TEST_CASES = [[{"type": "erf", "gt": 2.0}], [{"type": "scalespace", "gt": 3.0}], [{"type": "sampled", "gt": 5.0}]]
 TEST_CASES_GPU = [
@@ -85,6 +85,7 @@ def code_to_run(self, input_args):
         )

     @parameterized.expand(TEST_CASES + TEST_CASES_GPU + TEST_CASES_3d)
+    @SkipIfBeforePyTorchVersion((1, 7))
     def test_train_quick(self, input_args):
         self.code_to_run(input_args)

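
The change to tests/test_gaussian_filter.py gates test_train_quick on PyTorch 1.7 or newer via the SkipIfBeforePyTorchVersion decorator imported from tests.utils. The decorator itself is not part of this diff; as a rough illustration of the pattern only (not MONAI's actual implementation), a version-gating decorator can be built on top of unittest.skipIf:

    # illustrative sketch only; MONAI's real SkipIfBeforePyTorchVersion lives in
    # tests/utils.py and may be implemented differently
    import unittest

    import torch


    def skip_if_before_pytorch_version(min_version):
        """Skip the decorated test when the installed PyTorch is older than min_version, e.g. (1, 7)."""
        installed = tuple(int(x) for x in torch.__version__.split("+")[0].split(".")[:2])
        return unittest.skipIf(
            installed < tuple(min_version),
            f"requires PyTorch >= {min_version}, found {torch.__version__}",
        )


    class DemoVersionGate(unittest.TestCase):
        @skip_if_before_pytorch_version((1, 7))
        def test_runs_only_on_recent_torch(self):
            self.assertEqual(torch.zeros(1).numel(), 1)

Comparing (major, minor) tuples rather than raw version strings avoids pitfalls such as "1.10" sorting before "1.7" lexicographically.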