Commit

Merge branch 'master' into devel/classification
SkafteNicki committed Aug 15, 2022
2 parents e93d66e + 13f19e5 commit d9e2030
Showing 4 changed files with 9 additions and 9 deletions.
2 changes: 1 addition & 1 deletion .github/workflows/ci_install-pkg.yml
@@ -40,7 +40,7 @@ jobs:
     strategy:
       fail-fast: false
       matrix:
-        os: [ubuntu-20.04, macOS-11 , windows-2019]
+        os: [ubuntu-20.04, macOS-11 , windows-2022]
         python-version: [3.8]
     timeout-minutes: 10

4 changes: 2 additions & 2 deletions .github/workflows/ci_integrate.yml
@@ -24,14 +24,14 @@ jobs:
     strategy:
       fail-fast: false
       matrix:
-        os: [ubuntu-20.04, macOS-11, windows-2019]
+        os: [ubuntu-20.04, macOS-11, windows-2022]
         python-version: ['3.7', '3.10']
         requires: ['oldest', 'latest']
         exclude:
           - {python-version: '3.7', requires: 'latest'}
           - {python-version: '3.9', requires: 'oldest'}
           - {python-version: '3.10', requires: 'oldest'}
-          - {python-version: '3.10', os: 'windows-2019'} # todo: https://discuss.pytorch.org/t/numpy-is-not-available-error/146192
+          - {python-version: '3.10', os: 'windows-2022'} # todo: https://discuss.pytorch.org/t/numpy-is-not-available-error/146192
     env:
       PYTEST_ARTEFACT: test-results-${{ matrix.os }}-py${{ matrix.python-version }}-${{ matrix.requires }}.xml
       PYTORCH_URL: https://download.pytorch.org/whl/cpu/torch_stable.html
4 changes: 2 additions & 2 deletions .github/workflows/ci_test-full.yml
@@ -27,13 +27,13 @@ jobs:
     strategy:
       fail-fast: false
       matrix:
-        os: [ubuntu-20.04, macOS-11, windows-2019]
+        os: [ubuntu-20.04, macOS-11, windows-2022]
         python-version: ['3.7', '3.8', '3.10']
         requires: ['oldest', 'latest']
         exclude:
           - {python-version: '3.7', requires: 'latest'}
           - {python-version: '3.10', requires: 'oldest'}
-          - {python-version: '3.10', os: 'windows-2019'} # todo: https://discuss.pytorch.org/t/numpy-is-not-available-error/146192
+          - {python-version: '3.10', os: 'windows-2022'} # todo: https://discuss.pytorch.org/t/numpy-is-not-available-error/146192
     env:
       PYTEST_ARTEFACT: test-results-${{ matrix.os }}-py${{ matrix.python-version }}-${{ matrix.requires }}.xml
       PYTORCH_URL: https://download.pytorch.org/whl/cpu/torch_stable.html
8 changes: 4 additions & 4 deletions docs/source/pages/overview.rst
@@ -393,13 +393,13 @@ If you are running in a distributed environment, TorchMetrics will automatically
 synchronization for you. However, the following three keyword arguments can be given to any metric class for
 further control over the distributed aggregation:

-- ``dist_sync_on_step``: This argument is ``bool`` that indicates if the metric should syncronize between
+- ``dist_sync_on_step``: This argument is ``bool`` that indicates if the metric should synchronize between
   different devices every time ``forward`` is called. Setting this to ``True`` is in general not recommended
-  as syncronization is an expensive operation to do after each batch.
+  as synchronization is an expensive operation to do after each batch.

-- ``process_group``: By default we syncronize across the *world* i.e. all proceses being computed on. You
+- ``process_group``: By default we synchronize across the *world* i.e. all processes being computed on. You
   can provide an ``torch._C._distributed_c10d.ProcessGroup`` in this argument to specify exactly what
-  devices should be syncronized over.
+  devices should be synchronized over.

 - ``dist_sync_fn``: By default we use :func:`torch.distributed.all_gather` to perform the synchronization between
   devices. Provide another callable function for this argument to perform custom distributed synchronization.
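
For reference, a minimal usage sketch (not part of this commit) of how the three keyword arguments described in the documentation above could be passed to a metric class; it assumes a torchmetrics release contemporary with this commit, in which ``Accuracy`` forwards these ``Metric`` keyword arguments:

.. code-block:: python

    # Sketch only: the three kwargs are the documented Metric arguments; the
    # choice of Accuracy and the dummy tensors are illustrative assumptions.
    import torch
    import torchmetrics

    metric = torchmetrics.Accuracy(
        dist_sync_on_step=False,  # True would synchronize after every forward(), which is costly
        process_group=None,       # None -> synchronize across the default *world* (all processes)
        dist_sync_fn=None,        # None -> fall back to torch.distributed.all_gather
    )

    preds = torch.rand(8, 2).softmax(dim=-1)  # dummy per-class probabilities
    target = torch.randint(2, (8,))           # dummy integer targets
    metric.update(preds, target)
    print(metric.compute())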

