
3020 Enhance what's new for transformer networks #3019

Merged
merged 10 commits on Sep 24, 2021
2 changes: 1 addition & 1 deletion CHANGELOG.md
@@ -32,7 +32,7 @@ and this project adheres to [Semantic Versioning](http://semver.org/spec/v2.0.0.
* Deprecated input argument `dimensions` and `ndims`, in favor of `spatial_dims`
* Updated the Sphinx-based documentation theme for better readability
* `NdarrayTensor` type is replaced by `NdarrayOrTensor` for simpler annotations
- * Attention-based network blocks now support both 2D and 3D inputs
+ * Self-attention-based network blocks now support both 2D and 3D inputs

### Removed
* The deprecated `TransformInverter`, in favor of `monai.transforms.InvertD`
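The CHANGELOG entry above notes that self-attention blocks now handle both 2D and 3D inputs. The usual way to get that dimension-agnostic behavior, sketched below with NumPy, is to flatten every spatial axis into one sequence axis so the same attention arithmetic serves either case. This is an illustrative sketch only, not MONAI's actual block implementation; the function names and shapes are made up.

```python
import numpy as np

def self_attention(x: np.ndarray) -> np.ndarray:
    """Scaled dot-product self-attention over a (seq_len, channels) array.

    Illustrative sketch: real blocks add learned Q/K/V projections,
    multiple heads, and a batch dimension.
    """
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)                 # (seq, seq) similarities
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ x                            # (seq, channels)

def attend_spatial(img: np.ndarray) -> np.ndarray:
    """Apply attention to a (C, H, W) or (C, H, W, D) image by flattening
    all spatial axes into a single sequence dimension."""
    c = img.shape[0]
    seq = img.reshape(c, -1).T       # (n_voxels, C), whether 2D or 3D
    out = self_attention(seq)
    return out.T.reshape(img.shape)  # restore the original spatial shape

out_2d = attend_spatial(np.random.rand(8, 4, 4))     # 2D input
out_3d = attend_spatial(np.random.rand(8, 4, 4, 4))  # 3D input
```

Because the flatten/unflatten pair is the only shape-dependent step, the attention core never needs to know how many spatial dimensions the image had.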
2 changes: 1 addition & 1 deletion docs/source/highlights.md
@@ -58,7 +58,7 @@ transformations. These currently include, for example:


### 3. Transforms support both NumPy array and PyTorch Tensor (CPU or GPU accelerated)
- From MONAI v0.7 we introduced PyTorch `Tensor` based computation in transforms, many transforms already support both `numpy array` and `Tensor` data.
+ From MONAI v0.7 we introduced PyTorch `Tensor` based computation in transforms, many transforms already support both `NumPy array` and `Tensor` as input types and computational backends. To get the supported backends of every transform, please execute: `python monai/transforms/utils.py`.

To accelerate the transforms, a common approach is to leverage GPU parallel-computation. Users can first convert input data into GPU Tensor by `ToTensor` or `EnsureType` transform, then the following transforms can execute on GPU based on PyTorch `Tensor` APIs.
GPU transform tutorial is available at [Spleen fast training tutorial](https://github.com/Project-MONAI/tutorials/blob/master/acceleration/fast_training_tutorial.ipynb).
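The duck typing that lets one transform serve both backends can be sketched in a few lines. The class below is an illustrative pattern, not MONAI's actual `ScaleIntensity` implementation, and the `backend` attribute is a made-up stand-in for the kind of metadata the docs' utility script reports.

```python
import numpy as np

class ScaleIntensity:
    """Multiply intensities by a constant factor.

    Because the body uses only operators that NumPy arrays and PyTorch
    Tensors both implement, the output stays on the input's backend:
    a GPU Tensor in, a GPU Tensor out, with no host round-trip.
    (Hypothetical sketch, not MONAI's actual transform.)
    """
    backend = ("numpy", "torch")  # advertised supported backends

    def __init__(self, factor: float) -> None:
        self.factor = factor

    def __call__(self, img):
        # `img * factor` dispatches to whatever type `img` is.
        return img * self.factor

scaled = ScaleIntensity(2.0)(np.array([1.0, 2.0, 3.0]))
```

The same call with a CUDA `torch.Tensor` would execute entirely on the GPU, which is the point of the acceleration approach described above.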
2 changes: 1 addition & 1 deletion docs/source/whatsnew_0_7.md
@@ -29,7 +29,7 @@ more](https://github.com/Project-MONAI/tutorials/blob/master/acceleration/fast_t

MONAI starts to roll out major usability enhancements for the
`monai.transforms` module. Many transforms are now supporting both NumPy and
- PyTorch, as input types and computational backends.
+ PyTorch, as input types and computational backends. To get the supported backends of every transform, please execute: `python monai/transforms/utils.py`.

One benefit of these enhancements is that the users can now better leverage the
GPUs for preprocessing. By transferring the input data onto GPU using
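The GPU-preprocessing workflow this section describes, converting input data once and then running every later transform on the device, can be sketched as a minimal compose chain. The `Compose` class and the lambda steps below are hypothetical stand-ins, not MONAI's API; demonstrated here with NumPy, but since each step uses only shared operators, the same chain would run unchanged on a CUDA Tensor.

```python
import numpy as np

class Compose:
    """Apply a list of callables in order: a minimal stand-in for a
    transform pipeline (hypothetical, not MONAI's Compose)."""
    def __init__(self, transforms):
        self.transforms = list(transforms)

    def __call__(self, data):
        for t in self.transforms:
            data = t(data)  # each step keeps data on its current backend
        return data

# Backend-agnostic steps: only operators ndarray and Tensor both support.
pipeline = Compose([
    lambda x: x - x.mean(),          # center intensities
    lambda x: x / (x.std() + 1e-8),  # normalize to unit variance
])
normalized = pipeline(np.arange(6, dtype=np.float64))
```

With real MONAI transforms, the conversion step at the head of the chain (the `ToTensor` or `EnsureType` transform mentioned above) is what moves the data to the GPU; everything after it then stays there.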