3020 Enhance what's new for transformer networks (#3019)
* [DLMED] add more doc

Signed-off-by: Nic Ma <nma@nvidia.com>

* [DLMED] edit change log

Signed-off-by: Nic Ma <nma@nvidia.com>

* [DLMED] add command to get transform backend

Signed-off-by: Nic Ma <nma@nvidia.com>

* [DLMED] enhance the doc

Signed-off-by: Nic Ma <nma@nvidia.com>
Nic-Ma authored Sep 24, 2021
1 parent aa4eb5d commit a9cd2d8
Showing 3 changed files with 3 additions and 3 deletions.
2 changes: 1 addition & 1 deletion CHANGELOG.md
@@ -32,7 +32,7 @@ and this project adheres to [Semantic Versioning](http://semver.org/spec/v2.0.0.
* Deprecated input argument `dimensions` and `ndims`, in favor of `spatial_dims`
* Updated the Sphinx-based documentation theme for better readability
* `NdarrayTensor` type is replaced by `NdarrayOrTensor` for simpler annotations
- * Attention-based network blocks now support both 2D and 3D inputs
+ * Self-attention-based network blocks now support both 2D and 3D inputs

### Removed
* The deprecated `TransformInverter`, in favor of `monai.transforms.InvertD`
2 changes: 1 addition & 1 deletion docs/source/highlights.md
@@ -58,7 +58,7 @@ transformations. These currently include, for example:


### 3. Transforms support both NumPy array and PyTorch Tensor (CPU or GPU accelerated)
- From MONAI v0.7 we introduced PyTorch `Tensor` based computation in transforms, many transforms already support both `numpy array` and `Tensor` data.
+ From MONAI v0.7 we introduced PyTorch `Tensor` based computation in transforms, many transforms already support both `NumPy array` and `Tensor` as input types and computational backends. To get the supported backends of every transform, please execute: `python monai/transforms/utils.py`.

To accelerate the transforms, a common approach is to leverage GPU parallel computation. Users can first convert the input data into a GPU Tensor with the `ToTensor` or `EnsureType` transform; subsequent transforms can then execute on the GPU via PyTorch `Tensor` APIs.
GPU transform tutorial is available at [Spleen fast training tutorial](https://github.com/Project-MONAI/tutorials/blob/master/acceleration/fast_training_tutorial.ipynb).
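The convert-once pattern described above can be sketched in plain Python. `DeviceArray`, `ensure_type`, `scale_intensity`, and `compose` below are hypothetical stand-ins for a device-pinned Tensor, MONAI's `EnsureType`/`ToTensor`, a downstream transform, and `monai.transforms.Compose`; this is an illustration of the idea, not MONAI's actual code.

```python
class DeviceArray:
    """Stand-in for a torch Tensor pinned to a device (illustrative only)."""
    def __init__(self, values, device="cpu"):
        self.values = list(values)
        self.device = device

    def __mul__(self, factor):
        # Elementwise ops produce a result on the same device,
        # just as real Tensor ops keep data on the GPU.
        return DeviceArray([v * factor for v in self.values], self.device)


def ensure_type(data, device="cuda"):
    """Convert plain data to a DeviceArray once (cf. EnsureType/ToTensor)."""
    return data if isinstance(data, DeviceArray) else DeviceArray(data, device)


def scale_intensity(data, factor=2):
    """A downstream transform; it never needs to know about devices."""
    return data * factor


def compose(*transforms):
    """Chain transforms left to right (cf. monai.transforms.Compose)."""
    def pipeline(x):
        for t in transforms:
            x = t(x)
        return x
    return pipeline


pipeline = compose(ensure_type, scale_intensity)
result = pipeline([1, 2, 3])
print(result.device, result.values)  # cuda [2, 4, 6]
```

The key design point the sketch shows: only the first transform in the chain deals with conversion and device placement; everything after it operates on whatever representation it receives.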
2 changes: 1 addition & 1 deletion docs/source/whatsnew_0_7.md
@@ -29,7 +29,7 @@ more](https://github.com/Project-MONAI/tutorials/blob/master/acceleration/fast_t

MONAI starts to roll out major usability enhancements for the
`monai.transforms` module. Many transforms are now supporting both NumPy and
- PyTorch, as input types and computational backends.
+ PyTorch, as input types and computational backends. To get the supported backends of every transform, please execute: `python monai/transforms/utils.py`.

One benefit of these enhancements is that the users can now better leverage the
GPUs for preprocessing. By transferring the input data onto GPU using
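The dual-backend idea in this hunk can be sketched without MONAI installed. `ScaleIntensity` and `report_backends` below are hypothetical names illustrating how a transform might declare its supported backends and serve both NumPy arrays and torch Tensors through one duck-typed code path; the real listing comes from running `python monai/transforms/utils.py`.

```python
class ScaleIntensity:
    """Multiply input values by a factor (illustrative, not MONAI's class)."""
    backend = ["numpy", "torch"]  # declared computational backends

    def __init__(self, factor=2.0):
        self.factor = factor

    def __call__(self, data):
        # NumPy arrays and torch Tensors both overload `*`, so a single
        # code path can cover both backends via duck typing.
        return data * self.factor


def report_backends(transforms):
    """Map each transform to its declared backends, in the spirit of the
    table printed by `python monai/transforms/utils.py`."""
    return {type(t).__name__: t.backend for t in transforms}


print(report_backends([ScaleIntensity()]))  # {'ScaleIntensity': ['numpy', 'torch']}
```

Declaring backends as class-level metadata lets tooling report coverage across the whole transforms module without instantiating or running anything on real data.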
