Merge branch 'ncapens/version-1.2.0' into 'main'
Update the release version to 1.2.0

See merge request omniverse/warp!562
c0d1f1ed committed Jun 7, 2024
2 parents 5fcd593 + d83984f commit e8f9caf
Showing 7 changed files with 78 additions and 8 deletions.
2 changes: 1 addition & 1 deletion CHANGELOG.md
@@ -1,6 +1,6 @@
# CHANGELOG

-## [Upcoming Release] - 2024-??-??
+## [1.2.0] - 2024-06-06

- Add a not-a-number floating-point constant that can be used as `wp.NAN` or `wp.nan`.
- Add `wp.isnan()`, `wp.isinf()`, and `wp.isfinite()` for scalars, vectors, matrices, etc.
2 changes: 1 addition & 1 deletion VERSION.md
@@ -1 +1 @@
-1.1.1
+1.2.0
2 changes: 1 addition & 1 deletion exts/omni.warp.core/config/extension.toml
@@ -1,6 +1,6 @@
[package]
# Semantic Versioning is used: https://semver.org/
-version = "1.1.1"
+version = "1.2.0"
authors = ["NVIDIA"]
title = "Warp Core"
description="The core Warp Python module"
37 changes: 36 additions & 1 deletion exts/omni.warp.core/docs/CHANGELOG.md
@@ -1,8 +1,43 @@
# CHANGELOG

## [1.2.0] - 2024-06-06

- Add a not-a-number floating-point constant that can be used as `wp.NAN` or `wp.nan`.
- Add `wp.isnan()`, `wp.isinf()`, and `wp.isfinite()` for scalars, vectors, matrices, etc. (see the first sketch after this list)
- Improve kernel cache reuse by hashing just the local module constants. Previously, a
module's hash was affected by all `wp.constant()` variables declared in a Warp program.
- Revised the module compilation process to allow multiple processes to use the same kernel cache directory.
  Cached kernels are now stored in a hash-specific subdirectory.
- Add runtime checks for `wp.MarchingCubes` on field dimensions and size
- Fix memory leak in `wp.Mesh` BVH ([GH-225](https://github.com/NVIDIA/warp/issues/225))
- Use C++17 when building the Warp library and user kernels
- Increase PTX target architecture up to `sm_75` (from `sm_70`), enabling Turing ISA features
- Extended NanoVDB support (see `warp.Volume`):
- Add support for data-agnostic index grids, allocation at voxel granularity
- New `wp.volume_lookup_index()`, `wp.volume_sample_index()` and generic `wp.volume_sample()`/`wp.volume_lookup()`/`wp.volume_store()` kernel-level functions
- Zero-copy aliasing of in-memory grids, support for multi-grid buffers
- Grid introspection and blind data access capabilities
- `warp.fem` can now work directly on NanoVDB grids using `warp.fem.Nanogrid`
- Fixed `wp.volume_sample_v()` and `wp.volume_store_*()` adjoints
- Prevent `wp.volume_store()` from overwriting grid background values
- Improve validation of user-provided fields and values in `warp.fem`
- Support headless rendering of `wp.render.OpenGLRenderer` via `pyglet.options["headless"] = True` (see the second sketch after this list)
- `wp.render.RegisteredGLBuffer` can fall back to CPU-bound copying if CUDA/OpenGL interop is not available
- Clarify terms for external contributions; see CONTRIBUTING.md for details
- Improve performance of `wp.sparse.bsr_mm()` by ~5x on benchmark problems
- Fix for XPBD incorrectly indexing into joint actuation `joint_act` arrays
- Fix for mass matrix gradients computation in `wp.sim.FeatherstoneIntegrator()`
- Fix for handling of `--msvc_path` in build scripts
- Fix for `wp.copy()` to record the dest and src offset parameters on `wp.Tape()` (see the third sketch after this list)
- Fix for `wp.randn()` to ensure return values are finite
- Fix for slicing of arrays with gradients in kernels
- Fix for function overload caching, ensure module is rebuilt if any function overloads are modified
- Fix for handling of `bool` types in generic kernels
- Publish CUDA 12.5 binaries for Hopper support; see https://github.com/nvidia/warp?tab=readme-ov-file#installing for details
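
A minimal sketch of the new NaN helpers listed above; the kernel and data here are illustrative, not part of the release:

```python
import warp as wp

@wp.kernel
def replace_nans(values: wp.array(dtype=float), fallback: float):
    # wp.isnan() is one of the new checks; wp.nan / wp.NAN can also be
    # used directly inside kernels
    tid = wp.tid()
    if wp.isnan(values[tid]):
        values[tid] = fallback

values = wp.array([0.0, float("nan"), 2.0], dtype=float)
wp.launch(replace_nans, dim=values.shape[0], inputs=[values, -1.0])
print(values.numpy())  # [ 0. -1.  2.]
```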
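
A sketch of headless rendering per the `pyglet.options` entry above; the zero-argument constructor is an assumption (the renderer takes optional window parameters):

```python
import pyglet

# must be set before the renderer creates its GL context
pyglet.options["headless"] = True

import warp as wp
import warp.render

# draws to an offscreen buffer, so no display is required
renderer = wp.render.OpenGLRenderer()
```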
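
And a sketch of the `wp.copy()` fix in practice, assuming arrays created with `requires_grad=True` and the tape's `grads=` seeding; with the offsets recorded, the adjoint routes gradients back to the matching sub-ranges:

```python
import warp as wp

src = wp.ones(8, dtype=float, requires_grad=True)
dst = wp.zeros(8, dtype=float, requires_grad=True)

tape = wp.Tape()
with tape:
    # copy src[0:4] into dst[2:6]; the offsets are now recorded on the tape
    wp.copy(dst, src, dest_offset=2, src_offset=0, count=4)

tape.backward(grads={dst: wp.ones(8, dtype=float)})
print(src.grad.numpy())  # ones in [0:4], zeros elsewhere
```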

## [1.1.1] - 2024-05-24

- Implicitly initialize Warp when first required
- `wp.init()` is no longer required to be called explicitly and will be performed on first call to the API
- Speed up `omni.warp.core`'s startup time

## [1.1.0] - 2024-05-09
4 changes: 2 additions & 2 deletions exts/omni.warp/config/extension.toml
@@ -1,6 +1,6 @@
[package]
# Semantic Versioning is used: https://semver.org/
-version = "1.1.1"
+version = "1.2.0"
authors = ["NVIDIA"]
title = "Warp"
description="Warp OmniGraph Nodes and Sample Scenes"
@@ -35,7 +35,7 @@ exclude = ["Ogn*Database.py", "*/ogn*"]
"omni.timeline" = {}
"omni.ui" = {optional = true}
"omni.usd" = {}
"omni.warp.core" = {version = "1.1.1", exact = true}
"omni.warp.core" = {version = "1.2.0", exact = true}

[[python.module]]
name = "omni.warp._extension"
37 changes: 36 additions & 1 deletion exts/omni.warp/docs/CHANGELOG.md
@@ -1,8 +1,43 @@
# CHANGELOG

## [1.2.0] - 2024-06-06

- Add a not-a-number floating-point constant that can be used as `wp.NAN` or `wp.nan`.
- Add `wp.isnan()`, `wp.isinf()`, and `wp.isfinite()` for scalars, vectors, matrices, etc.
- Improve kernel cache reuse by hashing just the local module constants. Previously, a
module's hash was affected by all `wp.constant()` variables declared in a Warp program.
- Revised the module compilation process to allow multiple processes to use the same kernel cache directory.
  Cached kernels are now stored in a hash-specific subdirectory.
- Add runtime checks for `wp.MarchingCubes` on field dimensions and size
- Fix memory leak in `wp.Mesh` BVH ([GH-225](https://github.com/NVIDIA/warp/issues/225))
- Use C++17 when building the Warp library and user kernels
- Increase PTX target architecture up to `sm_75` (from `sm_70`), enabling Turing ISA features
- Extended NanoVDB support (see `warp.Volume`):
- Add support for data-agnostic index grids, allocation at voxel granularity
- New `wp.volume_lookup_index()`, `wp.volume_sample_index()` and generic `wp.volume_sample()`/`wp.volume_lookup()`/`wp.volume_store()` kernel-level functions
- Zero-copy aliasing of in-memory grids, support for multi-grid buffers
- Grid introspection and blind data access capabilities
- `warp.fem` can now work directly on NanoVDB grids using `warp.fem.Nanogrid`
- Fixed `wp.volume_sample_v()` and `wp.volume_store_*()` adjoints
- Prevent `wp.volume_store()` from overwriting grid background values
- Improve validation of user-provided fields and values in `warp.fem`
- Support headless rendering of `wp.render.OpenGLRenderer` via `pyglet.options["headless"] = True`
- `wp.render.RegisteredGLBuffer` can fall back to CPU-bound copying if CUDA/OpenGL interop is not available
- Clarify terms for external contributions; see CONTRIBUTING.md for details
- Improve performance of `wp.sparse.bsr_mm()` by ~5x on benchmark problems
- Fix for XPBD incorrectly indexing into joint actuation `joint_act` arrays
- Fix for mass matrix gradients computation in `wp.sim.FeatherstoneIntegrator()`
- Fix for handling of `--msvc_path` in build scripts
- Fix for `wp.copy()` to record the dest and src offset parameters on `wp.Tape()`
- Fix for `wp.randn()` to ensure return values are finite
- Fix for slicing of arrays with gradients in kernels
- Fix for function overload caching, ensure module is rebuilt if any function overloads are modified
- Fix for handling of `bool` types in generic kernels
- Publish CUDA 12.5 binaries for Hopper support; see https://github.com/nvidia/warp?tab=readme-ov-file#installing for details

## [1.1.1] - 2024-05-24

- Implicitly initialize Warp when first required
- `wp.init()` is no longer required to be called explicitly and will be performed on first call to the API
- Speed up `omni.warp.core`'s startup time

## [1.1.0] - 2024-05-09
2 changes: 1 addition & 1 deletion warp/config.py
@@ -7,7 +7,7 @@

from typing import Optional

-version: str = "1.1.1"
+version: str = "1.2.0"

verify_fp: bool = False # verify inputs and outputs are finite after each launch
verify_cuda: bool = False # if true will check CUDA errors after each kernel launch / memory operation
