Expands observation term scaling to support list of floats #1269

Open · wants to merge 7 commits into base: main
source/extensions/omni.isaac.lab/config/extension.toml (2 changes: 1 addition & 1 deletion)
@@ -1,7 +1,7 @@
[package]

# Note: Semantic Versioning is used: https://semver.org/
version = "0.25.2"
version = "0.25.3"

# Description
title = "Isaac Lab framework for Robot Learning"
source/extensions/omni.isaac.lab/docs/CHANGELOG.rst (11 changes: 11 additions & 0 deletions)
@@ -1,6 +1,17 @@
Changelog
---------


0.25.3 (2024-10-18)
~~~~~~~~~~~~~~~~~~~~

Added
^^^^^

* Added support for defining a tuple of floats to scale observation terms by expanding the
  :attr:`omni.isaac.lab.managers.manager_term_cfg.ObservationTermCfg.scale` attribute.


0.25.2 (2024-10-16)
~~~~~~~~~~~~~~~~~~~~

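For illustration, a minimal sketch of how the new tuple scale might be set on an observation term. The import path follows the repository layout, but the term name and observation function below are hypothetical and not taken from this PR:

import torch
from omni.isaac.lab.managers import ObservationTermCfg

def dummy_lin_vel(env):
    # hypothetical observation function returning a (num_envs, 3) tensor
    return torch.zeros(env.num_envs, 3)

base_lin_vel = ObservationTermCfg(
    func=dummy_lin_vel,
    # one scale factor per observation dimension (here: x, y, z)
    scale=(2.0, 2.0, 0.5),
)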
@@ -152,7 +152,7 @@ class ObservationTermCfg(ManagerTermBaseCfg):
"""The clipping range for the observation after adding noise. Defaults to None,
in which case no clipping is applied."""

-scale: float | None = None
+scale: tuple[float, ...] | float | None = None
"""The scale to apply to the observation after clipping. Defaults to None,
in which case no scaling is applied (same as setting scale to :obj:`1`)."""

@@ -253,7 +253,7 @@ def compute_group(self, group_name: str) -> torch.Tensor | dict[str, torch.Tensor]:
obs = term_cfg.noise.func(obs, term_cfg.noise)
if term_cfg.clip:
obs = obs.clip_(min=term_cfg.clip[0], max=term_cfg.clip[1])
-if term_cfg.scale:
+if term_cfg.scale is not None:
obs = obs.mul_(term_cfg.scale)
# add value to list
group_obs[name] = obs
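Since a tuple scale is converted to a tensor during term preparation (see the next hunk), the in-place multiply above scales each observation dimension individually by broadcasting over the batch dimension. A small standalone PyTorch sketch of that behavior, with made-up values:

import torch

# obs mimics one observation term for a batch of 2 environments with 3 dimensions
obs = torch.tensor([[1.0, 2.0, 3.0],
                    [4.0, 5.0, 6.0]])
# per-dimension scale, as built from a configured tuple
scale = torch.tensor((0.5, 1.0, 2.0))

obs.mul_(scale)  # broadcasts the scale over the batch dimension
# obs is now tensor([[0.5, 2.0,  6.0],
#                    [2.0, 5.0, 12.0]])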
@@ -337,6 +337,23 @@ def _prepare_terms(self):
obs_dims = tuple(term_cfg.func(self._env, **term_cfg.params).shape)
self._group_obs_term_dim[group_name].append(obs_dims[1:])

# if scale is set, check if single float or tuple
if term_cfg.scale is not None:
if not isinstance(term_cfg.scale, (float, int, tuple)):
raise TypeError(
f"Scale for observation term '{term_name}' in group '{group_name}'"
f" is not of type float, int or tuple. Received: '{type(term_cfg.scale)}'."
)
if isinstance(term_cfg.scale, tuple) and len(term_cfg.scale) != obs_dims[1]:
raise ValueError(
f"Scale for observation term '{term_name}' in group '{group_name}'"
f" does not match the dimensions of the observation. Expected: {obs_dims[1] - 1}"
f" but received: {len(term_cfg.scale)}."
)

if isinstance(term_cfg.scale, tuple):
term_cfg.scale = torch.tensor(term_cfg.scale, device=self._env.device)

# prepare modifiers for each observation
if term_cfg.modifiers is not None:
# initialize list of modifiers for term
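
For reference, a condensed standalone sketch of the validation and conversion added above; the helper name and signature are illustrative only, not part of the PR:

import torch

def prepare_scale(scale, obs_dim: int, device: str = "cpu"):
    """Validate a scale value and convert tuples to tensors, mirroring the checks above."""
    if scale is None:
        return None
    if not isinstance(scale, (float, int, tuple)):
        raise TypeError(f"Scale is not of type float, int or tuple. Received: '{type(scale)}'.")
    if isinstance(scale, tuple):
        if len(scale) != obs_dim:
            raise ValueError(f"Scale length {len(scale)} does not match observation dimension {obs_dim}.")
        return torch.tensor(scale, device=device)
    return scale

print(prepare_scale((0.5, 1.0, 2.0), obs_dim=3))  # tensor([0.5000, 1.0000, 2.0000])
# prepare_scale((0.5, 1.0), obs_dim=3) would raise ValueError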