[Bug Report] Incorrect results of math_utils due to torch.jit.script #770
Comments
Very interesting and surprising bug. I realize the above implementation was rather incorrect, since it would map -PI to PI but -3PI to -PI. Following the general convention of this function (based on MATLAB), all odd positive multiples of PI should map to PI, and all odd negative multiples should map to -PI.

```python
import torch
import matplotlib.pyplot as plt

# Previous implementation (incorrect at odd multiples of pi):
# def wrap_to_pi(angles: torch.Tensor) -> torch.Tensor:
#     angles = angles.clone()
#     angles %= 2 * torch.pi
#     angles -= 2 * torch.pi * (angles > torch.pi)
#     return angles

def wrap_to_pi(angles: torch.Tensor) -> torch.Tensor:
    # wrap to [0, 2*pi)
    wrapped_angle = (angles + torch.pi) % (2 * torch.pi)
    # map to [-pi, pi]: check for zero in the wrapped angle so that it maps to
    # pi (not -pi) when the input angle is a positive odd multiple of pi
    return torch.where((wrapped_angle == 0) & (angles > 0), torch.pi, wrapped_angle - torch.pi)

wrap_to_pi_jit = torch.jit.script(wrap_to_pi)
wrap_to_pi_compile = torch.compile(wrap_to_pi)

with torch.device('cuda'):
    N = 10000
    a = torch.zeros(N).uniform_(-torch.pi, torch.pi)
    b = torch.zeros(N).uniform_(-torch.pi, torch.pi)

    diff = wrap_to_pi(a - b)
    # call the scripted function twice so the result reflects the specialized graph
    diff_jit = wrap_to_pi_jit(a - b)
    diff_jit = wrap_to_pi_jit(a - b)
    diff_compile = wrap_to_pi_compile(a - b)

plt.hist(diff.cpu(), alpha=0.5, label='original')
plt.hist(diff_jit.cpu(), alpha=0.5, label='jit')
plt.hist(diff_compile.cpu(), alpha=0.5, label='compiled')
plt.legend()
plt.show()
```
```python
##
# Unit tests
##

PI = torch.pi

test_cases = [
    # No wrapping needed
    (torch.Tensor([0.0]), torch.Tensor([0.0])),
    # Positive angle
    (torch.Tensor([PI]), torch.Tensor([PI])),
    # Negative angle
    (torch.Tensor([-PI]), torch.Tensor([-PI])),
    # Multiple angles
    (torch.Tensor([3 * PI, -3 * PI, 4 * PI, -4 * PI]), torch.Tensor([PI, -PI, 0.0, 0.0])),
    # Multiple angles from MATLAB docs
    # fmt: off
    (
        torch.Tensor([-2 * PI, -PI - 0.1, -PI, -2.8, 3.1, PI, PI + 0.001, PI + 1, 2 * PI, 2 * PI + 0.1]),
        torch.Tensor([0.0, PI - 0.1, -PI, -2.8, 3.1, PI, -PI + 0.001, -PI + 1, 0.0, 0.1])
    ),
    # fmt: on
]

for a, expected in test_cases:
    result = wrap_to_pi(a)
    assert torch.allclose(result, expected, atol=1e-6), f"[NORMAL] Expected {expected}, got {result}"
    result = wrap_to_pi_jit(a)
    assert torch.allclose(result, expected, atol=1e-6), f"[JIT] Expected {expected}, got {result}"
    result = wrap_to_pi_compile(a)
    assert torch.allclose(result, expected, atol=1e-6), f"[COMPILE] Expected {expected}, got {result}"
```

Here's what I get:

[histogram screenshot from the original comment]
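As a quick sanity check of the convention described above, here is a minimal sketch (plain CPU tensors, no JIT or compile, using the fixed `wrap_to_pi` from the snippet) showing how odd and even multiples of pi are handled:

```python
import torch

def wrap_to_pi(angles: torch.Tensor) -> torch.Tensor:
    # wrap to [0, 2*pi), then shift down to [-pi, pi]
    wrapped = (angles + torch.pi) % (2 * torch.pi)
    # send positive odd multiples of pi to +pi instead of -pi
    return torch.where((wrapped == 0) & (angles > 0), torch.pi, wrapped - torch.pi)

# odd multiples of pi keep their sign; even multiples go to (approximately) zero
angles = torch.tensor([torch.pi, 3 * torch.pi, -torch.pi, -3 * torch.pi, 4 * torch.pi])
print(wrap_to_pi(angles))
```

Note that the `(wrapped == 0) & (angles > 0)` branch only fires at exact floating-point multiples of `2 * torch.pi`; nearby values fall through to the ordinary `wrapped - torch.pi` path, so the function stays continuous up to float rounding.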
There seems to be an issue with the JIT function and the mod operation. If we change the implementation to this, the plots overlap (though the bug in my previous comment still holds).

```python
def wrap_to_pi(angles: torch.Tensor) -> torch.Tensor:
    """Wraps input angles (in radians) to the range [-pi, pi].

    Args:
        angles: Input angles of any shape.

    Returns:
        Angles in the range [-pi, pi].
    """
    angles = angles.clone()
    angles = angles % (2 * torch.pi)
    angles -= 2 * torch.pi * (angles > torch.pi)
    return angles
```

It would be interesting to report this to the PyTorch team to get a better understanding.
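To isolate the suspected interaction, a minimal sketch comparing the in-place and out-of-place variants side by side (the function names here are hypothetical; the mismatch was reported with CUDA tensors, so on CPU the scripted path may well agree with eager):

```python
import torch

def wrap_inplace(angles: torch.Tensor) -> torch.Tensor:
    # original style: clone followed by in-place %= and -=
    angles = angles.clone()
    angles %= 2 * torch.pi
    angles -= 2 * torch.pi * (angles > torch.pi)
    return angles

def wrap_outofplace(angles: torch.Tensor) -> torch.Tensor:
    # same arithmetic, but without in-place updates
    angles = angles % (2 * torch.pi)
    return angles - 2 * torch.pi * (angles > torch.pi)

wrap_inplace_jit = torch.jit.script(wrap_inplace)

x = torch.linspace(-2.5 * torch.pi, 2.5 * torch.pi, 1001)
eager = wrap_inplace(x)
# run the scripted version twice so the optimized graph is the one being measured
scripted = wrap_inplace_jit(x)
scripted = wrap_inplace_jit(x)

print(torch.allclose(eager, wrap_outofplace(x)))  # eager: both variants agree
print(torch.allclose(eager, scripted))            # the reported mismatch (seen on CUDA)
```

In eager mode the two variants perform the same arithmetic and should match exactly; any divergence in the scripted path would point at how `torch.jit.script` rewrites the clone plus in-place sequence.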
# Description

The previous implementation of `wrap_to_pi` was rather incorrect since it would map -PI to PI but -3PI to -PI. Following the general convention of this function (based on MATLAB), all odd positive multiples of PI should map to PI, and all odd negative multiples should map to -PI. This MR fixes the function and also adds a unit test for it.

Fixes isaac-sim#770

## Type of change

- Bug fix (non-breaking change which fixes an issue)
- Breaking change (fix or feature that would cause existing functionality to not work as expected)

## Checklist

- [x] I have run the [`pre-commit` checks](https://pre-commit.com/) with `./isaaclab.sh --format`
- [x] I have made corresponding changes to the documentation
- [x] My changes generate no new warnings
- [x] I have added tests that prove my fix is effective or that my feature works
- [x] I have updated the changelog and the corresponding version in the extension's `config/extension.toml` file
- [x] I have added my name to the `CONTRIBUTORS.md` or my name already exists there
Describe the bug

We surprisingly found `math_utils.wrap_to_pi` to sometimes produce inconsistent values when decorated with `@torch.jit.script`. See below. Thanks to @EGalahad.

Steps to reproduce

[reproduction snippet and resulting output screenshot in the original issue]
System Info
torch version: 2.3.1
Additional context
The issue seems to be related to how `jit` handles clone and in-place operations. Dropping the in-place operations yields the correct results. Glad to also test the others.