Describe the bug
ndvi gives incorrect results when the input arrays have an unsigned integer dtype, due to overflow in the subtraction.
Expected behavior
Consider the following:

```python
import numpy as np
import xarray as xr
import xrspatial

a = xr.DataArray(np.array([[1, 1, 1], [1, 1, 1]], dtype='uint16'))
b = xr.DataArray(np.array([[0, 1, 2], [0, 1, 2]], dtype='uint16'))
xrspatial.ndvi(a, b)
```
The values in the third column are incorrect due to overflow of the unsigned integers during the subtraction; see numpy/numpy#21237.
Using the above data, the overflow can be illustrated by performing the subtraction directly.
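A minimal sketch of the wraparound using plain numpy (same data as above, without going through xrspatial):

```python
import numpy as np

nir = np.array([[1, 1, 1], [1, 1, 1]], dtype='uint16')
red = np.array([[0, 1, 2], [0, 1, 2]], dtype='uint16')

# uint16 subtraction wraps around instead of going negative:
diff = nir - red
print(diff)                 # third column is 65535, not -1
print(diff / (nir + red))   # third column becomes 21845.0 instead of -0.333...
```

The third column should be (1 - 2) / (1 + 2) = -1/3, but the wrapped numerator 65535 divided by 3 yields 21845.0 instead.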
A solution could be to call np.subtract directly in e.g. _normalized_ratio_cpu, though that may not be robust in all cases. I think there are some alternatives in the numpy issue linked above.
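One possible workaround is to promote the inputs to a float dtype before subtracting, so unsigned inputs cannot wrap around. This is only a sketch, not the library's actual fix, and safe_normalized_ratio is a hypothetical name, not a function in xrspatial:

```python
import numpy as np

def safe_normalized_ratio(nir, red):
    """Hypothetical workaround: promote to float64 before subtracting
    so unsigned-integer inputs cannot wrap around."""
    nir = np.asarray(nir, dtype='float64')
    red = np.asarray(red, dtype='float64')
    return (nir - red) / (nir + red)

nir = np.array([[1, 1, 1], [1, 1, 1]], dtype='uint16')
red = np.array([[0, 1, 2], [0, 1, 2]], dtype='uint16')
print(safe_normalized_ratio(nir, red))  # third column is now -0.333...
```

Upcasting costs memory for large rasters, so casting only the numerator (or using np.subtract with an explicit signed/float dtype) may be preferable inside the library itself.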