General apply_matrix() function #87
Conversation
Great work, another step forward towards consistency and performance. 👍
```python
) > 0.5  # Due to different interpolation approaches, everything above 0.5 is assumed to be 1 (True)

# Apply the transformed nan_mask
transformed_dem[tr_nan_mask] = np.nan
```
Values from the "interpolated" DEM will be used in the resampling for resampling_order > 0. Those can negatively affect values at the boundaries.
There are generally two types of behaviour: either this is ignored, or the NaNs are propagated to the pixels used in the interpolation. To do the latter:
```python
from scipy.ndimage import binary_dilation

tr_nan_mask = binary_dilation(tr_nan_mask, iterations=resampling_order)
```
But I don't know if this is the behaviour we want, really (it could be only an option).
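To illustrate what the dilation does on a toy mask (array size and gap position are arbitrary, not from the PR):

```python
import numpy as np
from scipy.ndimage import binary_dilation

# A single data gap in the middle of a small boolean mask
tr_nan_mask = np.zeros((9, 9), dtype=bool)
tr_nan_mask[4, 4] = True

# Each iteration grows the mask by one pixel in every direction, so
# order-3 (cubic) resampling would mask the 3 pixels around each gap.
dilated = binary_dilation(tr_nan_mask, iterations=3)
print(dilated.sum())  # 25 pixels: a diamond of L1-radius 3 around the gap
```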
Sorry, but could you elaborate here? The warp function has a fill value of 1 (cval=1), meaning all values outside the frame (near the border) should be 1 (True).
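For context, a small sketch of that masking idea (the shift amount and array size are made up for illustration):

```python
import numpy as np
from skimage.transform import warp, AffineTransform

# A boolean NaN mask with one gap, warped as float so it can be interpolated
nan_mask = np.zeros((5, 5), dtype=float)
nan_mask[2, 2] = 1.0

shift = AffineTransform(translation=(0.5, 0.0))
# cval=1 fills everything sampled from outside the frame, so border pixels
# come out as True; interpolation yields fractional values in between,
# and the > 0.5 threshold turns them back into a boolean mask.
tr_nan_mask = warp(nan_mask, shift.inverse, order=1, cval=1) > 0.5
```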
What I'm trying to say is that when you create a filled_dem with -9999 at the NaN values (or anything else) in order to run skimage.transform.warp, those filled values will be used in the interpolation when resampling_order > 0. Because of this, it creates a bias for non-NaN values that are close to NaN values (within 3 pixels for a resampling order of 3, for instance).
As a consequence, it could be a good idea to extend the NaN mask using a binary dilation with the same order as the resampling method used (like it is done when deriving slope, for instance).
Or maybe I am missing something? (I haven't used skimage much.)
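To make the concern concrete, a toy sketch of the bias (the DEM values and shift are made up; order=3 is cubic):

```python
import numpy as np
from skimage.transform import warp, AffineTransform

# A flat 100 m DEM with one gap, filled with -9999 before warping
dem = np.full((7, 7), 100.0)
dem[3, 3] = np.nan
filled_dem = np.where(np.isfinite(dem), dem, -9999.0)

shift = AffineTransform(translation=(0.5, 0.0))
shifted = warp(filled_dem, shift.inverse, order=3, cval=100.0)

# Valid pixels next to the (masked) gap are dragged far below 100 m,
# because the cubic kernel mixes in the -9999 fill value.
print(shifted[3, 2], shifted[3, 4])
```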
But if the NaN mask is dilated, won't the valid data get smaller and smaller with every iteration? So:
```python
while True:
    dem = apply_matrix(dem, transform, empty_matrix)
```
would eventually mean that dem is empty, right?
Yes, that's the behaviour! GDAL behaves similarly. If you resample a raster with resampling_order > 0 interpolation 10 times, the NaNs will propagate into bigger and bigger data gaps, and eventually you will be left with nothing.
It should definitely be only an option, but someone might want to avoid using data at the edges after resampling.
I'm still not sure I understand 100% how skimage deals with the nodata here: are the filled values of -9999 actually used in the resampling and then masked later? That would create strong edge effects by including the -9999 in the resampling.
Yes, definitely something worth checking and testing. Have you tried running the function on a DEM with gaps to see how it behaves?
I've added a dilate_mask argument in 7095024, @rhugonnet.
I also changed the temporary nodata value to be the nanmedian (instead of -9999), which should reduce the risk of extreme outliers.
The tests include NaNs, @adehecq, so that does not seem to be a problem.
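Putting the pieces from this thread together, the combined logic presumably looks roughly like the sketch below (not the actual xdem implementation; the function name warp_with_nans and the inverse_map parameter are made up, with the fill/mask/dilate steps taken from the comments above):

```python
import numpy as np
from scipy.ndimage import binary_dilation
from skimage.transform import warp

def warp_with_nans(dem, inverse_map, resampling_order=3, dilate_mask=True):
    """Sketch: fill gaps, warp, then re-mask (and optionally dilate) the NaNs."""
    nan_mask = ~np.isfinite(dem)
    # Use the median as temporary nodata so interpolation is not biased by extremes
    filled_dem = np.where(nan_mask, np.nanmedian(dem), dem)

    transformed_dem = warp(filled_dem, inverse_map, order=resampling_order)
    # cval=1 marks out-of-frame pixels as gaps; > 0.5 re-binarizes the mask
    tr_nan_mask = warp(nan_mask.astype(float), inverse_map, order=1, cval=1) > 0.5

    if dilate_mask and resampling_order > 0:
        # Propagate gaps to every pixel whose interpolation kernel touched one
        tr_nan_mask = binary_dilation(tr_nan_mask, iterations=resampling_order)

    transformed_dem[tr_nan_mask] = np.nan
    return transformed_dem
```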
```python
return nans


def apply_matrix(dem: np.ndarray, transform: rio.transform.Affine, matrix: np.ndarray, invert: bool = False,
```
This is great work, yet I keep thinking there must be a way to apply all of this simultaneously (and that would probably be more efficient computationally). There's nothing like this in pytransform3d?
Thanks! There may well be a simpler way mathematically:
"3D rigid transform" -> "2D projective transform" + "pixel value correction"
I'm just not that good at math, unfortunately!
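For the record, the difficulty can be written out in standard homogeneous-coordinate notation (nothing xdem-specific, just the general form of a rigid transform):

```latex
% A 4x4 rigid transform M = [R | t; 0 | 1] applied to a DEM point (x, y, z):
\begin{pmatrix} x' \\ y' \\ z' \end{pmatrix}
= R \begin{pmatrix} x \\ y \\ z \end{pmatrix} + t,
\qquad
x' = r_{11} x + r_{12} y + r_{13} z + t_x
```

Because x' and y' depend on z through r13 and r23, the horizontal motion is not a pure 2D transform of the grid coordinates; it only reduces to one (plus a vertical pixel-value correction) when those terms vanish or z is approximated locally.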
…eck if the Coreg class can be represented as a rigid transform. Made _apply_func, _apply_pts_func and _to_matrix_func optional under the right conditions.
…e class can do more.
I made multiple additions. First, the Coreg class can be created from a matrix or translation:

```python
known_x_offset = 4.9  # m
matrix = np.diag(np.ones(4))
matrix[0, 3] = known_x_offset

coreg = xdem.coreg.Coreg.from_matrix(matrix)
coreg.apply(dem, transform=transform)
```

or:

```python
coreg = xdem.coreg.Coreg.from_translation(x_off=4.9)  # This creates a 4x4 matrix like the example above.
coreg.apply(dem, transform=transform)
```

This may come in handy in future Coreg subclassing simplification. Second, the new … The same default behaviour is made for … Finally, if a matrix exists in … What this all means is that Coreg subclassing is now really easy. For example, the …
…ramping which is used in GlacioHack#87
Really neat. When an implementation or code structure means less code, less duplication, and the same results, I guess it means that it's the right one 😉
All good! Seeing you moving forward so much, I can't wait to get back to xdem! Everything else has been so time-consuming recently...
We can just open "suggestion" issues for my unresolved comments: 1/ the NaN dilation issue + 2/ applying the rigid transform in one go.
On 2/, is there really no such method provided in point cloud libraries for rigid transforms? This seems surprising to me... or maybe I missed part of the problem! I'm not too familiar with opencv and such.
I was looking around a bit and stumbled upon this: https://github.com/nghiaho12/rigid_transform_3D
Would this be useful?
Thanks for the suggestions, @rhugonnet! I can definitely add 1/ as an issue. Regarding 2/ (and your suggested package), the problem is that 3D gridding takes a lot of time, so this …
```python
# Check if the matrix only contains a Z correction. In that case, only shift the DEM values by the bias.
empty_matrix = np.diag(np.ones(4, float))
matrix_diff = matrix - empty_matrix
if abs(matrix_diff[matrix_diff != matrix_diff[2, 3]].mean()) < 1e-4:
```
Would that not cause a problem if the matrix has other elements that are equal to element [2, 3]?
Also, should it not be strictly equal to 0? The 1e-4 value is somewhat arbitrary here...
In the worst case, we're just doing extra work for nothing.
I tried to find a syntax for "slice all values except for this one", but I couldn't figure out how!
Not the simplest way, but you could (make a copy,) set the unwanted value to 0 and calculate the mean.
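A sketch of that suggestion, using the variable names from the diff above (with max instead of mean, so that opposite-sign elements cannot cancel out, and a strict zero check per the earlier comment):

```python
import numpy as np

# Setup mirroring the diff: a matrix that only shifts Z by 1.5 m
matrix = np.diag(np.ones(4))
matrix[2, 3] = 1.5
matrix_diff = matrix - np.diag(np.ones(4))

masked_diff = matrix_diff.copy()
masked_diff[2, 3] = 0.0  # zero out the Z-offset element before checking the rest
is_z_only = np.abs(masked_diff).max() == 0.0  # strict check, no arbitrary 1e-4
print(is_z_only)  # True
```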
xdem/coreg.py (outdated):
```python
                 centroid: Optional[tuple[float, float, float]] = None,
                 resampling: Union[int, str] = "cubic") -> np.ndarray:
    """
    Apply a transformation matrix to a DEM.
```
Can you maybe explain here the different steps to get the result? It is definitely non-trivial, and probably not very standard.
I tried to explain in 0c9d568. Please let me know if you have any suggestions for improvement!
Looks very promising! Some parts are still a bit of black magic, but hopefully we can improve it later!
…trix as well. Marked a spstats test as ignored since it kept randomly failing
…expected errors appropriately.
…dian to reduce the risk of weird outliers.
… middle of the raster instead of lower left.
Thanks for your feedback, @adehecq and @rhugonnet! The function is definitely improved thanks to your suggestions. Let me know what you think of its structure whenever you have time!
I've forgotten the details of it now that time has passed, but I checked the commits after my last comments and it seems reasonable. I think we can accept it as is for now, and if issues arise, we'll handle them later.
```python
diff = np.asarray(self.ref.data.squeeze() - unrotated_dem)

if False:
```
Isn't there a way to have an option to include/exclude matplotlib commands from pytest?
A quick search led to this; I don't know if it is the best solution: https://pypi.org/project/pytest-plt/
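Short of adopting a plugin, a hand-rolled variant with a custom pytest flag would be minimal (the --plots option and show_plots fixture are made-up names, not existing xdem code):

```python
# conftest.py
import pytest

def pytest_addoption(parser):
    parser.addoption("--plots", action="store_true", help="show matplotlib figures during tests")

@pytest.fixture
def show_plots(request):
    return request.config.getoption("--plots")
```

```python
# in a test module
import numpy as np
import matplotlib.pyplot as plt

def test_apply_matrix(show_plots):
    diff = np.random.normal(size=(10, 10))  # stand-in for the real residuals
    if show_plots:  # replaces the hard-coded `if False:` guard
        plt.imshow(diff)
        plt.show()
```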
Brought up in #82 and other issues, we need a general-purpose function for applying transformation matrices to DEMs.

Currently, this is not done consistently. The NuthKaab uses scipy.interpolate.RectBivariateSpline() (works well, but only horizontally) and ICP converts the DEM into a point cloud, transforms it, and re-grids it with scipy.interpolate.griddata() (works, but often introduces weird artefacts and is terribly slow).

This function aims to standardize the way DEMs are transformed. It works in a few steps: …
It sounds complex, but it works really well! It is hundreds of times faster than the scipy.interpolate.griddata approach, and it works with any rigid transform.

It opens up many new improvements:

- CoregPipeline.apply() and .fit() performance by merging matrices (#79) can be implemented (matrices can be merged, and then a transformation is only made once).
- BiasCorr + ICP + NuthKaab currently does not actually improve the results compared to only NuthKaab because of the ICP.apply() call. This new function should fix that.
- The Coreg class can be simplified slightly, since every transformation that can be explained by a rigid transform could automatically use the apply_matrix() function.
- Coreg objects could be created from a Coreg.from_matrix() constructor method or similar. This may be sought if a transformation matrix is already known and the user wants to apply it to a DEM. Another possible constructor method could be one built from a translation; both are sketched below.
Before this is merged:

- The Coreg.apply() methods should use this function.