General apply_matrix() function #87

Merged · 23 commits · Apr 30, 2021

Conversation

erikmannerfelt (Contributor):

As brought up in #82 and other issues, we need a general-purpose function for applying transformation matrices to DEMs.

Currently, this is not done consistently. NuthKaab uses scipy.interpolate.RectBivariateSpline() (works well, but only horizontally), while ICP converts the DEM into a point cloud, transforms it, and re-grids it with scipy.interpolate.griddata() (works, but often introduces weird artefacts and is terribly slow).

This function aims to standardize the way DEMs are transformed. It works in a few steps:

  1. Convert the DEM into a point cloud (PC); this is used only for analysis, not for gridding
  2. Transform the PC using the matrix
  3. Estimate and apply a deramp from the elevation difference of each point in the PC vs. the transformed PC
  4. Convert the transformed horizontal coordinates to index coordinates (the indices specify where the pixels should be displaced)
  5. Transform the DEM using the new indices
  6. Transform the nodata mask using the same warping

It sounds complex, but it works really well! It is hundreds of times faster than the scipy.interpolate.griddata approach and it works with any rigid transform.
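
For concreteness, here is a minimal sketch of those steps (assuming transform is a rasterio Affine, the DEM is a float array with NaN nodata, and a first-order approximation of the inverse map; the names and details are illustrative, not the PR's exact implementation):

import numpy as np
import skimage.transform


def apply_matrix_sketch(dem: np.ndarray, transform, matrix: np.ndarray) -> np.ndarray:
    rows, cols = dem.shape
    # 1. Convert the DEM into a point cloud (x, y, z, 1) of pixel centres.
    j, i = np.meshgrid(np.arange(cols), np.arange(rows))
    x, y = transform * (j + 0.5, i + 0.5)
    points = np.vstack([x.ravel(), y.ravel(), dem.ravel(), np.ones(dem.size)])
    # 2. Transform the point cloud using the matrix.
    transformed = matrix @ points
    # 3. The per-pixel elevation difference is the vertical correction
    #    (the real function fits and applies a deramp to this difference).
    dz = (transformed[2] - points[2]).reshape(dem.shape)
    # 4. Convert the horizontal displacements to index (row/col) offsets.
    dcol = ((transformed[0] - points[0]) / transform.a).reshape(dem.shape)
    drow = ((transformed[1] - points[1]) / transform.e).reshape(dem.shape)
    # 5. Warp the DEM: each output pixel samples the input at the location
    #    that maps onto it (subtracting the forward displacement is a
    #    first-order inverse map, fine for small coregistration shifts).
    coords = np.array([i - drow, j - dcol], dtype="float64")
    warped = skimage.transform.warp(dem, coords, order=1, cval=np.nan,
                                    preserve_range=True)
    # 6. In the real function, the nodata mask is warped the same way and
    #    re-applied; here NaNs simply propagate through the interpolation.
    return warped + dz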

It opens the door to many new improvements:

  1. The pipeline optimization suggested in #79 (Improve CoregPipeline.apply() and .fit() performance by merging matrices) can be implemented: matrices can be merged, so the transformation is only applied once
  2. As found by @adehecq, my "ultimate" BiasCorr + ICP + NuthKaab currently does not actually improve the results compared to only NuthKaab because of the ICP.apply() call. This new function should fix that.
  3. The Coreg class can be simplified slightly, since every transformation that can be explained by a rigid transform could automatically use the apply_matrix() function.
  4. Generic Coreg objects could be created from a Coreg.from_matrix() constructor method or similar. This may be useful if a transformation matrix is already known and the user wants to apply it to a DEM, e.g.:
import numpy as np
import xdem

known_x_offset = 4.9  # m
matrix = np.diag(np.ones(4))
matrix[0, 3] = known_x_offset

coreg = xdem.coreg.Coreg.from_matrix(matrix)
coreg.apply(dem, transform=transform)

Another possible constructor method could be:

coreg = xdem.coreg.Coreg.from_translation(x=4.9)  # This creates a 4x4 matrix like the example above.
coreg.apply(dem, transform=transform)

Before this is merged:

  1. The Coreg.apply() methods should use this function
  2. More tests are needed. Right now, I've tried with translations, biases and rotations, but not, for example, all at the same time (a sketch of such a combined matrix follows below).
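
For reference, a matrix combining all three could look like this (numpy only; the values are arbitrary):

import numpy as np

angle = np.deg2rad(1.0)  # a small rotation about the x axis
matrix = np.diag(np.ones(4))
matrix[1, 1] = np.cos(angle)
matrix[1, 2] = -np.sin(angle)
matrix[2, 1] = np.sin(angle)
matrix[2, 2] = np.cos(angle)
matrix[:3, 3] = [4.9, 0.0, 1.5]  # x/y translation plus a z bias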

rhugonnet (Member) previously approved these changes on Apr 14, 2021:

Great work, another step forward towards consistency and performance. 👍

) > 0.5 # Due to different interpolation approaches, everything above 0.5 is assumed to be 1 (True)

# Apply the transformed nan_mask
transformed_dem[tr_nan_mask] = np.nan
Member:

Values from the "interpolated" DEM will be used in the resampling for resampling_order > 0, and those can negatively affect values at the boundaries.
There are generally two types of behaviour: either this is ignored, or the NaNs are propagated to the pixels used in the interpolation.
To do the latter:

from scipy.ndimage.morphology import binary_dilation

tr_nan_mask = binary_dilation(tr_nan_mask, iterations=resampling_order)

But I don't know if this is the behaviour we want, really (could be only an option).

erikmannerfelt (Contributor Author):

Sorry, but could you elaborate here? The warp function has a fill value of 1 (cval=1), meaning all values outside the frame (near the border) should be 1 (True).

Member:

What I'm trying to say is that when you create a filled_dem with -9999 (or anything else) at the NaN values in order to run skimage.transform.warp, those fill values will be used in the interpolation for resampling_order > 0. Because of this, there will be a bias in non-NaN values that are close to NaN values (within 3 pixels for a resampling order of 3, for instance).
As a consequence, it could be a good idea to extend the NaN mask using a binary dilation with the same order as the resampling method used (like is done when deriving slope, for instance).
Or maybe I am missing something? (I didn't use skimage much)

erikmannerfelt (Contributor Author):

But if the nan mask is dilated, won't the valid area get smaller and smaller with every iteration? So:

while True:
    dem = apply_matrix(dem, transform, empty_matrix)

Would eventually mean that dem is empty, right?

Member:

Yes, that's the behaviour! GDAL behaves similarly. If you resample a raster with resampling_order>0 interpolation 10 times, the NaNs will propagate into bigger and bigger data gaps and eventually you will be left with nothing.
It should be only an option for sure, but someone might want to avoid using data at the edges after resampling?

I'm still not sure I understand 100% how skimage deals with the nodata here: are the filled values of -9999 actually used in the resampling and then masked later? That would create strong edge effects by including the -9999 in the resampling.
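
A tiny standalone demonstration of that propagation (scipy/numpy only; the sizes are arbitrary):

import numpy as np
from scipy.ndimage import binary_dilation

mask = np.zeros((20, 20), dtype=bool)
mask[10, 10] = True  # a single nodata pixel

for i in range(4):
    print(f"iteration {i}: {mask.sum()} masked pixels")
    mask = binary_dilation(mask, iterations=3)  # e.g. order-3 resampling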

Member:

Yes, definitely something worth checking and testing. Have you tried running the function on a DEM with gaps to see how it behaves?

erikmannerfelt (Contributor Author):

I've added a dilate_mask argument in 7095024, @rhugonnet.

I also changed the temporary nodata value to be the nanmedian (instead of -9999), which I guess should reduce the risk of extreme outliers.

The tests include nans, @adehecq, so that does not seem to be a problem.
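
For illustration, the re-masking described above might look like this (a standalone sketch with assumed names, not the code from 7095024):

import numpy as np
from scipy.ndimage import binary_dilation


def remask_after_warp(warped_dem, warped_nan_mask, dilate_mask=True, order=3):
    # Interpolation makes the warped mask fuzzy; values above 0.5 count as True.
    nan_mask = warped_nan_mask > 0.5
    if dilate_mask:
        # Extend the mask by as many pixels as the resampling order uses.
        nan_mask = binary_dilation(nan_mask, iterations=order)
    out = warped_dem.copy()
    out[nan_mask] = np.nan
    return out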

return nans


def apply_matrix(dem: np.ndarray, transform: rio.transform.Affine, matrix: np.ndarray, invert: bool = False,
Member:

This is great work, yet I keep thinking there must be a way to apply all of this simultaneously (which would probably also be more efficient computationally). Is there nothing like this in pytransform3d?

erikmannerfelt (Contributor Author):

Thanks! I am sure there is a simpler way mathematically:

"3D rigid transform" -> "2D projective transform" + "pixel value correction"

I'm just not that good at math, unfortunately!
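
For reference, that decomposition written out (standard rigid-transform notation, nothing xdem-specific): a 4x4 rigid transform acts on each DEM point as

$$
\begin{pmatrix} x' \\ y' \\ z' \\ 1 \end{pmatrix}
=
\begin{pmatrix} R & \mathbf{t} \\ \mathbf{0}^{\top} & 1 \end{pmatrix}
\begin{pmatrix} x \\ y \\ z \\ 1 \end{pmatrix},
$$

so the target horizontal position (x', y') depends on the elevation z, not only on (x, y). A single 2D projective warp therefore cannot reproduce it exactly on a non-flat surface, and the residual z' − z is what the deramp step approximates as a function of (x, y).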

erikmannerfelt marked this pull request as ready for review on April 19, 2021 at 14:09.
erikmannerfelt (Contributor Author):

I made multiple additions. First, a Coreg object can now be created from a matrix or a translation:

known_x_offset = 4.9  # m
matrix = np.diag(np.ones(4))
matrix[0, 3] = known_x_offset

coreg = xdem.coreg.Coreg.from_matrix(matrix)

coreg.apply(dem, transform=transform)

or:

coreg = xdem.coreg.Coreg.from_translation(x_off=4.9)  # This creates a 4x4 matrix like the example above.
coreg.apply(dem, transform=transform)

This may come in handy in future .save() and .load() functions (in case coregistration takes time and one wants to store the transform for later).

Coreg subclassing simplification

Second, the new apply_matrix() function is now used by default in the .apply() method; if ._apply_func() has been implemented, that is used instead.

The same default behaviour applies to the .apply_pts() method, which will try to apply the coregistration as a rigid transform unless ._apply_pts_func() has been defined (which overrides the default).

Finally, if a matrix exists in self._meta["matrix"], the ._to_matrix_func() method is not needed, and .to_matrix() will simply return the stored matrix.

What this all means is that Coreg subclassing is now really easy.

For example, the ICP subclass only overrides the .__init__() and ._fit_func() methods; everything else is inherited! Similarly, NuthKaab only implements three methods: .__init__(), ._fit_func() and ._to_matrix_func(). This will hopefully make future extensions much easier and maintenance far simpler. Finally, it's less code, so that's great.
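
To illustrate, a minimal hypothetical subclass under the new scheme (the exact _fit_func signature is assumed here, not copied from the PR):

import numpy as np
import xdem


class MedianShift(xdem.coreg.Coreg):
    """Toy coregistration that estimates only a vertical shift."""

    def _fit_func(self, ref_dem, tba_dem, transform=None, weights=None):
        # Store the result as a 4x4 matrix in the metadata; .apply(),
        # .apply_pts() and .to_matrix() are then all inherited.
        matrix = np.diag(np.ones(4))
        matrix[2, 3] = np.nanmedian(ref_dem - tba_dem)
        self._meta["matrix"] = matrix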

erikmannerfelt added a commit to erikmannerfelt/xdem that referenced this pull request Apr 19, 2021
rhugonnet (Member):

What this all means is that Coreg subclassing is now really easy.

Really neat. When a change to the code structure means less code, less duplication, and the same results, I guess it means it's the right one 😉
Good job!

rhugonnet (Member) previously approved these changes on Apr 20, 2021:

All good! Seeing you moving forward so much, I can't wait to get back to xdem! Everything else has been so time-consuming recently...

We can just open "suggestion" issues for my unresolved comments: 1/ the NaN dilation issue and 2/ applying the rigid transform in one go.
On 2/, is there really no such method provided in point cloud libraries for rigid transforms? That seems surprising to me... or maybe I missed part of the problem! I'm not too familiar with opencv and such.
I was looking around a bit and stumbled upon this: https://github.com/nghiaho12/rigid_transform_3D
Would this be useful?

erikmannerfelt (Contributor Author):


Thanks for the suggestions, @rhugonnet! I can definitely add 1/ (the NaN dilation) as an issue. Regarding 2/ (applying the transform in one go) and your suggested package: the problem is that 3D gridding takes a lot of time, which is why apply_matrix() only does 2D gridding and then a value correction. The script you link would need a re-gridding afterwards.

xdem/coreg.py:
# Check if the matrix only contains a Z correction. In that case, only shift the DEM values by the bias.
empty_matrix = np.diag(np.ones(4, float))
matrix_diff = matrix - empty_matrix
if abs(matrix_diff[matrix_diff != matrix_diff[2, 3]].mean()) < 1e-4:
Member:

Would that not cause a problem if the matrix has other elements that are equal to element [2, 3]?

Member:

Also, should it not be strictly equal to 0? The 1e-4 value is somewhat arbitrary here...
In the worst case, we're just doing extra work for nothing.

erikmannerfelt (Contributor Author):

I tried to find a syntax for "select all values except this one", but I couldn't figure out how!

Member:

Not the simplest way, but you could (make a copy,) set the unwanted value to 0 and calculate the mean.
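
A sketch of that suggestion, which also sidesteps the equal-elements problem raised above (names follow the excerpt; the example matrix is arbitrary):

import numpy as np

matrix = np.diag(np.ones(4))
matrix[2, 3] = 1.5  # example: a pure z correction

empty_matrix = np.diag(np.ones(4, float))
matrix_diff = matrix - empty_matrix

# Copy, zero out the [2, 3] element, and test the rest strictly against zero.
other_elements = matrix_diff.copy()
other_elements[2, 3] = 0.0
only_z_correction = np.abs(other_elements).max() == 0.0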

xdem/coreg.py:
centroid: Optional[tuple[float, float, float]] = None,
resampling: Union[int, str] = "cubic") -> np.ndarray:
"""
Apply a transformation matrix to a DEM.
Member:

Can you maybe explain here the different steps to get to the result? It is definitely non-trivial, and probably not very standard.

erikmannerfelt (Contributor Author):

I tried to explain in 0c9d568. Please let me know if you have any suggestions for improvement!

adehecq (Member) commented on Apr 20, 2021:

Looks very promising! Some parts are still a bit of black magic, but hopefully we can improve them later!
Some minor suggestions in the code, but the overall structure seems good!

…trix as well. Marked a spstats test as ignored since it kept randomly failing
erikmannerfelt (Contributor Author):

Thanks for your feedback, @adehecq and @rhugonnet! The function is definitely improved thanks to your suggestions. Let me know what you think of its structure whenever you have time!

adehecq (Member):

I've forgotten the details now that time has passed, but I checked the commits after my last comments and they seem reasonable. I think we can accept this as is for now, and if issues arise, we'll handle them later.


diff = np.asarray(self.ref.data.squeeze() - unrotated_dem)

if False:
Member:

Isn't there a way to have an option to include/exclude matplotlib commands from pytest?
A quick search led to this; I don't know if it is the best solution: https://pypi.org/project/pytest-plt/
