add scaled_boundingbox function #28
Conversation
I assume you would want this because you are comparing it to the scaled (raw) Int32 point coordinates as they appear in the LAS file, right? To me it would make more sense to then also have this function return Int32 coordinates, or at least Int. I see that in PointCloudRasterizers.jl this was not done, but perhaps we should. What do you think @evetion?
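For context on the convention being discussed: a real-world coordinate is recovered as stored * scale + offset, so the stored Int32 is round((real - offset) / scale). A minimal sketch with made-up header values (the variable names here are only illustrative, not LasIO API):

x_scale, x_offset = 0.001, 100_000.0                     # hypothetical header values
x_real = 100_123.456                                     # real-world coordinate
x_stored = round(Int32, (x_real - x_offset) / x_scale)   # 123456, as stored in the file
x_stored * x_scale + x_offset                            # ≈ 100123.456, recovered again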
Yep, I agree it should be Int32. I still dislike the ambiguity of the term scaled, though.
Well, we should document it carefully. Looking at PointcloudRasterizers, it can use some docs as well. Also had plans to generalize it, so it could work on …
OK, is there a smarter way to do that than wrapping everything in Int32()?

scaled_boundingbox(h::LasHeader) = (
    xmin = Int32((h.x_min - h.x_offset) / h.x_scale),
    ymin = Int32((h.y_min - h.y_offset) / h.y_scale),
    zmin = Int32((h.z_min - h.z_offset) / h.z_scale),
    xmax = Int32((h.x_max - h.x_offset) / h.x_scale),
    ymax = Int32((h.y_max - h.y_offset) / h.y_scale),
    zmax = Int32((h.z_max - h.z_offset) / h.z_scale)
)

I originally grabbed the code and naming from …
Hmm, I'm glad I'm not the only one being confused by the term scaled; I see it used in more than one way.
I went to laspy to see what language they use, but on https://laspy.readthedocs.io/en/latest/tut_background.html, the first blue box is actually interesting.

I've only ever seen one of the two meanings in the wild, so perhaps avoiding the term scaled altogether would be clearest.

@Crghilardi not really a smarter way, but this is probably a bit more robust against floating point issues:

function scaled_boundingbox(h::LasHeader)
(
xmin = round(Int32, (h.x_min - h.x_offset) / h.x_scale),
ymin = round(Int32, (h.y_min - h.y_offset) / h.y_scale),
zmin = round(Int32, (h.z_min - h.z_offset) / h.z_scale),
xmax = round(Int32, (h.x_max - h.x_offset) / h.x_scale),
ymax = round(Int32, (h.y_max - h.y_offset) / h.y_scale),
zmax = round(Int32, (h.z_max - h.z_offset) / h.z_scale)
)
end
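To see why round is more robust than a bare Int32 conversion, a contrived example (values made up, not from a real header):

x_min, x_offset, x_scale = 1.1, 1.0, 0.1
(x_min - x_offset) / x_scale                 # slightly above 1.0 due to floating point error
# Int32((x_min - x_offset) / x_scale)        # would throw InexactError
round(Int32, (x_min - x_offset) / x_scale)   # Int32(1)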
👍 Interesting to read. I am just replicating what I see used in …

EDIT: OK, it makes more sense after dinner. I was thinking of one thing but saying the opposite. Also, my experience is the same as yours, in that I have only ever seen the one meaning. Does it make more sense to not have this function at all and re-work the … instead?
Ha ok, I asked for clarification on the use of the term scaled: …
I think so. Algorithms can of course use stored integer coordinates internally if that makes a significant performance difference. But generally it's probably best to expose real coordinates to most users. The stored integers are an implementation detail of the LAS specification, not a general feature of point clouds.
OK cool, that's a good idea to confirm it at the source! What (if anything) needs to be updated to move toward the "only expose real world coords to users" idea? I also saw your comment on the Slack channel (copy-pasted below):
Are there any known issues with calculating on the float values compared to the integers at this point?
This functionality is in essence an AffineMap, which could be interesting for speed and readability?

using CoordinateTransformations
using LinearAlgebra

# stored (integer) coordinates -> real-world coordinates: apply the scale first, then the offset
f = Translation(x_offset, y_offset, z_offset) ∘ LinearMap(Diagonal([x_scale_factor, y_scale_factor, z_scale_factor]))
unscaled_point = f(scaled_point)        # stored -> real-world
scaled_point = inv(f)(unscaled_point)   # real-world -> stored
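For illustration, a quick round trip with made-up scale and offset values (same placeholder names as above, not LasIO API):

using CoordinateTransformations, LinearAlgebra

x_scale_factor = y_scale_factor = z_scale_factor = 0.001
x_offset, y_offset, z_offset = 100_000.0, 200_000.0, 0.0
f = Translation(x_offset, y_offset, z_offset) ∘ LinearMap(Diagonal([x_scale_factor, y_scale_factor, z_scale_factor]))

stored = [123456, 234567, 1500]      # integer values as stored in the file
real = f(stored)                     # ≈ [100123.456, 200234.567, 1.5]
round.(Int32, inv(f)(real))          # back to [123456, 234567, 1500]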
Yeah, I think users should definitely have access to both the integer and float coordinates, but the floats will be less surprising to novice users and should perhaps be presented first. Those that know how the scaling works should be free to use the integers as well.

The reason I asked on Slack was that it would be great if we could get apparent real world coordinates with the same precision and footprint (Int32 is half of Float64). In the common case that the offset is 0 and the scale is 0.001 (i.e. mm), we could already use … (see lines 212 to 214 in a8625c5).
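A rough back-of-the-envelope for that mm case (assuming scale = 0.001 and offset = 0):

typemax(Int32) * 0.001            # 2.147483647e6, roughly ±2147 km of range at mm resolution
sizeof(Int32), sizeof(Float64)    # (4, 8): half the bytes per coordinate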
You have to be a little bit more careful about floating point accuracy (and, for integers, overflow). Also, some algorithms only work well on integer data, like radix sort. Some geometrical operations can also be faster if you know that your data effectively lies on a fixed grid.
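As a small illustration of the integer caveat, Int32 arithmetic wraps silently in Julia:

typemax(Int32) + Int32(1)   # -2147483648, overflow wraps around instead of erroring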
That sounds like a good plan. Where does that leave this PR and PointCloudRasterizers.jl?
That does sound great! Do you have a sense of whether that is possible yet?
Since this may be a useful function in some cases, why not just merge it, but not export it? See 0e82f07. It would be nicer to merge this in the …
Not sure, haven't critically looked at it yet. But best to discuss in that repo if something needs to be done.
Nope :) I mean it should be possible, but we'd need to do a bunch of benchmarking to see how it compares. Perhaps it's only viable when tagged on a special array type, such that you essentially have a struct with a …
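One way to picture that idea, as a purely hypothetical sketch (the ScaledCoords name and fields are made up, not an existing API): keep the raw Int32 values plus scale and offset, and only materialize Float64 coordinates on access.

struct ScaledCoords <: AbstractVector{Float64}
    raw::Vector{Int32}   # values as stored in the LAS file
    scale::Float64
    offset::Float64
end

Base.size(c::ScaledCoords) = size(c.raw)
Base.getindex(c::ScaledCoords, i::Int) = muladd(c.raw[i], c.scale, c.offset)

xs = ScaledCoords(Int32[123456, 234567], 0.001, 100_000.0)
xs[1]   # 100123.456, while the backing data stays Int32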
OK 👍
This adds a scaled_boundingbox function as discussed here.
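For reference, intended usage would look roughly like this (the file name is a placeholder; LasIO's FileIO-based load is assumed to return the header and the points):

using FileIO, LasIO

header, points = load("example.las")   # placeholder file name
bbox = scaled_boundingbox(header)      # NamedTuple of Int32 extrema in stored (integer) units
bbox.xmin, bbox.xmax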