Reparameterize section #220
Conversation
We fix this by introducing splitAtParam', which returns the split halves as well as mappings from the original parameter space to the parameter spaces of the split SegTrees. section can use these mappings to find the right place to make the second split. Some work still needs to be done to make splitAtParam' behave sensibly for parameters outside the range, and we should decide whether to include splitAtParam' in the class, as it could be useful in general. Finally, we should consider whether we want to bake the mapping into the SegTree data structure so that the split parameter space behaves linearly.
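As a hedged sketch of the idea (names and types here are hypothetical and simplified; the version under discussion works on SegTree, not on bare functions), here is the split-plus-remapping scheme on a plain parametric function, assuming the split parameter lies strictly inside (0,1):

```haskell
type Param = Double

-- Split a parametric function at t, returning both halves (each
-- reparameterized over [0,1]) plus mappings from the original
-- parameter space into each half's parameter space.
-- NB: like the version under discussion, this does nothing sensible
-- for t outside (0,1); the divisions below would blow up.
splitAtParam'
  :: (Param -> a) -> Param
  -> (Param -> a, Param -> a, Param -> Param, Param -> Param)
splitAtParam' f t =
  ( \u -> f (u * t)             -- left half covers original [0,t]
  , \u -> f (t + u * (1 - t))   -- right half covers original [t,1]
  , \u -> u / t                 -- original param -> left-half param
  , \u -> (u - t) / (1 - t)     -- original param -> right-half param
  )

-- section: extract the subcurve between original parameters a and b
-- by splitting at a, then using the returned mapping to locate b in
-- the right half before splitting again.
section :: (Param -> a) -> Param -> Param -> (Param -> a)
section f a b =
  let (_, right, _, toRight) = splitAtParam' f a
      (left', _, _, _)       = splitAtParam' right (toRight b)
  in  left'
```

The point is that the second split parameter is not b itself: b has to be pushed through the remapping first, because the right half has its own [0,1] parameter space.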
Makes sense to me. Maybe it's worth a Haddock comment explaining the relationship between the parameters before and after splitting? Maybe on the instance for …
I'm putting this here as it is convenient at the moment; it may be a while before I can come back to this, and I don't want to forget these points. I'm not fully convinced of the points I'm making here either, but I think I need to talk out loud about them.

Some choices we could make for parameterization are: allow bounds outside [0,1], require bounds of exactly [0,1], and/or make parameters linearly related to distance along the curve. While the last property is nice and useful for the user, I would argue that we need parameterizations that match the "native" form of each sort of segment. These are not so much for the user as for the algorithm designer. It would be devastating to performance and accuracy if you had to do the sort of things we do to approximate length along the curve for all the kinds of parametric operations we want on trails.

Given that we need this native view of our segments, we should optimize for use in algorithms. This is where I lean toward having the [0,1] bounds. Otherwise, you are constantly reaching for the lower and upper bound functions to adjust the step of the algorithm rather than just working with a normalized subcurve. But normalizing does throw away some information in this case.
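To make the performance point concrete, here is a hypothetical Haskell sketch (not library code) contrasting native de Casteljau evaluation of a cubic Bézier, a handful of lerps per sample, with an approximate arc-length parameterization, which needs a whole table of native evaluations just to answer a single query:

```haskell
type P2 = (Double, Double)

lerp :: Double -> P2 -> P2 -> P2
lerp t (x0, y0) (x1, y1) = (x0 + t * (x1 - x0), y0 + t * (y1 - y0))

-- Native parameter: de Casteljau evaluation, O(1) per sample.
bezier :: P2 -> P2 -> P2 -> P2 -> Double -> P2
bezier a b c d t =
  let ab = lerp t a b; bc = lerp t b c; cd = lerp t c d
  in  lerp t (lerp t ab bc) (lerp t bc cd)

-- Arc-length parameter: find the native parameter t at which the
-- fraction s of total length has been traversed, by building and
-- scanning a cumulative chord-length table of n native evaluations.
paramAtLength :: Int -> (Double -> P2) -> Double -> Double
paramAtLength n f s =
  let ts     = [fromIntegral i / fromIntegral n | i <- [0 .. n]]
      dist (x0, y0) (x1, y1) = sqrt ((x1 - x0) ^ 2 + (y1 - y0) ^ 2)
      lens   = scanl1 (+) (zipWith dist (map f ts) (map f (tail ts)))
      target = s * last lens
  in  case [t | (t, l) <- zip (tail ts) lens, l >= target] of
        (t : _) -> t
        []      -> 1
```

If every algorithm had to go through something like paramAtLength instead of the segment's native parameter, each step would cost n curve evaluations instead of one.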
If you have four segments and you split in the middle of segment …

Another phenomenon I have seen in the trails I tend to work with is that shorter segments tend to have a smaller radius of curvature, while longer segments have large radii. This fits nicely with normalizing, as it focuses samples (when sampling evenly) on the detailed features.

None of this removes the need for the mapping from the original parameter space to the subcurves' parameter spaces. I feel like there might be a cleaner way to do it than including … Anyway, discuss.
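A tiny hypothetical illustration of why normalized [0,1] segments pair well with that phenomenon: sampling each segment's parameter evenly gives every segment the same number of samples, so shorter (more curved) segments automatically get denser coverage per unit length:

```haskell
-- Sample each segment's normalized [0,1] parameter at k+1 evenly
-- spaced points. A segment of length 1 and one of length 10 both get
-- k+1 samples, so the short one is sampled ten times as densely.
sampleEvenly :: Int -> [Double -> a] -> [a]
sampleEvenly k segs =
  [ seg (fromIntegral i / fromIntegral k) | seg <- segs, i <- [0 .. k] ]
```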
I think this should be merged as it does the minimal thing to fix …
This is a minimal way to fix #217, putting this here for discussion.