HQ exponential decay discrepancy #1104
After talking with @phargogh about this, I think it's worth having a chat with @lmandle and @newtpatrol to get a sense of what we expect the model to be doing for the decay of threats over distance. The UG currently describes threats being decayed according to a Euclidean distance transform, but the implementation actually uses convolutions with a weighted distance-decay kernel.
Added Euclidean distance transform methods for filtering (decaying) threat rasters to test against the convolution implementation. Also tested an exponential decay kernel with the 2.99 constant as presented in the User's Guide.
Introducing a new location in the repository to track major design decisions. ADR stands for Architecture Decision Record, an emerging standard that some software projects use to record major decisions, most commonly after an RFC process. Since much of our RFC process happens via Slack, Google Docs, or other channels because we interact with non-developers (our science colleagues), capturing the decision endpoint with a recap summary seemed like the most efficient process.
Removing the convolution approach for decaying threats over distance and replacing it with a simpler Euclidean distance transform approach.
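For reference, the Euclidean distance transform approach can be sketched roughly as follows. This is a minimal illustration, not the model's actual code: the raster, `max_dist`, and the clipping behavior beyond `max_dist` are all hypothetical choices for the example.

```python
import numpy as np
from scipy import ndimage

# Hypothetical binary threat raster: 1 where the threat is present.
threat = np.zeros((5, 5))
threat[2, 2] = 1

# Euclidean distance (in pixels) from each cell to the nearest threat cell.
# distance_transform_edt measures distance to the nearest zero, so we pass
# the inverted mask (nonzero everywhere except threat pixels).
dist = ndimage.distance_transform_edt(threat == 0)

max_dist = 3.0  # hypothetical maximum effective distance, in pixels

# Exponential decay using the User's Guide formulation with the 2.99
# constant; cells beyond max_dist are assumed to receive zero impact.
decay = np.where(dist <= max_dist, np.exp(-(2.99 / max_dist) * dist), 0.0)
```

Because the decay is computed directly from the distance transform rather than through a convolution, each cell's impact depends only on its distance to the nearest threat pixel, not on how many threat pixels fall within the kernel window.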
After talking things over with the science team (Lisa, Stacie, Jade), we decided to switch to a simpler Euclidean distance implementation and to update the User's Guide to explain why the 2.99 constant is being used. The convolution implementation has been around since InVEST 3.3.0 but was never reflected in the User's Guide. When I considered writing the convolution implementation into the User's Guide, I could not convince myself of, or defend, the approach as currently implemented. There could be a reason (or bug) for why the filtered threats produce non-intuitive degradation and quality outputs. As for the 2.99 constant, Lisa noted:
Looks like this was fixed in #1159.
James mentioned that there is a discrepancy in how exponential decay is handled between the source code and the User's Guide.
What the source code is doing:

```
exp(-dist / max_dist)
```

What the UG says it should be doing:

```
exp(-(2.99 / max_dist) * dist)
```
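The practical difference is easy to see numerically. The sketch below (hypothetical `max_dist` and sample distances, chosen only for illustration) evaluates both formulas: at `dist == max_dist`, the source formula still retains exp(-1) ≈ 0.368 of the original impact, while the UG formula has decayed to exp(-2.99) ≈ 0.050, i.e. roughly 5%, which is presumably the rationale for the 2.99 constant.

```python
import numpy as np

max_dist = 1000.0  # hypothetical maximum threat distance, e.g. meters
dist = np.array([0.0, 250.0, 500.0, 1000.0])

# What the source code computes:
source_decay = np.exp(-dist / max_dist)

# What the User's Guide specifies; the 2.99 constant drives the decay to
# about 5% of the original impact at max_dist, since exp(-2.99) ≈ 0.050:
ug_decay = np.exp(-(2.99 / max_dist) * dist)
```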
This came up from natcap/pygeoprocessing#268 (comment).