generalize configuration of income process #673
Incidentally, I have long felt that our default should be using a lognormal truncated at some (large) number of standard deviations (like, 5). This would ameliorate (though not fix) the fact that as we increase the number of gridpoints, the minimum possible realization moves substantially. This matters because a good bit of the logic of the model depends on the value of the minimum possible realization.

An even better approach would be to institute an infinitesimal probability of a "minimum income event" corresponding exactly to the truncation bound of the distribution. That would effectively eliminate the problem because the minimum possible income would remain the same as long as the truncation threshold remained the same, regardless of how many (or few) points were used in the approximation.
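To make the proposal concrete, here is a minimal sketch in plain numpy/scipy (not HARK's actual API; the function name and parameters are illustrative): an equiprobable discretization of a lognormal truncated at ±`n_sd` standard deviations in logs, with an infinitesimal point mass at the lower truncation bound.

```python
import numpy as np
from scipy.stats import norm

def trunc_lognormal_with_floor(mu, sigma, n_pts, n_sd=5.0, p_floor=1e-12):
    """Equiprobable discretization of a lognormal whose log is truncated at
    mu +/- n_sd*sigma, plus a tiny point mass at the lower truncation bound."""
    # CDF mass of the standard normal inside the truncation bounds
    lo, hi = norm.cdf(-n_sd), norm.cdf(n_sd)
    edges = np.linspace(lo, hi, n_pts + 1)  # equiprobable CDF cut points
    a, b = norm.ppf(edges[:-1]), norm.ppf(edges[1:])
    # Exact conditional mean of the lognormal on each cell:
    # E[exp(mu + sigma*Z) | a < Z < b]
    atoms = (np.exp(mu + sigma**2 / 2)
             * (norm.cdf(b - sigma) - norm.cdf(a - sigma)) / np.diff(edges))
    probs = np.full(n_pts, (1.0 - p_floor) / n_pts)
    # The "minimum income event": infinitesimal mass exactly at the
    # truncation bound, so the minimum of the support stays fixed no
    # matter how many points are used in the approximation.
    floor_val = np.exp(mu - n_sd * sigma)
    return np.append(floor_val, atoms), np.append(p_floor, probs)
```

Because the floor atom sits exactly at the truncation bound, the minimum possible realization depends only on the truncation threshold, not on the number of gridpoints.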
I strongly, strongly endorse this. I think we've had this discussion?

But with Sebastian's in-progress refactoring of our distributions as objects (as they should have been all along), our UnivariateDistribution class can simply have attributes for its minimum and maximum value. Right now, HARK would correctly interpret the addition of a point mass with machine-epsilon probability at zero, but it looks a little funny to do it that way. It is much clearer to be able to ask for TranShkDstn.Xmin (I know we're moving away from X).
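As a sketch of what that could look like (the class and attribute names here are hypothetical, not HARK's actual UnivariateDistribution API):

```python
import numpy as np

class BoundedDiscreteDistribution:
    """A discrete distribution that carries its support bounds explicitly."""
    def __init__(self, atoms, pmf, x_min=None, x_max=None):
        self.atoms = np.asarray(atoms)
        self.pmf = np.asarray(pmf)
        # Bounds default to the discrete support, but a truncated continuous
        # parent can pass its truncation bounds so they stay fixed as the
        # number of approximation points changes.
        self.x_min = self.atoms.min() if x_min is None else x_min
        self.x_max = self.atoms.max() if x_max is None else x_max

# Consumer code can then ask for the bound directly instead of reading it
# off a machine-epsilon point mass at zero:
# worst_case = TranShkDstn.x_min
```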
This issue has become complicated because it's not clear whether it refers to the original scope or to the one CDC introduced in his comment. It looks like #804 is a more fleshed-out version of the issues raised in that comment, so this ticket now refers only to the original item.
Now redundant with #620
Most models now inherit the lognormal income process from ConsIndShock:

- HARK/HARK/ConsumptionSaving/ConsIndShockModel.py, line 2120 (at commit 7dd38fc)
- HARK/HARK/ConsumptionSaving/ConsIndShockModel.py, line 2581 (at commit 7dd38fc)
But some models make subtle changes to the process, which requires overriding the model method:
https://github.com/econ-ark/HARK/blob/0.10.6/HARK/cstwMPC/cstwMPC.py#L51-L74
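Schematically, the pattern being criticized looks roughly like this (all names illustrative; the real code is at the link above): each variant model must rewrite the whole income-process method just to change one detail.

```python
class BaseConsumerSketch:
    def update_income_process(self):
        # The parent builds the full lognormal income process here.
        self.TranShkDstn = "lognormal approximation built here"

class VariantConsumerSketch(BaseConsumerSketch):
    def update_income_process(self):
        # Duplicates the parent's construction logic just to alter one
        # distributional detail, e.g. adding an unemployment outcome.
        self.TranShkDstn = "lognormal approximation, plus one tweak"
```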
It may be possible to design this so that the income process distributions are defined when a model is parameterized, which would lead to a more consistent API. See #620. A sketch of that design follows.
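In this sketch all names are hypothetical (`IncShkDstnConstructor` is not an existing HARK parameter): the income process is supplied as a constructor in the parameter dictionary and built when the model is parameterized, so variant models swap a parameter instead of overriding a model method.

```python
import numpy as np

def default_income_dstn(sigma, n_pts):
    # Placeholder constructor: equiprobable grid for a mean-one lognormal.
    # (The truncation/floor refinements sketched earlier would slot in here.)
    z = np.linspace(-2.0, 2.0, n_pts)
    return np.exp(sigma * z - sigma**2 / 2), np.full(n_pts, 1.0 / n_pts)

class SketchConsumerType:
    def __init__(self, **params):
        self.params = params
        # The distribution is built at parameterization time, giving every
        # model a consistent API for configuring its income process.
        ctor = params.get("IncShkDstnConstructor", default_income_dstn)
        self.TranShkDstn = ctor(params["TranShkStd"], params["TranShkCount"])

# A variant model changes its income process by passing a different
# constructor, not by subclassing and overriding a method:
agent = SketchConsumerType(TranShkStd=0.1, TranShkCount=7)
```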