Array-valued discrete distribution #1146
Conversation
Codecov Report
```diff
@@            Coverage Diff             @@
##           master    #1146      +/-   ##
==========================================
+ Coverage   74.07%   74.09%   +0.01%
==========================================
  Files          70       70
  Lines       10809    10823      +14
==========================================
+ Hits         8007     8019      +12
- Misses       2802     2804       +2
```
Continue to review full report at Codecov.
Here is an example of why I would find this useful. Consider the portfolio model with sticky "shares". In that model, the end-of-period variables that you need to figure out your state vector next period are `aNrm` and `Share`. Both are, or can be, choices, so you have big tiled arrays with all combinations of your grids, `aNrm_tiled` and `ShareNext` (HARK/HARK/ConsumptionSaving/ConsPortfolioModel.py, lines 679 to 682 at commit 2155175).

What you then want to do is compute the expectation of various functions (`v`, `dvdm`, `dvds`) of next period's state `(m, ShareNext)`, conditional on every combination of this period's `(aNrm, Share)`. One way to do this, which would move us in the direction of using the same transition equations/functions in the solution and simulation of the model, would be to compute the distribution of `(m, ShareNext)` conditional on every `(a, Share)` using something like `state_next_dstn = dist_of_function(dstn=ShockDstn, func=lambda shocks: transition(shocks, a, Share))`, and then use `calc_expectations` on `state_next_dstn` to find the expectation of `v`, `dvdm`, `dvds`.

The issue with this approach is that we would need one `DiscreteDistribution` object representing the distribution of `(m, ShareNext)` for *every* combination of `(a, Share)`, and would need to take the expectations one by one.

An alternative would be a single `DiscreteDistribution` object whose realizations `X` are matrices of the same size as the "tiled" arrays. Then `X[i,j]` would be interpreted as "next period's state conditional on the fact that you chose the i-th `a` and j-th `share`, and whatever shock was realized." This would be a much more practical object to handle and take expectations over. The issue is that the current discrete distributions do not allow matrix-valued random variables, and that is what this PR wants to change.
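A toy sketch of that single matrix-valued object, using plain numpy rather than HARK classes (all numbers and names here are hypothetical, for illustration only):

```python
import numpy as np

# Hypothetical next-period states, tiled over an (n_a, n_share) grid, with the
# last axis indexing the shock realization ("nature").
n_a, n_share = 3, 2
m_next_good = np.full((n_a, n_share), 1.2)  # m next period under a good shock
m_next_bad = np.full((n_a, n_share), 0.8)   # m next period under a bad shock
X = np.stack([m_next_good, m_next_bad], axis=-1)  # shape (3, 2, 2)
pmf = np.array([0.5, 0.5])                        # shock probabilities

# X[i, j, :] is "next period's state given the i-th a and j-th share".
# Taking the expectation collapses the last axis, leaving the tiled grid shape:
E_m_next = X @ pmf  # weighted sum over the last axis; shape (3, 2)
assert E_m_next.shape == (n_a, n_share)
```

With this layout a whole family of conditional expectations is computed in one vectorized operation, instead of looping over one `DiscreteDistribution` per grid point.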
I see what you want to do here, but I don't think it's the right approach
to solving it. Before I stepped back, I wrote a version of
calc_expectations that would do more or less what you wanted in one step.
It would compute E[f(X,Y)] where f is passed as an argument, X represents
(array-valued) fixed inputs, and Y is a (possibly multivariate)
distribution object; the output would be the same shape as X.
I don't know if it was ever merged in, but I remember that Sebastian didn't
like it. Part of the dispute was over the order of arguments-- he thought
the function should be an optional argument, when the whole point of it is
that you're taking the expectation *of an expression* over some
distribution.
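A rough sketch of that design, with hypothetical names (this is not HARK's actual `calc_expectation` signature, just the idea of passing fixed array-valued inputs alongside a distribution):

```python
import numpy as np

def calc_expectation_sketch(f, dstn_vals, dstn_pmf, *fixed):
    """E[f(*fixed, y)]: y is drawn from a discrete distribution, while the
    fixed (possibly array-valued) inputs are broadcast through unchanged."""
    return sum(p * f(*fixed, y) for y, p in zip(dstn_vals, dstn_pmf))

# E[(a + theta)^2] over a two-point shock theta, for every a on a grid:
a_grid = np.array([0.0, 1.0, 2.0])
Ev = calc_expectation_sketch(
    lambda a, theta: (a + theta) ** 2,  # the expression being averaged
    [0.5, 1.5],                          # shock realizations
    [0.5, 0.5],                          # shock probabilities
    a_grid,
)
# Ev has the same shape as a_grid: array([1.25, 4.25, 9.25])
```

The point of the argument order is that the expression comes first, because the expectation is being taken *of* it over the distribution.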
This looks good!
Thanks @mnwhite. I think @sbenthall has already merged a `calc_expectations` that can be given a function as an input. If not given, it finds the expectation of the distribution itself.

What I'm going for looks something like this:

```python
state_next_dstn = dist_of_function(shock_dstn, lambda x: transition(x, post_state_now))
end_of_period_Ev = calc_expectation(state_next_dstn, vFunc_next)
end_of_period_Edvdm = calc_expectation(state_next_dstn, dvdmFunc_next)
end_of_period_Edvds = calc_expectation(state_next_dstn, dvdsFunc_next)
```

Yes, I *think* `calc_expectations` should do that in a combined line, at least as I had envisioned it. Stepping back from the portfolio choice model to the basic model, consider computing end-of-period value and marginal value of assets:

```python
v_func_temp = lambda psi, theta, a: (
    DiscFac * (PermGroFac * psi) ** (1.0 - CRRA)
    * vFunc_next(Rfree / (PermGroFac * psi) * a + theta)
)
dvda_func_temp = lambda psi, theta, a: (
    DiscFac * Rfree * (PermGroFac * psi) ** (-CRRA)
    * dvdmFunc_next(Rfree / (PermGroFac * psi) * a + theta)
)
end_of_period_v = calc_expectation(v_func_temp, income_shock_dstn, aNrm_now_grid)
end_of_period_dvda = calc_expectation(dvda_func_temp, income_shock_dstn, aNrm_now_grid)
```

I might be getting the order of inputs wrong, but it's roughly that. In this case (as well as the portfolio choice model), just having the distribution of future states (m_{t+1}) isn't enough to compute end-of-period (marginal) value of assets. Because (marginal) value is scaled by (a power of) permanent income growth, the shock distribution still has to be known as well when taking those expectations. That's why I made (my version of) `calc_expectation` able to take entire arrays of the function inputs that *aren't* stochastic.
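A toy numeric illustration of that point, with made-up numbers and a stand-in `vFunc_next` (the CRRA = 2 value function, with `PermGroFac` normalized to 1):

```python
import numpy as np

CRRA, DiscFac, Rfree, PermGroFac = 2.0, 0.96, 1.03, 1.0
psi_vals = np.array([0.9, 1.1])   # permanent shock realizations
pmf = np.array([0.5, 0.5])        # their probabilities
a = 1.0                           # end-of-period assets
theta = 1.0                       # transitory shock, held fixed here

vFunc_next = lambda m: -1.0 / m   # CRRA = 2 value function, as a stand-in
m_next = Rfree / (PermGroFac * psi_vals) * a + theta

# Correct: the psi-scaling stays paired with the m_next that psi generated.
Ev = DiscFac * np.sum(pmf * (PermGroFac * psi_vals) ** (1.0 - CRRA) * vFunc_next(m_next))

# Using only the distribution of m_next forces a single average scale factor,
# which gives a different answer:
Ev_wrong = (
    DiscFac
    * np.sum(pmf * (PermGroFac * psi_vals) ** (1.0 - CRRA))
    * np.sum(pmf * vFunc_next(m_next))
)
assert not np.isclose(Ev, Ev_wrong)
```

This is why the shock distribution itself, not just the induced distribution of future states, must be visible when the expectation is taken.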
|
@mnwhite you are completely right that to take the expectation one would need to work with the permanent shock too. The way I handled that in a proof of concept for the portfolio model was to make the permanent shock part of the "state" whose distribution we are finding at this stage. It is only a "state" for the purpose of taking the expectation; I'm not saying that I made it an actual state. Still, it does make things slightly slower.

In terms of speed, the specialized code that we have for each particular model will always win. But I still think the tool is valuable for things like expressing a model with a transition (or a belief about a transition) that can be swapped for another with the least pain possible. Having the option won't hurt.
That is not the sole purpose of the PR, however. Even if my transition setup is not what we want to achieve, enforcing discrete distributions' values to be an `np.ndarray` would be useful in its own right.
Base tests are passing. I still need to
This is ready for review (or further thoughts).

@sbenthall I'll review this.
```python
# numpy is weird about 1-D arrays.
dstn_array = dstn_array.T

f_query = np.apply_along_axis(func, 0, dstn_array, *args)
```
@sbenthall keep the behavior of `*args` in mind when reviewing. It has changed!
A quick search does suggest that `map` with lambda functions is quite slow. Do you think there is a better, faster way to do it? The issue I was trying to get around is that `np.apply_along_axis` requires the function's inputs to be 1-D.
Seems like list comprehensions would work better:
https://stackoverflow.com/questions/1247486/list-comprehension-vs-map
Let me run some tests before merging.
Looks like `np.apply_over_axes` is numpy's function to apply over more than one dimension:
https://numpy.org/doc/stable/reference/generated/numpy.apply_over_axes.html#numpy.apply_over_axes

Would that fit?
Thanks! That certainly looks like something I could use. The only issue I see is that it does not accept extra arguments, like `np.apply_along_axis` does. I could adapt it with a lambda function; I'm not sure if that would lose too much performance. I'll run some tests and report on Wednesday. Thanks!
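For reference, a toy sketch of the two approaches being compared here (a made-up function, not HARK code):

```python
import numpy as np

arr = np.arange(6).reshape(2, 3)

# np.apply_along_axis passes 1-D slices (here, columns) to the function,
# forwarding extra positional arguments after the array:
sums = np.apply_along_axis(lambda v, c: v.sum() + c, 0, arr, 10)

# A list comprehension does the same and is often faster for cheap functions:
sums_lc = np.array([arr[:, j].sum() + 10 for j in range(arr.shape[1])])
assert np.array_equal(sums, sums_lc)  # both are [13, 15, 17]
```

Either way the slices handed to the function are 1-D, which is exactly the restriction being discussed.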
```diff
@@ -1206,7 +1206,7 @@ def post_return_derivs(inc_shocks, b_aux, g_aux, s):

 # Define grids
 b_aux_grid = np.concatenate([np.array([0.0]), Rfree * aXtraGrid])
-g_aux_grid = np.concatenate([np.array([0.0]), max(RiskyDstn.X) * nNrmGrid])
+g_aux_grid = np.concatenate([np.array([0.0]), max(RiskyDstn.X.flatten()) * nNrmGrid])
```
It seems like a lot of extra code is now needed to flip vertical vectors/arrays into horizontal ones. Would it be possible to have the arrays oriented horizontally in the `DiscreteDistribution` object?
Possible, yes, but I still thought this would be the path of least resistance. I think other parts of HARK that deal with multivariate distributions use things like `dstn.X[n]` to find all the possible draws of the n-th dimension. This seems like an intuitive and compact code pattern that I would like to keep, and it was easier for me to conceptually treat nature as always being the last dimension.
"X" is truly a terrible name for an object property that is doing so much work. See #1051
I'm not suggesting you replace "X" with something else in this PR, but I wonder what mathematical object X is now.
It's possible that we could have multiple ways of accessing this data within a distribution, so it requires less manipulation before entering and after pulling it from the object.
```diff
 # Construct and return discrete distribution
 return DiscreteDistribution(
-    pmf, X, seed=self.RNG.randint(0, 2 ** 31 - 1, dtype="int32")
+    pmf, X.T, seed=self.RNG.randint(0, 2 ** 31 - 1, dtype="int32")
```
This also seems like something the user of `DiscreteDistribution` would rather not have to keep track of. Is the issue that a 1-dimensional array is now ambiguously either a 1-D distribution with N buckets or an N-D distribution with 1 bucket?
Yes, exactly.
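The ambiguity is easy to see with a toy array, since transposing a 1-D array is a no-op:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])

# Is x a univariate distribution with 3 atoms, or a 3-variate
# distribution with a single atom? The array alone cannot say:
assert x.T.shape == x.shape  # transposition does nothing in 1-D

# One way (a sketch, not what this PR does) is to always carry an
# explicit "nature" axis so the two cases look different:
x_univariate = x.reshape(1, 3)  # 1 variable, 3 atoms
x_degenerate = x.reshape(3, 1)  # 3 variables, 1 atom
```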
```
May be multivariate (list of arrays).
For multivariate distributions, the last dimension of X must index
"nature" or the random realization. For instance, if X.shape == (2,6,4),
the random variable has 4 possible realizations and each of them has shape (2,6).
```
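Under this convention, taking an expectation just weights along the last axis; a quick sketch with made-up numbers:

```python
import numpy as np

# X.shape == (2, 6, 4): four possible realizations, each of shape (2, 6).
rng = np.random.default_rng(0)
X = rng.normal(size=(2, 6, 4))
pmf = np.array([0.1, 0.2, 0.3, 0.4])

EX = np.einsum("ijn,n->ij", X, pmf)  # weight realizations along the last axis
assert EX.shape == (2, 6)
# Equivalent explicit sum over realizations:
assert np.allclose(EX, sum(pmf[n] * X[..., n] for n in range(4)))
```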
I believe it would be more consistent with the other downstream code if the 'nature' dimension was the first dimension, not the last.
I've checked it over. See inline comments. Summary:
The current implementation of `DiscreteDistribution` allows us to represent multivariate distributions in the sense that it can represent random variables with n-dimensional domains (vector-valued).

Something that it does not allow is representing random variables whose realizations are arrays of arbitrary dimensions. Imagine, for instance, a random variable `X` that could be `np.array([[1,2],[3,4]])` with probability 0.5 or `np.array([[0,0],[0,0]])` with probability 0.5. The realizations of `X` have shape `(2,2)`. We cannot represent `X` directly; we can in principle represent the distribution of `X.flatten()` and do the required reshapes later, but this is cumbersome.

This PR works towards allowing random variables whose realizations are arrays of arbitrary dimensions. I will explain why I think that is useful in a comment below.
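A toy sketch of the flatten-and-reshape workaround described above, in plain numpy (not HARK's API):

```python
import numpy as np

vals = [np.array([[1.0, 2.0], [3.0, 4.0]]), np.zeros((2, 2))]
pmf = np.array([0.5, 0.5])

# Represent the matrix-valued RV via its flattened realizations...
support_flat = np.stack([v.flatten() for v in vals])  # shape (2, 4)
EX_flat = pmf @ support_flat                          # expectation, flattened
# ...and reshape back to the original (2, 2) shape afterwards:
EX = EX_flat.reshape(2, 2)
assert np.allclose(EX, [[0.5, 1.0], [1.5, 2.0]])
```

Every consumer of the distribution has to remember the original shape and do this reshape, which is the bookkeeping the PR aims to remove.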