cstwMPCagent reinvented into HARK? #669
It doesn't look like the cstwMPC code would run at all. This looks wildly
out of date, with references to variables that haven't been used since 2015.
|
Is this repo produced from https://github.com/llorracc/cstwMPC/ ? @llorracc [assuming this repo is also autogenerated] Before removing it, I tried to match the last update to HARK/cstwMPC against llorracc/cstwMPC. The last commit to llorracc/cstwMPC is April 12, 2019; I moved the changes made to HARK/cstwMPC after that date to the new repo llorracc/cstwMPC. Should I send a new PR to llorracc/cstwMPC to override all the files which used to be in HARK/cstwMPC? |
I think so, yes. The version in llorracc doesn't even use packaged HARK.
It's the first Python version of cstwMPC I wrote, from late 2015 or early
2016. That version has been almost entirely replaced.
|
Yes, the public llorracc/cstwMPC is generated from a private repo to which
I have just invited Mridul and Seb: llorracc/cstwMPC-Ur.
My guess is that the reason the files are out of date is that this was
meant to be the “archival” version that reproduces exactly what was
produced in the published paper.
Presumably subsequent changes (like using the packaged HARK) were made in
the cstwMPC directory that we have recently removed from main HARK.
Per our discussions, maybe what we should do is:
1. Restore whatever was the last version of the cstwMPC content before
it was removed.
2. Rename it to, say, BufferStockEconomy
- Do a search-and-replace for all occurrences of the string “cstwMPC”
and replace them with “BufferStock”
3. Turn it into a REMARK
and then at some point we will find the time to extract the parts of it
that are general purpose tools and put them back into HARK.
|
The old cstwMPC code that was in HARK is available in any earlier release:

I'm not sure what all the renaming accomplishes. I had been under the impression until recently that the cstwMPC REMARK already existed and was located at https://github.com/llorracc/cstwMPC Now it sounds like you are telling me that there is another layer of intrigue--that the cstwMPC I thought I knew was a mere decoy, and the real cstwMPC was hidden away in a private repo all along. Obviously, if you hide the "real code" in a private repo, nobody is going to update it. The rationale for having a generator repository is opaque to me.

I recommend reducing the complexity of the number of repositories and classes in play. Why not:
|
Ok, Chris has convinced me privately that he was right, and I was wrong. I think I did not originally see that his proposal implicitly included the deprecation of the current cstwMPC REMARK code. The new REMARK will become the Single Source of Truth for this functionality. I suppose this new REMARK will go in the REMARK repository, not in a separate standalone repository. The remaining issue is the name. I gather that what makes cstwMPC agents special is their awareness of aggregate economy conditions. What if the new REMARK were called MarketAwareness? The agent type could be a MarketAwareAgentType. Just spitballing here. |
There is nothing special about cstwMPC agents. There is no new solution
code in there. There is no new model. The version(s) of the model in
cstwMPC that use aggregate shocks simply use AggShockConsumerType. The
extensions to CobbDouglasEconomy are literally just for calculating some
statistics for the cstwMPC paper. Being aware of the aggregate economy is
part of a core HARK class.
|
Thank you @mnwhite that is very clarifying. In that case, I'd argue that the HARK/cstwMPC code should be moved to HARK/examples/cstwMPC and rewritten to use AggShockConsumerType more explicitly. The extensions could be monkey-patched into the instances created in the examples. Alternatively, this rewrite could be made into a REMARK. The one tricky thing about renaming the REMARK, now that I think about it, is that the REMARK is about reproducing a paper---in this case, the cstwMPC paper. If we are trying to help people build similar models, we should be directing users to the HARK classes, not the cstwMPC code. |
Oh, I see how this works now: https://github.com/econ-ark/HARK/blob/0.10.6/HARK/cstwMPC/cstwMPC.py#L27-L34 So really we are talking about the fate of a small bit of code wrapping the library functionality. Some of that extra code--the Lorenz share stuff--can be rewritten to use the library code. What is this code for? It is not up to date with HARK master and is breaking for downstream users. Should they remove this code block and just use the inherited method? https://github.com/econ-ark/HARK/blob/0.10.6/HARK/cstwMPC/cstwMPC.py#L51-L74 |
cstwMPC uses AggShockConsumerType very explicitly, right at the top of its
main file. It makes a custom subclass from IndShockConsumerType or
AggShockConsumerType because there are different versions of the model; the
same small modifications need to be made either way. But looking at it
now, it looks like there are some outdated names; kGrid doesn't exist as an
attribute anymore.
|
Yeah, just delete that entire code block. It was there because there's a
weird quirk in how the model was written on paper, which doesn't *exactly*
fit with how the model works in HARK. It's an extremely minor difference
in how taxes to fund unemployment benefits are calculated.
|
That is interesting. This is a nice use case for exploring how we might make HARK more extensible to support minor model differences with less custom code. |
Yeah. In this case, we would want more customizability in the income
process. What you see here is a patchwork of several legacy systems. It
is bad.
|
Here's the downstream user that Seb mentioned :) |
@frankovici thank you! As you can see from this thread, the code is currently in flux. |
I disagree with this for two reasons:
1) It's project-specific to cstwMPC
2) Even if it weren't, the method used in cstwMPC was extremely slow and
inefficient, and there is a much better way of accomplishing the same
thing. This isn't something we should want other people to use.
On Thu, May 14, 2020, Sebastian Benthall wrote:
> @llorracc says that cstwMPC.findLorenzDistanceAtTargetKY and cstwMPC.getKYratioDifference should be in HARK.
|
It is not at all specific to cstwMPC. Lots of people are now estimating the degree of heterogeneity necessary to achieve a given dispersion of assets. Dirk Krueger's handbook of macro paper is one example. The fiscal policy paper Edmund and I are working on with Norwegians is another. It is a generically useful thing to be able to do. If there's a better way to do it, that's fine; but I do not want to let the best be the enemy of the adequate here. And I want to have the FriedmanBufferStockEconomy or whatever we want to call it be a core tool in HARK going forward: One in which you find the distribution of parameters such that you match a target (like the wealth distribution). |
Can you explain what you mean by the FriedmanBufferStockEconomy? I think
of that as a model in which there are permanent and transitory aggregate
shocks, permanent and transitory idiosyncratic shocks, and the interest and
wage rates determined as the marginal product of capital and labor (i.e.
competitive factor markets). That's in HARK already as
AggShockConsumerType and CobbDouglasEconomy, and its extension
AggShockMarkovConsumerType and CobbDouglasMarkovEconomy.
|
The crucial extra part that is NOT in there is the part that makes it match Friedman (1963): an MPC of 33 percent (or thereabouts). That is, the part that Seb proposed to add: the ability to match a Lorenz distribution of wealth to a distribution of parameters. So far as either of us can tell, that is NOT in current HARK. |
Putting aside the current implementation (which I agree with @mnwhite could be improved a lot), I want to be clear on just what distributeParams was supposed to be doing. An example of its use is cell [8] here: https://github.com/econ-ark/DemARK/blob/0.10.5/notebooks/Uncertainty-and-the-Saving-Rate.ipynb The current code for this method is here: https://github.com/econ-ark/HARK/blob/0.10.6/HARK/cstwMPC/cstwMPC.py#L240 If I understand correctly, the point of this code is to be able to say:
- For a given model, for a given agent parameter, and a given mathematical distribution...
- ... make it so that the agents, collectively, have the parameter distributed accordingly.

In the DemARK example, the parameter that is distributed over is the discount factor (not the wealth, though I understand why these are connected). I agree with @mnwhite that the current implementation, as a method on a Market subclass, is quite odd. It seems like it would be cleaner to have this be a way you could choose to parameterize a model. So, rather than assigning a number to DiscFac in the parameters dictionary as is done here: https://github.com/econ-ark/HARK/blob/master/HARK/ConsumptionSaving/ConsIndShockModel.py#L1585 the user could assign a distribution to that parameter:

'DiscFac' : Uniform(mu=1.0065863855906343, sigma=0.0019501105739768)

Then each agent could sample from this distribution to get their discount factor. (Or you could try to use the 'exact match' mechanic here, I suppose). |
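A minimal sketch of this proposal, assuming a made-up `Uniform` class with a `discretize` method and an `expand_types` helper (HARK's actual distribution API differs; these names are purely for exposition):

```python
import numpy as np

class Uniform:
    """Illustrative stand-in for a distribution-valued parameter (not HARK's class)."""
    def __init__(self, mu, sigma):
        self.mu, self.sigma = mu, sigma   # center and half-width

    def discretize(self, n):
        """Return n equiprobable points approximating U(mu - sigma, mu + sigma)."""
        probs = (np.arange(n) + 0.5) / n  # midpoints of n equal-mass bins
        return self.mu + self.sigma * (2.0 * probs - 1.0)

# The proposed usage: a distribution, not a number, in the parameter dict.
base_params = {
    'CRRA': 2.0,
    'DiscFac': Uniform(mu=1.0065863855906343, sigma=0.0019501105739768),
}

def expand_types(params, dist_param, n_types):
    """Expand one dict with a distributed parameter into n per-type dicts."""
    points = params[dist_param].discretize(n_types)
    return [{**params, dist_param: float(p)} for p in points]

type_dicts = expand_types(base_params, 'DiscFac', 7)
```

Each resulting dict could then be used to construct one agent type instance, which connects to the ex ante heterogeneity point discussed further down the thread.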
The distribution of the parameter is at the AgentType level, not the agent
level. That's why it's a discretized distribution: there are 7 types of
agents with these 7 discount factors, approximating a uniform distribution
over a given range. For each discount factor in the model, a microeconomic
problem needs to be solved by backward iteration. I actually don't think
it's that weird to have this as a method in a Market subclass, but we can
change it.
In my comments about being slow, I was referring to our numeric method for
fitting the Lorenz curve while maintaining an *exact* match for the
aggregate capital to income ratio. The function tries to minimize the
distance between simulated and actual Lorenz curves by manipulating *one*
parameter (nabla), the half-width of the uniform distribution. For any
proposed nabla, it repeatedly solves and simulates the models for different
values of the center of the parameter distribution (beta grave, in our
notation) to find the one that matches the K/Y ratio *exactly*. It then
computes the "Lorenz distance" at that K/Y-matching parameter value and
returns it.
This is extremely inefficient. The model is solved and simulated (and
results computed) ~10-15 times for each nabla that's considered. In the
closed economy variants, the economy needs to be solved and simulated ~6-8
times for each of those 10-15 beta-graves that are considered, as it also
needs to find the general equilibrium using our Krusell-Smith-like
method. All for the sake of saying the model only has one parameter,
because the other would-be free parameter governs one moment, which is
matched exactly.
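The nested structure described above can be sketched with toy surrogate functions standing in for the expensive solve-and-simulate steps (none of these functions are from cstwMPC; only the loop structure is the point, with `brentq` playing the role of the inner K/Y-matching search):

```python
from scipy.optimize import brentq, minimize_scalar

# Toy surrogates: in cstwMPC, each call here would solve and simulate
# the full model, which is what makes the nesting so expensive.
def simulated_KY(center, nabla):
    return 8.0 + 20.0 * (center - 0.95) - 2.0 * nabla

def lorenz_distance(center, nabla):
    return (nabla - 0.15) ** 2 + 0.01 * (center - 0.96) ** 2

KY_target = 10.26

def objective(nabla):
    # Inner search: for this nabla, find the distribution center
    # (beta grave) that matches the K/Y ratio exactly.
    center = brentq(lambda c: simulated_KY(c, nabla) - KY_target, 0.80, 1.30)
    # Only then evaluate the Lorenz fit at that K/Y-matching center.
    return lorenz_distance(center, nabla)

# Outer search: one-dimensional minimization over nabla alone.
result = minimize_scalar(objective, bounds=(0.0, 0.5), method='bounded')
```

Every evaluation of `objective` triggers a full inner root-find, which is the ~10-15x multiplier described above.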
That's (essentially) mathematically equivalent to estimating a two
parameter model by the simulated method of moments while putting infinite
weight on fitting the aggregate K/Y ratio relative to fitting the Lorenz
curve. Any minimizing method would refuse to make progress in the nabla
direction unless it was absolutely sure that it wasn't hurting its fit of
K/Y. It's an extreme version of the Rosenbrock banana function
<https://en.wikipedia.org/wiki/Rosenbrock_function>: too much weight on one
part of the objective function relative to others. The obvious solution
with Rosenbrock is to turn down that weight (the $b$ parameter in the Wiki
article), minimize, turn it up, and repeat.
We can do the same thing in cstwMPC. Rather than demanding that K/Y ==
10.260000, just make it a target moment with the same weight as each of the
Lorenz points, and estimate it by Nelder-Mead (or a better ND minimizer).
Then turn up the weight on the K/Y moment to 5x that of the Lorenz points
and estimate again, using the original estimate as a starting point. Then
do it again for 25x, 125x, 625x. Or go by powers of 10 and do 1, 10, 100,
1000. I did this a while back.
Putting 1000x more weight on K/Y relative to the Lorenz points will leave
you with a final simulated K/Y of 10.260134 (or whatever) rather than
10.260000... but the Mathematica code (on which the Python had to be based)
only gets you to 10.258 or so. So we say we put infinite weight on the K/Y
ratio and use an extremely inefficient, wasteful algorithm because of
that... and then turn its tolerance so high (because it takes so long to
run) that it doesn't even match 2 decimal places.
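The weight-escalation strategy can be sketched as follows, again with toy surrogates for the simulated moments rather than the real model (the moment functions and targets here are invented for illustration):

```python
import numpy as np
from scipy.optimize import minimize

# Toy surrogates for simulated moments (the real versions would
# re-solve and re-simulate the model at each parameter guess).
def sim_KY(params):
    center, nabla = params
    return 8.0 + 20.0 * (center - 0.95) - 2.0 * nabla

def sim_lorenz(params):
    center, nabla = params
    return np.array([0.03, 0.10, 0.25]) + nabla * np.array([0.5, 0.8, 1.0])

KY_data = 10.26
lorenz_data = np.array([0.10, 0.21, 0.40])

def smm_objective(params, ky_weight):
    err_ky = sim_KY(params) - KY_data
    err_lorenz = sim_lorenz(params) - lorenz_data
    return ky_weight * err_ky ** 2 + np.sum(err_lorenz ** 2)

# Escalate the weight on the K/Y moment (1x, 10x, 100x, 1000x),
# warm-starting each estimate from the previous one.
x = np.array([0.96, 0.10])
for w in [1.0, 10.0, 100.0, 1000.0]:
    x = minimize(smm_objective, x, args=(w,), method='Nelder-Mead').x
```

At the final weight, the K/Y moment is matched nearly exactly while the Lorenz fit degrades only marginally, which is the tradeoff argued for above.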
|
Oh, I see. A Market has a list of AgentTypes. https://github.com/econ-ark/HARK/blob/master/HARK/core.py#L851 ... In that case, why not just move the distributeParams method in as-is? [I would change the 'exec' statement to something else, but that's minor] |
Also, since we now have a way of specifying a discretized distribution object, it would be easy enough to have that passed in directly. |
We could, but it should probably be generalized, or at least improved. As
is, it can only handle one parameter... or if you call it twice, the
agent types will vary on those parameters collinearly. Maybe that's what the
user meant, maybe not.
|
I hesitate to say this, but I think a more general treatment of this suggests a deeper consideration of the HARK architecture that goes well beyond the scope of bringing in the features @llorracc wants in the short term. For example, there seems to be some ambiguity in the design, where it's possible to simulate many agents with different levels of wealth with a single AgentType instance, but not many agents with different discount factors. If I'm reading this right, it's impossible for HARK to simulate heterogeneous agents unless they interact through a market. But I gather that @llorracc is becoming interested in ergodic distributions with heterogeneous agents in models without market interactions. And even if the only "interesting" cases do involve market mechanisms, being able to do such a simple (but still heterogeneous) simulation cleanly would be useful as a test case. |
I've made #692 to make this particular aspect of the issue more concrete. |
We can move it as is and worry about expanding later.
As for heterogeneity, the distinction here is between ex ante vs ex post
heterogeneity. An instance of an AgentType subclass represents a
collection of ex ante homogeneous agents-- they are all of the same
"type". They share the exact same preference parameters (including their
discount factor), face the same distribution of risks, and experience the
same transitions between states (conditional on controls and shocks). If
they started in the same state and were given identical shock sequences,
two agents in the same AgentType instance would behave identically. They
are ex ante homogeneous, but end up ex post heterogeneous because they
actually get *different* idiosyncratic shock draws over time.
Ex ante heterogeneity in HARK is captured by having different AgentType
instances in the same setting. Maybe they share the same class, maybe they
don't, but *something* about the agents is different before anything
"happens" in the model: they have different preferences, or a different
concept of the world they live in, or are an entirely different kind of
agent (a worker vs a firm vs a bank, say). Ex post heterogeneity just
requires setting the AgentCount attribute in any AgentType instance to be
greater than 1. That's it.
TLDR: You can't simulate agents with different discount factors in the same
AgentType instance because that's a contradiction of the terminology.
You can simulate ex ante heterogeneous agents within a single AgentType
instance; that's the basic use case, in fact. The concept of an ergodic
distribution for wealth definitely belongs at the AgentType level;
similarly, calculating the ergodic distribution for discrete variables like
age (t_age and t_cycle in HARK notation) should live in AgentType
subclasses. The latter is a much, much easier problem mathematically.
Like, not a problem.
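The distinction can be illustrated with a toy stand-in (this `AgentTypeSketch` class is invented for exposition, not a HARK class): one instance holds ex ante identical agents whose wealth disperses ex post through idiosyncratic shock draws, while ex ante heterogeneity comes from instantiating several types with different discount factors.

```python
import numpy as np

class AgentTypeSketch:
    """Toy sketch: one instance = many ex ante identical agents."""
    def __init__(self, DiscFac, AgentCount, seed=0):
        self.DiscFac = DiscFac          # shared ex ante parameter
        self.AgentCount = AgentCount    # ex post heterogeneity comes from shocks
        self.rng = np.random.default_rng(seed)
        self.wealth = np.ones(AgentCount)

    def simulate(self, periods=100):
        for _ in range(periods):
            shocks = self.rng.lognormal(mean=-0.005, sigma=0.1,
                                        size=self.AgentCount)
            # Same transition rule for every agent in the type;
            # different shock draws produce a wealth distribution.
            self.wealth = self.DiscFac * self.wealth * shocks + 0.1

# Ex ante heterogeneity: several instances with different discount factors.
types = [AgentTypeSketch(DiscFac=b, AgentCount=1000, seed=i)
         for i, b in enumerate([0.92, 0.96, 0.98])]
for t in types:
    t.simulate()
```

Within each instance the agents end up with a nondegenerate wealth distribution (ex post heterogeneity), and across instances the distributions differ because the discount factors do (ex ante heterogeneity).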
|
As an aside... I know it's probably baked in at this point, but having a set of classes all having "Type" at the end is a bit redundant, from a programming perspective, since a class defines a type. I'm sure you know that and this is just flagging an artifact. And I see what you are saying about ex post and ex ante heterogeneity, which is extremely clarifying:
- ex post heterogeneity is modeled by having AgentCount > 1 on a single AgentType instance
- ex ante heterogeneity is modeled by having multiple AgentType instances

If I am not mistaken, there is currently no support for simulation of the behavior of ex ante heterogeneous models without market interaction. |
No, this use of Type is intentional. The word Type is not referring to
what this kind of agent's model is, but instead what their ex ante
homogeneous "type" is. There can be many instances of
IndShockConsumerType: the type that has DiscFac=0.96, the type that has
DiscFac=0.92, etc. I intentionally did not call the superclass Agent, nor
the model classes IndShockConsumer (e.g.) because instances of these
classes represent a *type* of agents, and there might be many agents of
that type.
There is support in HARK for simulating the behavior of ex ante
heterogeneous agents without market interaction:
`MyTypes = [ThisType, ThatType, OtherType]`
`multiThreadCommands(MyTypes, ['initializeSim()','simulate()'])`
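A serial sketch of that pattern (`run_commands` and `TypeA` here are illustrative stand-ins, not HARK code; HARK's actual `multiThreadCommands` applies the command strings across types and can parallelize the work):

```python
class TypeA:
    """Stand-in for an AgentType subclass; real types have far more state."""
    def initializeSim(self):
        self.history = []

    def simulate(self):
        self.history.append('simulated')

def run_commands(agent_types, commands):
    """Apply each no-argument command string, in order, to every type
    instance. This is the serial essence of the multiThreadCommands idea."""
    for agent in agent_types:
        for cmd in commands:
            getattr(agent, cmd.rstrip('()'))()  # 'simulate()' -> agent.simulate()

my_types = [TypeA(), TypeA(), TypeA()]
run_commands(my_types, ['initializeSim()', 'simulate()'])
```

No Market object is involved: the ex ante heterogeneous types simply run side by side.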
|
Just for the record... I defer to @mnwhite on the K/Y and Lorenz curve fitting issues, which at this point are over my head. @llorracc I recommend decoupling these topics. I'll work to get distributeParams in, which will make it possible to release an updated "Uncertainty" DemARK with the discount factor distribution precomputed. Later, when @mnwhite has a more efficient curve algorithm in HARK, the DemARK can be updated with the option to use it. In the meantime, you can use the 0.10.5 version of the DemARK for your more flexible uses. Does that sound OK? |
> because instances of these classes represent a *type* of agents, and there might be many agents of that type.

OK. And am I right in thinking that the reason for this is the presumed performance benefit of simulating the agents all at once?

> There is support in HARK for simulating the behavior of ex ante heterogeneous agents without market interaction

Ah, interesting. I'll have to look more carefully at this functionality. |
There's a definite performance benefit in simulation-- you get to use array
operations rather than working on each agent independently. But
structurally it's there because all agents of the same type can be *solved*
simultaneously. If I have 10,000 agents that all have the exact same
problem and exact same parameters, I don't solve a backward induction loop
10,000 times. I solve the model *once* for all of them.
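The solve-once, simulate-vectorized point can be sketched like this (the policy function and transition rule are toy placeholders, not a real HARK model):

```python
import numpy as np

# "Solve once": a single (toy) consumption policy shared by every
# agent of the type, represented on a grid of market resources.
grid_m = np.linspace(0.0, 20.0, 200)
policy_c = 0.1 + 0.5 * grid_m          # stand-in for a solved policy function

def simulate_type(agent_count, periods, seed=0):
    rng = np.random.default_rng(seed)
    m = np.ones(agent_count)           # market resources, one entry per agent
    for _ in range(periods):
        c = np.interp(m, grid_m, policy_c)       # whole cross-section at once
        shocks = rng.lognormal(sigma=0.1, size=agent_count)
        m = 1.03 * (m - c) + shocks    # array ops, no per-agent Python loop
    return m

wealth = simulate_type(agent_count=10_000, periods=100)
```

The backward induction (here, the toy `policy_c`) is computed once; the 10,000 agents only differ in their shock draws, so the simulation is a sequence of array operations.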
|
Ok. I'm trying to get a working PR in for distributeParams and am hung up on a new issue. The original code for that method depends on a Population parameter on the market instance. This appears to be sui generis to cstwMPC, only defined here: https://github.com/econ-ark/HARK/blob/0.10.6/HARK/cstwMPC/SetupParamsCSTW.py#L262 I'm unclear about its role and whether it should be preserved as part of the new patch. |
This represents the total number of agents, across all types. It's
probably one of the features that makes distributeParams particular to
cstwMPC rather than general. I'd say remove it for the general version.
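A sketch of what the general version might look like without the Population attribute (`SimpleType` and `distribute_params` are hypothetical names for illustration, not the PR's actual code):

```python
import copy

import numpy as np

class SimpleType:
    """Bare-bones stand-in for an AgentType instance."""
    def __init__(self, **params):
        self.__dict__.update(params)

def distribute_params(agent_type, param_name, num_copies, values):
    """Clone one type instance into num_copies instances, each with its
    own value of param_name. No market-level Population attribute is
    needed; AgentCount can be set on each copy separately afterwards."""
    copies = []
    for i in range(num_copies):
        t = copy.deepcopy(agent_type)
        setattr(t, param_name, values[i])
        copies.append(t)
    return copies

base = SimpleType(CRRA=2.0, DiscFac=0.96, AgentCount=1000)
betas = np.linspace(0.9546, 0.9852, 7)   # e.g. a discretized uniform range
types = distribute_params(base, 'DiscFac', 7, betas)
```

Leaving the per-type agent counts to the caller is what decouples this from the cstwMPC-specific Population bookkeeping.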
.
|
Ok, got it. Let's continue this discussion on the PR, which is now ready for review. |
cstwMPCagent is depended on by the Uncertainty and the Saving Rate DemARK.
It has some special properties that should be thought about--how to include it in a way that seems less specialized?
BufferStockAgent?
https://github.com/llorracc/cstwMPC/blob/master/Code/Python/cstwMPC.py#L21