
econ-ark / distributionofwealthmpc


This project is forked from sbenthall/distributionofwealthmpc


The Distribution of Wealth and the Marginal Propensity to Consume

Languages: TeX 4.83%, PostScript 93.75%, Shell 0.03%, Python 0.55%, Roff 0.01%, Jupyter Notebook 0.84%

distributionofwealthmpc's People

Contributors

alanlujan91, camriddell, dedwar65, llorracc, mnwhite, mriduls, sbenthall


distributionofwealthmpc's Issues

Capitalization

I've made a mess of it...

I was trying to figure out why, when I changed the capitalization of directories locally, the change didn't show up on GitHub. Apparently git ignores case-only renames by default on case-insensitive filesystems.

I tried to fix this, but now there are two directories, Code and code.

I'm not sure of the best way to revert this without destroying the changes I've made over the last few months.

@MridulS @camriddell @sbenthall

An automated test that makes sure the results match the paper

Related to #1213

Create an automated test that verifies the results match (within reason) those reported in the paper.

This test could be run automatically on the master branch (but it's computationally expensive, so GitHub Actions timeouts are a possibility), or manually when the repository is updated.
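A minimal sketch of what such a test could look like; run_estimation and the target value below are placeholders, not the repository's actual API or the paper's actual number:

import pytest

PAPER_TARGET_KY = 10.26  # placeholder: substitute the K/Y ratio reported in the paper

def run_estimation():
    """Stand-in for the repository's estimation entry point."""
    return {"KY_ratio": 10.26}

@pytest.mark.slow  # computationally expensive: run on demand, not on every push
def test_ky_ratio_matches_paper():
    results = run_estimation()
    # "within reason": accept a small relative deviation from the published value
    assert results["KY_ratio"] == pytest.approx(PAPER_TARGET_KY, rel=1e-2)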

Run sensitivity

What is this option meant to do?

run_sensitivity : [bool]
    Whether to run each of eight sensitivity analyses; currently inoperative.  
    Order: rho, xi_sigma, psi_sigma, mu, urate, mortality, g, R

Automated test of substantive results tied to HARK development branch

Continued from here: sbenthall#3 (comment)

@sbenthall: Because of the design of the original cstwMPC code, which involves a lot of non-standard code execution (using exec()) and file output (saving txt files with custom data), I designed this script to work with the files output to the Results/ directory. This means that it doesn't operate like a normal Python unit test of some part of the code. Rather, the test must be run by hand to verify results.
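Concretely, the by-hand check amounts to something like the following; the file name, file format, and reference value are assumptions here, since the actual Results/ output is custom text:

from pathlib import Path

def read_scalar_result(filename):
    """Parse a single numeric value from a text file in the Results/ directory."""
    return float(Path("Results", filename).read_text().strip())

# Compare a regenerated statistic against a stored reference, within a tolerance.
REFERENCE_KY = 10.26  # placeholder reference value
assert abs(read_scalar_result("KYratio.txt") - REFERENCE_KY) < 0.1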

@llorracc: I'd really like to get this into some form that can be run automatically when we update the development branch of the HARK toolkit. Whether that requires something in the form of a unit test, I don't know. My goal is to choose a small number of REMARKs that are "unpinned" because they give a thorough workout to the substantive, quantitative results of the toolkit; any code merge that changes those substantive results needs to be closely scrutinized to understand why.

There are a number of reasons why this is difficult.

  • The extremely non-standard code structure and way of outputting results make it awkward to use conventional tooling. See #2
  • Because getting the results from this repository is a long/slow operation, it will be prone to timeout errors, creating false-negative test results. It is also potentially 'expensive' in other ways.
  • In general, triggering a test in one repository when there is a change in a different repository is not how things are typically done. It would be much more standard to have an automated test that triggers when there is a PR on this repository. For the above reasons, this hasn't been implemented yet.

So there are at least two components to this:

  • Develop the automated test for this repo
  • Determine the conditions under which that test is automatically triggered

While I understand the motive ('thorough workout to the substantive, quantitative results of the toolkit'), my honest view is that it would ultimately be wiser to:

(a) develop the REMARK standard so that it requires repositories to be less convoluted, enabling a more standardized 'substantive results testing' framework that works across REMARKs rather than forcing each to have a sui generis solution, and

(b) refactor the code in this repository to be less convoluted in how it depends on HARK,

and then write the automated test for this repository in a way that isn't ad hoc. Better still would be to:

(c) make more substantive tests in HARK for any functionality that this repository depends on.

Trying to test HARK by verifying cstwMPC results is like trying to take out a fly with a bazooka -- it seems guaranteed to solve the problem, but it is in fact grossly inefficient, and a more elegant solution can be found if it's given a little more careful thought.

Do Tractable

do_tractable : bool
    Whether to use an extremely simple alternate specification 
    of households' optimization problem.

TypeError: multiple values for 'reap_vars' with do_mid.py

I get the following when running do_mid.py from the command line, on master.

$ ipython do_mid.py
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
File ~/projects/econ-ark/DistributionOfWealthMPC/do_mid.py:20
     18 exec(open('MainSpecPoint.py').read())
     19 os.chdir(path_to_models)
---> 20 exec(open('cstwMPC_MAIN.py').read())
     22 # Run beta-dist model
     23 os.chdir(path_to_options)

File <string>:698

File <string>:147, in __init__(self, **kwds)

File ~/projects/econ-ark/DistributionOfWealthMPC/dowmpc/lib/python3.8/site-packages/HARK/ConsumptionSaving/ConsAggShockModel.py:1912, in CobbDouglasEconomy.__init__(self, agents, tolerance, act_T, **kwds)
   1901 params["sow_vars"] = [
   1902     "MaggNow",
   1903     "AaggNow",
   (...)
   1908     "KtoLnow",
   1909 ]
   1910 params.update(kwds)
-> 1912 Market.__init__(
   1913     self,
   1914     agents=agents,
   1915     reap_vars=["aLvl", "pLvl"],
   1916     track_vars=["MaggNow", "AaggNow"],
   1917     dyn_vars=["AFunc"],
   1918     tolerance=tolerance,
   1919     act_T=act_T,
   1920     **params
   1921 )
   1922 self.update()
   1924 # Use previously hardcoded values for AFunc updating if not passed
   1925 # as part of initialization dictionary.  This is to prevent a last
   1926 # minute update to HARK before a release from having a breaking change.

TypeError: __init__() got multiple values for keyword argument 'reap_vars'

We rarely run do_mid.py. There are currently no automated tests for its results, and running it is not part of the REMARK release procedure, so I'm not sure when this error was introduced.

But it looks like it has something to do with the HARK 0.13.0 update and the ConsAggShockModel.
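The error pattern itself is easy to reproduce in isolation: CobbDouglasEconomy.__init__ does params.update(kwds) and then passes reap_vars explicitly alongside **params, so if the caller's kwds already contain reap_vars, Python raises exactly this TypeError. A simplified stand-in (not the actual HARK code):

def market_init(agents=None, reap_vars=None, **kwds):
    """Simplified stand-in for Market.__init__."""

params = {"reap_vars": ["aLvl", "pLvl"]}  # e.g. a leftover key in the options dict

try:
    market_init(agents=[], reap_vars=["aLvl", "pLvl"], **params)
except TypeError as e:
    print(e)  # got multiple values for keyword argument 'reap_vars'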

do_mid.py not working

Beginning an estimation with the specification name PYUnifbetaPointAggNetWorth...
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
File ~\GitHub\alanlujan91\DistributionOfWealthMPC\do_mid.py:18
     15 point_options = basic_options.copy()
     16 point_options.update(all_options["MainSpecPoint"])
---> 18 estimate(point_options, parameters)
     20 # Run beta-dist model
     22 dist_options = basic_options.copy()

File ~\GitHub\alanlujan91\DistributionOfWealthMPC\code\estimation.py:520, in estimate(options, params)
    516 else:
    517     # Run the param-point estimation only
    519     t_start = time()
--> 520     center_estimate = root_scalar(
    521         get_ky_ratio_difference,
    522         args=(
    523             0.0,
    524             economy,
    525             options["param_name"],
    526             param_count,
    527             options["dist_type"],
    528         ),
    529         method="brenth",
    530         bracket=param_range,
    531         xtol=1e-6,
    532     ).root
    533     spread_estimate = 0.0
    534     t_end = time()

File ~\mambaforge\envs\cstwmpc\lib\site-packages\scipy\optimize\_root_scalar.py:279, in root_scalar(f, args, method, bracket, fprime, fprime2, x0, x1, xtol, rtol, maxiter, options)
    277 a, b = bracket[:2]
    278 try:
--> 279     r, sol = methodc(f, a, b, args=args, **kwargs)
    280 except ValueError as e:
    281     # gh-17622 fixed some bugs in low-level solvers by raising an error
    282     # (rather than returning incorrect results) when the callable
    283     # returns a NaN. It did so by wrapping the callable rather than
    284     # modifying compiled code, so the iteration count is not available.
    285     if hasattr(e, "_x"):

File ~\mambaforge\envs\cstwmpc\lib\site-packages\scipy\optimize\_zeros_py.py:913, in brenth(f, a, b, args, xtol, rtol, maxiter, full_output, disp)
    911     raise ValueError(f"rtol too small ({rtol:g} < {_rtol:g})")
    912 f = _wrap_nan_raise(f)
--> 913 r = _zeros._brenth(f, a, b, xtol, rtol, maxiter, args, full_output, disp)
    914 return results_c(full_output, r)

File ~\mambaforge\envs\cstwmpc\lib\site-packages\scipy\optimize\_zeros_py.py:90, in _wrap_nan_raise.<locals>.f_raise(x, *args)
     89 def f_raise(x, *args):
---> 90     fx = f(x, *args)
     91     f_raise._function_calls += 1
     92     if np.isnan(fx):

File ~\GitHub\alanlujan91\DistributionOfWealthMPC\code\estimation.py:82, in get_ky_ratio_difference(center, spread, economy, param_name, param_count, dist_type)
     80 # Distribute parameters
     81 economy.distribute_params(param_name, param_count, center, spread, dist_type)
---> 82 economy.solve()
     83 diff = economy.calc_KY_ratio_difference()
     84 print(f"get_KY_ratio_difference tried center = {center} and got {diff}")

File ~\GitHub\alanlujan91\DistributionOfWealthMPC\code\agents.py:228, in CstwMPCMarket.solve(self)
    226     for agent in self.agents:
    227         agent.get_economy_data(self)
--> 228     Market.solve(self)
    229 else:
    230     self.solve_agents()

File ~\mambaforge\envs\cstwmpc\lib\site-packages\HARK\core.py:1433, in Market.solve(self)
   1431 while go:  # Loop until the dynamic process converges or we hit the loop cap
   1432     self.solve_agents()  # Solve each AgentType's micro problem
-> 1433     self.make_history()  # "Run" the model while tracking aggregate variables
   1434     new_dynamics = self.update_dynamics()  # Find a new aggregate dynamic rule
   1436     # Check to see if the dynamic rule has converged (if this is not the first loop)

File ~\mambaforge\envs\cstwmpc\lib\site-packages\HARK\core.py:1604, in Market.make_history(self)
   1602 self.cultivate()  # Agents take action
   1603 self.reap()  # Collect individual data from agents
-> 1604 self.mill()  # Process individual data into aggregate data
   1605 self.store()

File ~\mambaforge\envs\cstwmpc\lib\site-packages\HARK\core.py:1513, in Market.mill(self)
   1510 mill_dict.update(self.const_vars)
   1512 # Run the mill_rule and store its output in self
-> 1513 product = self.mill_rule(**mill_dict)
   1515 for i, sow_var in enumerate(self.sow_state):
   1516     self.sow_state[sow_var] = product[i]

File ~\GitHub\alanlujan91\DistributionOfWealthMPC\code\agents.py:276, in CstwMPCMarket.mill_rule(self, aLvl, pLvl, MPCnow, TranShk, EmpNow, WeightFac, t_age)
    272 def mill_rule(self, aLvl, pLvl, MPCnow, TranShk, EmpNow, WeightFac, t_age):
    273     """
    274     The mill_rule for this class simply calls the method calc_stats.
    275     """
--> 276     self.calc_stats(
    277         aLvl,
    278         pLvl,
    279         MPCnow,
    280         TranShk,
    281         EmpNow,
    282         WeightFac,
    283         t_age,
    284         self.parameters["LorenzBool"],
    285         self.parameters["ManyStatsBool"],
    286     )
    288     if self.AggShockBool:
    289         return self.calc_R_and_W(aLvl, pLvl)

File ~\GitHub\alanlujan91\DistributionOfWealthMPC\code\agents.py:339, in CstwMPCMarket.calc_stats(self, aLvlNow, pLvlNow, MPCnow, TranShkNow, EmpNow, WeightFac, t_age, LorenzBool, ManyStatsBool)
    337 aLvl = np.hstack(aLvlNow)
    338 pLvl = np.hstack(pLvlNow)
--> 339 CohortWeight = np.hstack(WeightFac)
    340 age = np.hstack(t_age)
    341 TranShk = np.hstack(TranShkNow)

File ~\mambaforge\envs\cstwmpc\lib\site-packages\numpy\core\shape_base.py:359, in hstack(tup, dtype, casting)
    357     return _nx.concatenate(arrs, 0, dtype=dtype, casting=casting)
    358 else:
--> 359     return _nx.concatenate(arrs, 1, dtype=dtype, casting=casting)

ValueError: need at least one array to concatenate
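The final NumPy error, at least, is easy to reproduce on its own: np.hstack raises it whenever it receives an empty sequence, which suggests WeightFac is arriving at calc_stats as an empty list (for example, because it is never reaped from the agents):

import numpy as np

# np.hstack needs at least one array; an empty sequence reproduces the error.
try:
    np.hstack([])
except ValueError as e:
    print(e)  # need at least one array to concatenate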

ModuleNotFoundError: No module named 'IPython'

On master in a fresh virtual environment, first installing requirements:

$ pip install -r requirements.txt

Then trying to run do_min.py:

$ python do_min.py 
Traceback (most recent call last):
  File "do_min.py", line 21, in <module>
    exec(open('cstwMPC_MAIN.py').read())
  File "<string>", line 53, in <module>
ModuleNotFoundError: No module named 'IPython'
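The direct fix is to add IPython to requirements.txt (or run the script under ipython). Alternatively, a hedged workaround sketch, if the dependency should stay optional (the actual use of IPython at that line may differ from this guess), is to guard the import:

# Hypothetical sketch: degrade gracefully when IPython isn't installed.
try:
    from IPython import get_ipython
except ImportError:
    def get_ipython():
        return None  # signals "not running under IPython"

running_interactively = get_ipython() is not None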

Find beta vs KY

find_beta_vs_KY : bool
    Whether to compute the K/Y ratio for a wide range of beta; should have
    do_param_dist = False and param_name = 'DiscFac'.
    Currently inoperative.

Refactor execution code

The current code uses a somewhat convoluted method of setting global variables before running the main() method of the cstwMPC_MAIN.py script.

It would be worth looking at this to try to encapsulate it a little more. Among other things, that would make error messages from within the main code more legible.
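A minimal sketch of that encapsulation, with illustrative names (the real script's inputs will differ): pass the configuration in explicitly rather than seeding globals and calling exec().

def main(options, parameters):
    """Run the estimation from explicit inputs instead of module-level globals."""
    do_param_dist = options.get("do_param_dist", False)
    print(f"Estimating with do_param_dist={do_param_dist}")

if __name__ == "__main__":
    # Caller passes options directly; no exec() or global state required.
    main({"do_param_dist": False}, {})

Tracebacks would then point into an ordinary function rather than into exec()'d <string> frames, which is exactly the legibility gain described above.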

Parameter distribution across the agent population

A major feature of this REMARK is the realistic population model.
This involves a lot of hand-coding of elementary data structures in a way that is perhaps hard to understand and adapt.

https://github.com/econ-ark/DistributionOfWealthMPC/blob/master/Code/SetupParamsCSTW.py

Refactoring this into more generalizable components would be a win.

An example of an alternative way to define populations is the AgentPopulation class that Alan developed for SHARKFin:

https://github.com/sbenthall/SHARKFin/blob/master/sharkfin/population.py

My recommendation is that we use refactoring of the DoW population code as a vehicle for improving HARK's support for defining populations. Some version of AgentPopulation could be merged into HARK, which would improve downstream projects.

It's not immediately obvious how the SHARKFin AgentPopulation could be integrated into DoW, however, as the former is mainly about defining a population whose parameters range over a given mathematical distribution. DoW may alternatively drive the development of improvements to the HARK API for lifecycle models.
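As a rough illustration of the kind of generalizable component being discussed, a population builder might spread a single parameter over a range and return one parameter dictionary per agent type; the names and numbers below are illustrative (echoing cstwMPC's beta-dist idea), not the SHARKFin API:

import numpy as np

def make_population(base_params, param_name, center, spread, n_types):
    """Return one parameter dict per agent type, with param_name spread
    uniformly over [center - spread, center + spread]."""
    values = np.linspace(center - spread, center + spread, n_types)
    return [{**base_params, param_name: v} for v in values]

# Seven agent types whose discount factors straddle a common center.
population = make_population({"CRRA": 1.0}, "DiscFac", center=0.98, spread=0.01, n_types=7)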
