This is really nice, I think many people in household finance are working with these models and would find it very useful.
On Cocco (2005): I think that without the original code used by the author it is challenging to replicate. When we did Chen (2010) some time ago (a similar model, but without risky assets and in general equilibrium), it was important to look at the Fortran code. For example, the exact spacing used in the Tauchen method mattered for the results.
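To illustrate why that spacing choice matters: in Tauchen's method the grid for a discretized AR(1) typically spans some number m of unconditional standard deviations, and the resulting grid and transition matrix change with m. A minimal sketch in Python (not the original Fortran; the parameter names and the m=3 default are my assumptions, not taken from either paper):

```python
import numpy as np
from math import erf, sqrt

def norm_cdf(x):
    # Standard normal CDF via the error function (no SciPy needed)
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def tauchen(n, rho, sigma, m=3.0):
    """Discretize z' = rho*z + eps, eps ~ N(0, sigma^2), onto n points.
    m sets how many unconditional std devs the grid spans -- the
    'spacing' choice that can matter for replication results."""
    sigma_z = sigma / np.sqrt(1.0 - rho**2)   # unconditional std dev of z
    grid = np.linspace(-m * sigma_z, m * sigma_z, n)
    step = grid[1] - grid[0]
    P = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            if j == 0:
                P[i, j] = norm_cdf((grid[0] - rho * grid[i] + step / 2) / sigma)
            elif j == n - 1:
                P[i, j] = 1.0 - norm_cdf((grid[-1] - rho * grid[i] - step / 2) / sigma)
            else:
                P[i, j] = (norm_cdf((grid[j] - rho * grid[i] + step / 2) / sigma)
                           - norm_cdf((grid[j] - rho * grid[i] - step / 2) / sigma))
    return grid, P

# Same AR(1) process, two spacing choices -> different grids and transitions
g3, P3 = tauchen(7, 0.9, 0.1, m=3.0)
g2, P2 = tauchen(7, 0.9, 0.1, m=2.0)
```

Without knowing the m (or the exact grid construction) used in the original code, two otherwise identical replications can produce visibly different simulated moments.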
I have just updated the Cocco (2005) codes. Mostly it was about converting annual to 5-year. There was also a minor error in keeping track of the 'stock market participation' (fixed with an edit to the ReturnFn). Housing is still a bit odd, but if I impose 'extreme' behaviour it always gives the right answer (e.g., I changed the utility function to depend only on consumption, and the solution then gave housing=0; without direct utility from housing, housing is strictly dominated as an asset by the risky asset).
I think this is as close to the original Cocco (2005) as it is possible to get without knowing what the housing grid should be. That grid is just so important to the functioning of the model that without it I don't think we are likely to get any closer to a true replication.
Seems weird if this is indeed an average from simulations (especially given the transaction costs)?
The way VFI Toolkit works is to always create the 'StationaryDist' by iterating on the agent distribution (taking advantage of the Tan improvement). This is technically not 'simulation' as it does not use 'random number draws'; the 'panel data' commands in VFI Toolkit do simulate. The reason is that with modern hardware and software (most importantly GPUs and the Tan improvement) the iteration method has a better 'accuracy-runtime frontier' than the simulation method. That said, conceptually iteration gives exactly the same thing as the average over an infinite number of simulations. [Model statistics are then calculated directly from the agent distribution.]
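The iteration-versus-simulation point can be seen on a toy Markov chain (this is just an illustrative Python sketch, not VFI Toolkit code, and it does not show the Tan improvement's splitting of the policy step from the exogenous shock step): pushing the distribution forward with the transition matrix is deterministic and converges to the same stationary distribution that a long simulation only approximates.

```python
import numpy as np

# Toy 3-state transition matrix, standing in for the policy-implied
# transition over the (much larger) state space of a real model
P = np.array([[0.9, 0.1, 0.0],
              [0.2, 0.6, 0.2],
              [0.0, 0.3, 0.7]])

# Iteration on the agent distribution: mu' = mu @ P, repeated until
# convergence. No random number draws involved.
mu = np.array([1.0, 0.0, 0.0])
for _ in range(10_000):
    mu_next = mu @ P
    if np.max(np.abs(mu_next - mu)) < 1e-12:
        mu = mu_next
        break
    mu = mu_next

# Simulation: random draws; matches the iteration answer only up to
# Monte Carlo error, which vanishes as the sample grows.
rng = np.random.default_rng(0)
state, counts = 0, np.zeros(3)
for _ in range(200_000):
    state = rng.choice(3, p=P[state])
    counts[state] += 1
mu_sim = counts / counts.sum()

# Model statistics then come directly from the distribution,
# e.g. a mean over a grid x_grid would be np.dot(mu, x_grid).
```

Running both, mu and mu_sim agree to roughly Monte Carlo accuracy, which is the 'infinite number of simulations' equivalence in miniature.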