Hi Rob,
I was looking at the replication code of Huggett (1996) on GitHub and I have a couple of questions:
- The income tax rate tau is set to match government spending to output, i.e. G/Y. Since from the data G/Y=19.5%, it follows from the government budget constraint that tau=0.195/(1-delta*K/Y). But in the code (see Huggett1996_ReturnFn) you write
% Huggett (1996) calibrates tau to the following (see pg 478 for explanation)
tau=0.195*(1-delta*KdivY);
Is this a typo? Given that delta = 0.06 and K/Y = 3 in equilibrium (so delta*K/Y = 0.18), we get
0.195/(1-0.18) = 0.2378
vs
0.195*(1-0.18) = 0.16
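A quick numeric check of the two formulas (in Python purely for illustration; the toolkit itself is MATLAB, and delta and K/Y are the calibration values mentioned above):

```python
delta = 0.06    # depreciation rate
KdivY = 3.0     # equilibrium capital-output ratio
GdivY = 0.195   # government spending over output, from the data

# tau implied by the government budget constraint (division)
tau_budget = GdivY / (1 - delta * KdivY)
# tau as written in Huggett1996_ReturnFn (multiplication)
tau_code = GdivY * (1 - delta * KdivY)

print(round(tau_budget, 4))  # 0.2378
print(round(tau_code, 4))    # 0.1599
```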
The second question is related to the market clearing conditions and the GE variables. To find the GE you iterate on the interest rate, the pension benefit and accidental bequests. However, it seems to me that you don’t need to include the pension benefit since it is already implied by the interest rate (for the same reason you don’t need to include, say, the wage).
From Huggett’s paper, page 477, the social security balance condition yields
b = theta*w*L/(mass of old people)
Everything on the RHS is known: theta is a parameter, w is implied by K/L (which is implied by r), L is exogenous because labor supply is fixed in this model and the mass of old people is also exogenous. Indeed, aggregate labor L can be computed from condition 3(iii) on page 476 which requires integrating the exogenous shocks with respect to the marginal distribution of the shock.
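A minimal sketch of the point above, in Python for illustration: given r, the chain r → K/L → w → b is pinned down. The numbers below (alpha, theta, L, mass_old, r) are hypothetical placeholders, not Huggett's calibration; the production function is assumed Cobb-Douglas.

```python
alpha = 0.36      # capital share (hypothetical value)
delta = 0.06      # depreciation rate
theta = 0.10      # social security tax rate (hypothetical)
L = 1.0           # exogenous aggregate labor (hypothetical)
mass_old = 0.2    # exogenous mass of retirees (hypothetical)

r = 0.04  # the interest rate being iterated on

# K/L implied by r via the firm's FOC: r = alpha*(K/L)^(alpha-1) - delta
KdivL = (alpha / (r + delta)) ** (1 / (1 - alpha))
# wage implied by K/L: w = (1-alpha)*(K/L)^alpha
w = (1 - alpha) * KdivL ** alpha
# pension benefit from the balanced social security budget
b = theta * w * L / mass_old
```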
Thanks in advance for any feedback on this!
Thanks a lot for your quick answer!
I have another question though, related to the general equilibrium condition for accidental bequests. In equilibrium it must be that the lump-sum transfer T is equal to accidental bequests, see eq. 7 on page 477 of Huggett’s paper.
In the replication code, total accidental bequests are defined as the integral of
(1-sj)*aprime_val*(1+r*(1-tau))
However, the corresponding equation in Huggett also has the term (1+n) or (1-n), not clear from the paper… This term seems to be missing in the equation in the code.
P.S. I’m planning to use the toolkit and Huggett’s paper in a course I’m going to teach this year, which is why I’m reviewing the paper and the replication.
Lastly, I wanted to point out that the example based on Huggett (Huggett1996_Example.m) and the replication do not work on a machine without an NVIDIA graphics card. I think in that case the user has to set the Parallel flag equal to 0 or 1 (the default is 2, if I remember correctly). It might be useful to detect automatically whether the user’s PC has an NVIDIA card and set the Parallel option accordingly.
This is just a thought of course: I’m trying to anticipate the possible issues that might arise within a classroom 
You are correct that the accidental bequests should be divided by (1+n). I have now fixed this. I do not change the FnsToEvaluate (the formula for the accidental bequests). Instead I change the general equilibrium condition to be: Beq/(1+n)-T (T is the lump-sum transfer, n is the population growth rate)
While there would be no difference between the two in the Huggett (1996) model, since it looks at the stationary general equilibrium, the difference would matter if you want to calculate a general equilibrium transition path. The Beq relates to last period, while the n relates to the present period (VFI Toolkit can easily handle this by calling it Beq_tminus1, which is interpreted as Beq from the previous period). It seems nicer to use an approach that will work more generally, even when it is not necessary in this model, given that the two approaches require the same amount of work to set up.
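The corrected general equilibrium condition can be sketched as follows (Python for illustration only; the toolkit is MATLAB, and the numbers are placeholders):

```python
n = 0.012   # population growth rate (placeholder value)
Beq = 0.05  # aggregate accidental bequests, as computed by FnsToEvaluate

def ge_condition(Beq_tminus1, T, n):
    """Residual of the bequest-clearing condition Beq/(1+n) - T.
    In stationary equilibrium Beq_tminus1 equals current Beq, but on a
    transition path it is last period's bequests, which is why this form
    generalizes."""
    return Beq_tminus1 / (1 + n) - T

T_star = Beq / (1 + n)               # equilibrium lump-sum transfer
print(ge_condition(Beq, T_star, n))  # 0.0 in stationary equilibrium
```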
I am also in the process of updating the replication to version 2 of VFI Toolkit. Am just running it now and once it works cleanly I will upload the update.
If you don’t have a GPU I would probably give up on the replication code (for reasons of runtime), but the example code should still work. I might try testing this on my laptop (which has no GPU) and see if all the automatic detection of (the absence of) a GPU is working for the Huggett (1996) example codes (the value fn stuff is nice and clean nowadays; I just need to see if all the smaller parts are doing the automatic detection nicely. I never personally use the cpu-only code so it doesn’t get tested as much as the rest).
Quick comment on the options relating to parallelization:
parallel=2 corresponds to gpu parallelization
parallel=1 corresponds to parallel cpus
parallel=0 uses just a single cpu
(you are unlikely to ever want to use parallel=0, it is there in case you are doing something more advanced and want to control at which level parallelization occurs)
For simulating the agent distribution there is also
parallel=3, use sparse matrices on cpu
parallel=4, use sparse matrices on gpu
Sparse matrices are slower, but use less memory than ‘full’ matrices. You would only ever want these if you run out of memory when trying to use parallel=1 or 2 during the simulation of the agent distribution.
By default, VFI Toolkit checks whether your computer has a gpu: if yes it sets parallel=2, if no it sets parallel=1. Note that you can separately control vfoptions.parallel and simoptions.parallel.
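The default-selection logic just described can be sketched as follows (illustrative Python, not the toolkit's actual MATLAB code; the function name is hypothetical):

```python
def default_parallel(has_gpu: bool) -> int:
    """Pick the default parallel setting: 2 = gpu, 1 = parallel cpus.
    Sparse-matrix variants (3 and 4) are only chosen manually, when memory
    runs out during simulation of the agent distribution."""
    return 2 if has_gpu else 1
```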
[More about simulating the agent distribution: for finite horizon models, the default is to iterate on the agent distribution, which is simoptions.iterate=1. You can set simoptions.iterate=0 to instead simulate the agent distribution, in the sense of creating lots of time series simulations and then converting these into the probability density function of agents.]
Comment: Actually, this seems like it might be useful info, so I am going to give it its own thread.
Huggett1996_Example can now run without a gpu. If you want to do this I very strongly recommend you use a smaller n_a (number of grid points on assets) as otherwise it will take borderline forever.
Just to say that I have updated the codes and pdf for the replication of Huggett (1996). This corrects the issues above (in this thread) and updates to use version 2 of VFI Toolkit.
Thanks Robert! Now the results are closer to my own replication… but it seems that the graph of percentiles of wealth by age is a bit off from Huggett. I basically replicated Huggett following your code, with the only changes being that I allow for a denser grid for a’ and I use linear interpolation. My results are in line with yours; in particular, average wealth by age reaches a peak of 8, whereas in Huggett it is around 12. It’s a small difference of course, so I’m not worried.
Nice one!
I think these kinds of differences are mostly down to the fact that we have an extra 25+ years of computing power, so we can very easily crank up the accuracy in a way that would have been borderline impossible back then. We simply get more accurate results now.