New example based on toolkit: Aiyagari with taxes

For those who want to become familiar with Bewley models and taxation, I’ve created an example based on the Aiyagari model.
There is already an example based on basic Aiyagari on the toolkit github: see VFItoolkit-matlab-examples/HeterogeneousAgentModels at master · vfitoolkit/VFItoolkit-matlab-examples · GitHub
My example builds on this and introduces progressive income taxation, following Heathcote, Storesletten and Violante. The idea is that the government taxes income from labor and capital in a progressive way, according to the following tax function:

T(y) = y - lambda*y^(1-tau)

where y denotes taxable income. The parameter lambda governs the level of taxation and tau>=0 governs the progressivity. For example, if tau=0, then taxes are proportional to income, i.e. T(y)=(1-lambda)*y. If instead 0<tau<1, taxation is progressive: the average tax rate, T(y)/y, increases with income y.
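
To fix ideas, here is a small illustrative MATLAB snippet (not part of the toolkit; the variable names are my own) that evaluates the tax function and the implied average and marginal tax rates:

% Illustrative sketch of the HSV tax function T(y) = y - lambda*y^(1-tau).
% Not toolkit code; lambda and tau take the values used in the example below.
lambda = 0.90;   % level of taxation
tau    = 0.10;   % progressivity (tau=0 gives proportional taxes)

y = linspace(0.1,3,50)';                    % grid of taxable incomes
T = y - lambda*y.^(1-tau);                  % taxes paid
avg_rate  = T./y;                           % average tax rate = 1 - lambda*y^(-tau)
marg_rate = 1 - lambda*(1-tau)*y.^(-tau);   % marginal tax rate dT/dy

% With 0<tau<1 the average rate rises with y and the marginal rate exceeds
% the average rate, i.e. the tax schedule is progressive.
plot(y,[avg_rate,marg_rate]); legend('average rate','marginal rate')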

This example also shows how to compute several distributional statistics using the EvalFnOnAgentDist_AllStats_Case1 command. I report the results of this simple exercise below.
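
For readers who haven't used that command, the rough pattern is to collect the statistics of interest in a FnsToEvaluate structure and then request all statistics in one call. A minimal sketch (the exact argument list may differ slightly across toolkit versions, and the object names are schematic):

% Sketch of computing distributional statistics with EvalFnOnAgentDist_AllStats_Case1.
% The exact argument list may differ slightly across toolkit versions.
FnsToEvaluate.K = @(aprime,a,z) a;    % individual wealth
FnsToEvaluate.L = @(aprime,a,z) z;    % individual effective labor (schematic)
AllStats = EvalFnOnAgentDist_AllStats_Case1(StatDist,Policy,FnsToEvaluate,Params,[],n_d,n_a,n_z,d_grid,a_grid,z_grid);
AllStats.K.Mean   % aggregate capital
AllStats.K.Gini   % wealth Gini coefficient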

@robertdkirkby: While playing around with this example, I’ve realized that sometimes the general equilibrium condition is not enforced very strictly. In this example,

Current GeneralEqmEqns: 
	CapitalMarket:  -0.0040 
       0.00393277020340754

which may or may not be enough (please note that I am using 1000 grid points for assets and 7 points for the Markov shock z). Perhaps it might be useful to add an option that gets passed to fminsearch here:

elseif heteroagentoptions.fminalgo==1
    %options = optimset('Tolfun',1e-6,'TolX',1e-6);
    [p_eqm_vec,GeneralEqmConditions]=fminsearch(GeneralEqmConditionsFnOpt,p0);

Currently, the toolkit uses the default tolerance values for fminsearch, which are TolFun=1e-4 and TolX=1e-4.
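
For reference, here is a minimal sketch of what such an option could do internally, mirroring the fragment above: build an options structure with optimset and pass it as the third input to fminsearch (plain MATLAB, purely an illustration):

% Illustration: tighter tolerances via optimset, passed as the third input
% to fminsearch (GeneralEqmConditionsFnOpt and p0 are as in the fragment above).
options = optimset('TolFun',1e-6,'TolX',1e-6);
[p_eqm_vec,GeneralEqmConditions] = fminsearch(GeneralEqmConditionsFnOpt,p0,options);
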
Lastly (though this is probably a bug in my own code), I check whether market clearing for goods is satisfied, and it is not:

% Aggregate moments
agg.KK = AllStats.K.Mean;      % aggregate capital
agg.LL = AllStats.L.Mean;      % aggregate effective labor
agg.CC = AllStats.cons.Mean;   % aggregate consumption
agg.YY = (agg.KK^par.alpha)*(agg.LL^(1-par.alpha));   % output from the production function
agg.II = par.delta*agg.KK;     % steady-state investment
agg.TT = AllStats.taxes.Mean;  % total tax revenue (G in the check below)
resid_walras = abs(agg.CC+agg.II+agg.TT-agg.YY);      % goods market clearing residual

The variable resid_walras is far from zero. I find this odd, since in equilibrium it must be that
C + delta*K + G = Y, where G denotes total taxes.

==================================================================
PARAMETERS
beta  : 0.960000 
crra  : 2.000000 
alpha : 0.360000 
delta : 0.080000 
rho_z : 0.600000 
sig_z : 0.160000 
lambda : 0.900000 
tau    : 0.100000 
------------------------------------
AGGREGATE QUANTITIES AND PRICES
K/Y        : 1.088563 
Int. rate  : 0.052099 
Wage       : 1.124848 
C/Y        : 0.995379 
I/Y        : 0.087085 
res GE     : 0.003933 
res_walras : 0.231407 
------------------------------------
CV
CV(Earnings): 0.201220 
CV(Income)  : 0.203852 
CV(Consum)  : 0.096666 
CV(Wealth)  : 0.821278 
------------------------------------
GINI
Gini(Earnings): 0.108654 
Gini(Income)  : 0.113023 
Gini(Consum)  : 0.051742 
Gini(Wealth)  : 0.442639 
------------------------------------

I’ve found another problem with this example (which may reflect a bug in the toolkit code?).

If you set the no-tax case at the beginning, i.e.

% No taxes
par.lam_hsv = 1;
par.tau_hsv = 0;

the code for the stationary distribution returns the following error message:

Error using fprintf
Function is not defined for sparse inputs.

Error in StationaryDist_Case1_IterationTan_raw (line 55)
            fprintf('StationaryDist_Case1: after %i iterations the current distance ratio is %8.6f (currdist/tolerance, convergence when reaches 1) \n', counter, currdist/simoptions.tolerance)

Error in StationaryDist_Case1 (line 215)
        StationaryDistKron=StationaryDist_Case1_IterationTan_raw(StationaryDistKron,PolicyKron,N_d,N_a,N_z,pi_z,simoptions);

Error in main (line 82)
StatDist=StationaryDist_Case1(Policy,n_d,n_a,n_z,pi_z, simoptions);

But this must be a bug, because if I disable the Tan improvement with simoptions.tanimprovement=0; then the distribution computation runs fine, although the code is much slower, as expected. The GE was more or less converging; I stopped the code at this point:

Current GE prices: 
	K_to_L:   5.5388 
Current aggregate variables: 
	K:   5.6421 
	L:   1.0202 
Current GeneralEqmEqns: 
	CapitalMarket:   0.0083 

Fixed the bug (it was because currdist was sparse when using the Tan improvement, and fprintf cannot take a sparse matrix as input).

And you can now set the tolerance using:

heteroagentoptions.toleranceGEprices=10^(-4); % Accuracy of general eqm prices
heteroagentoptions.toleranceGEcondns=10^(-4); % Accuracy of general eqm eqns

[These show the default values.] By default, the stationary general equilibrium is computed using Matlab's fminsearch(), and these control what the fminsearch() options call TolX and TolFun, respectively.
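
So, for example, tightening both tolerances for this example would look something like the following (a usage sketch; heteroagentoptions is then passed to the toolkit's stationary equilibrium solver as usual):

% Usage sketch: tighten both GE tolerances; pass heteroagentoptions to the
% stationary equilibrium solver as usual.
heteroagentoptions.fminalgo = 1;                  % fminsearch (the default)
heteroagentoptions.toleranceGEprices = 10^(-6);   % maps to fminsearch TolX
heteroagentoptions.toleranceGEcondns = 10^(-6);   % maps to fminsearch TolFun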


The error with Walrasian market clearing of the goods market is actually just that, after computing the stationary eqm, you forgot to recompute V and StatDist.
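
In other words, once the GE solver has returned the equilibrium price, the V, Policy and StatDist sitting in memory still correspond to whatever guess fminsearch evaluated last. Something along these lines fixes it (a sketch: it assumes the GE price is called K_to_L as in the output above and that it is returned in a structure p_eqm; the exact call signatures follow the basic Aiyagari example and may differ slightly across toolkit versions):

% Sketch: re-solve the household problem and the distribution at the
% equilibrium price before computing AllStats and the aggregates.
% Exact call signatures may differ slightly across toolkit versions.
Params.K_to_L = p_eqm.K_to_L;   % put the eqm price into the parameter structure
[V,Policy] = ValueFnIter_Case1(n_d,n_a,n_z,d_grid,a_grid,z_grid,pi_z,ReturnFn,Params,DiscountFactorParamNames,[],vfoptions);
StatDist = StationaryDist_Case1(Policy,n_d,n_a,n_z,pi_z,simoptions);
% Now recompute AllStats and resid_walras as in the code above; the Walras
% residual should then be (numerically) close to zero.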

PS. For rho=0.6, Rouwenhorst is going to be a mediocre way to discretize the AR(1). Farmer-Toda is in the VFI Toolkit and will do a better job.
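
If I remember correctly, the relevant toolkit command is discretizeAR1_FarmerToda (please check the toolkit documentation for the exact name and signature); a sketch of swapping it in for Rouwenhorst:

% Sketch: discretize the AR(1) productivity process with Farmer-Toda instead
% of Rouwenhorst. Inputs are (mean, autocorrelation, std dev, number of points);
% I am assuming sig_z is the innovation standard deviation, and the exact
% name/signature should be checked against the toolkit documentation.
n_z = 7;
[z_grid,pi_z] = discretizeAR1_FarmerToda(0,par.rho_z,par.sig_z,n_z);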

PPS. The last lines plot the pdf. When discretizing the agent distribution, the pdf tends to be a bad thing to plot (as the y-axis is effectively just the inverse of the number of grid points you use). My preference is to just plot the cdf (which is largely unaffected by the number of grid points); an alternative is to use a bandwidth-histogram plot.
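
For example, with the objects from this example, something like the following plots the wealth CDF instead (a sketch; it assumes the stationary distribution is stored as an n_a-by-n_z array called StatDist and that a_grid is the asset grid):

% Sketch: plot the CDF of wealth rather than the pdf.
wealth_pdf = gather(sum(StatDist,2));   % marginal distribution over assets (gather in case it is on the GPU)
wealth_cdf = cumsum(wealth_pdf);        % CDF; largely insensitive to the number of grid points
plot(a_grid,wealth_cdf)
xlabel('assets'); ylabel('CDF')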


Perfect, thank you so much!

I’ve updated my example on GitHub. I have tightened the fminsearch tolerances to 1e-6, but the GE error did not improve. It does improve if you add more grid points, of course.

I was also checking the following:

FnsToEvaluate.K = @(aprime,a,z) a;            % assets held today
FnsToEvaluate.Kprime = @(aprime,a,z) aprime;  % assets chosen for tomorrow
AllStats.K.Mean-AllStats.Kprime.Mean

ans =

       0.00150702215177656

If I am not mistaken, the difference between these two numbers should be very close to zero. The first is the integral of a with respect to the distribution, the second is the integral of a'(a,z) with respect to the same distribution. In a stationary equilibrium the two are equal, at least theoretically.

Please let me know if I am being too picky :slight_smile:

This error is roughly of relative magnitude 10^(-5) (K was 80ish). Personally I wouldn't worry about it (if you do, as you say, you can always increase the number of grid points; you are most welcome to have higher standards than me :smiley:). This is a far smaller error than the approximation error introduced by using 7 points to approximate an AR(1), or even 15 points. Since using 7 or 15 points for an AR(1) is fairly standard in published papers, this is also a far smaller error than in most published papers.

To think about 'acceptable' numerical error sizes, I always ask: "if I could eliminate all numerical error, so it is exactly zero, would this change the results in any noticeable way?" Going from K=80 to K=80.0015 makes, I feel, no noticeable difference to anything I would take away from the model results [I'd be unlikely to report more than one or two decimal places for it in the paper anyway, so it would be invisible in my actual published results] (obviously you have to think about whether the numerical errors compound in some way, rather than the simple addition I did here).
