I would like to review some parameter choices for default settings in VFI, ValueFnIter_Case1.
I noticed that the default is vfoptions.maxiter=Inf; I'm not sure this is a good choice. With Howards improvement, if VFI does not converge after, say, 100 iterations, then there is usually a bug or something wrong with the model. I realized this while doing a GE search: at some point the GE got stuck, and I found that it had gotten stuck inside the VFI. When I set the maximum number of iterations to 100, it hit that upper bound; luckily the distance was about 0.0000001162, small but still larger than the tolerance.
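For reference, the manual cap is a one-line override of vfoptions before calling the solver (the ValueFnIter_Case1 argument list below is a sketch from memory and may differ across toolkit versions):

vfoptions.maxiter=100; % finite cap instead of the default Inf
[V,Policy]=ValueFnIter_Case1(n_d,n_a,n_z,d_grid,a_grid,z_grid,pi_z,ReturnFn,Params,DiscountFactorParamNames,ReturnFnParamNames,vfoptions);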
So I have two proposals:
1. Set the default for maxiter to something like 100, 500, or 1000, but certainly not Inf!
2. Make VFI display a warning (not an error) if the iteration limit is reached. This is really important for finding potential bugs. For example:
if currdist>Tolerance
    % reached vfoptions.maxiter without converging: warn, but do not error
    fprintf('Total no. of VF iter: %d \n',tempcounter-1)
    fprintf('currdist: %.10f \n',currdist)
    warning('VFI has not converged successfully!')
end
Thanks!
Update
I checked the code for the distribution and it is fine: there is a sensible upper limit, simoptions.maxit=10^6, and a warning is displayed if convergence fails.
As I mentioned elsewhere, the convergence failure is likely due to an error in the ReturnMatrix, specifically the value -Inf (which Howards-greedy doesn't handle). I have a PR pending that checks for that condition and raises an error. It looks like this:
if any(~isfinite(Ftemp))
    error("Howards-greedy doesn't work for non-finite return values; rerun with `vfoptions.howardsgreedy=0;`")
end
Robert also committed changes that make vfoptions.howardsgreedy=0 the default, which might also protect you from this error. If you observe a failure to converge with standard Howards improvement, it is likely a bug that should be reported so it can be fixed or diagnosed in better ways.
I also agree that setting vfoptions.maxiter=Inf is a bit aggressive. Hmm…
This is an important point, but I’m not sure I fully understand.
Let's consider the case without a d variable: the return matrix is the 3-dimensional array F(a',a,z). In most cases it will have lots of -Inf entries, corresponding to combinations of (a',a,z) that violate feasibility. The Howards algorithm iterates on the Bellman equation "fixing" the optimal policy, that is,

V_{k+1}(a,z) = F(g(a,z),a,z) + \beta \sum_{z'} \pi(z'|z) V_k(g(a,z),z'),

where g is the policy function for the future endogenous state a'. Clearly, if F(g(a,z),a,z)=-\infty for some (a,z), this is a problem: it means that there are no feasible choices for that particular (a,z) in the state space. In the context of the textbook consumption-savings problem, it means that the optimal value of savings a' gives negative consumption.
Therefore, if the model is well-specified, cases where F(g(a,z),a,z)=-\infty should never happen. Please let me know if I am missing something.
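To make the mechanics concrete, here is a minimal loop-based sketch of the Howards policy-evaluation step for the no-d case (variable names are illustrative, not the toolkit's actual vectorized internals):

% V is N_a-by-N_z, policy(a,z) holds the index of the optimal a',
% F is the N_a-by-N_a-by-N_z return matrix, pi_z the N_z-by-N_z transition matrix
for howardsiter=1:Howards
    EV=V*pi_z'; % EV(a',z)=E[V(a',z')|z]
    for z_c=1:N_z
        for a_c=1:N_a
            aprime=policy(a_c,z_c); % g(a,z): the fixed optimal choice
            % if F(aprime,a_c,z_c)=-Inf, this update pins V(a_c,z_c) at -Inf
            V(a_c,z_c)=F(aprime,a_c,z_c)+beta*EV(aprime,z_c);
        end
    end
end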
True. But VFI Toolkit does not require the model to be "well-specified" in this sense. The reason is that it is easier to just let the user solve the model without having to go through all the work of making it well-specified.
Some examples:
(i) If you want to solve a firm model where zero assets give -Inf, it is easier if you can still set up a_grid to include 0, without having to restrict yourself to only tiny positive asset values.
(ii) A model with an endogenous borrowing constraint that is determined in equilibrium. You can set up a_grid=-10:1:100, and then in general equilibrium it turns out that the borrowing limit is -5, so asset values from -10 to -6 all give -Inf (it depends a bit, but you get the idea).
Of course, in both cases it is possible to set up the model so that it is well-specified and this never happens (in the first case it is trivial). But often it is just easier to solve the model as-is than to put in the work to make it well-specified: the runtime losses from not doing so can be much smaller than the amount of work it would take to convert it to a well-specified model.
Hence the toolkit is designed to handle the model even if it is not well-specified and V=-Inf at some points in the state space.
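As a concrete illustration, here is a sketch of a return function in the toolkit's usual ReturnFn style (the textbook consumption-savings case mentioned above; the function name and parameters are illustrative) where infeasible points simply return -Inf and the toolkit is left to deal with them:

function F=ConsSavings_ReturnFn(aprime,a,z,r,w,sigma)
F=-Inf; % default: infeasible combination of (a',a,z)
c=w*z+(1+r)*a-aprime; % budget constraint
if c>0
    F=(c^(1-sigma)-1)/(1-sigma); % CRRA utility
end
end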
Added a warning when VFI ends because it reached the maximum number of iterations, rather than because of convergence.
Added a warning to Howards-greedy when any(~isfinite(Ftemp)), where Ftemp is F(g(a,z),a,z). It only checks the final iteration (not every iteration) so as not to hurt run times (hopefully checking just the final one is sufficient). [Same as Michael suggested, just at the end rather than in each iteration.]
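Roughly, the check sits after the final Howards-greedy pass rather than inside the loop, something like this (warning text illustrative):

% after the final Howards-greedy pass, not inside the loop, to protect run times
if any(~isfinite(Ftemp(:)))
    warning('Howards-greedy hit non-finite values of F(g(a,z),a,z); consider vfoptions.howardsgreedy=0')
end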
@aledinola speaking of defaults, the default is currently
if ~isfield(vfoptions,'howardssparse')
if N_a>1200 && N_z>100 && vfoptions.gridinterplayer==0
vfoptions.howardssparse=1; % Do Howards iteration using a sparse matrix (rather than indexing). Sparse is only faster for bigger models.
else
vfoptions.howardssparse=0;
end
end
I noticed you did some more howardssparse implementations for the grid interpolation layer with postGI. Could/should this default be changed now to eliminate the && vfoptions.gridinterplayer==0 condition?
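That is, the same block with the gridinterplayer condition dropped:

if ~isfield(vfoptions,'howardssparse')
    if N_a>1200 && N_z>100 % no longer requires vfoptions.gridinterplayer==0
        vfoptions.howardssparse=1; % Sparse is only faster for bigger models.
    else
        vfoptions.howardssparse=0;
    end
end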
This controls the maximum number of iterations during which Howards is used; after this, as long as you have not yet reached vfoptions.maxiter, you continue doing standard VFI iterations, just without Howards (iteration or greedy, depending on other settings).
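In pseudocode, the interaction of the two caps looks something like the following (the option name vfoptions.maxhowards and the helper functions are my guesses/placeholders, not the toolkit's actual code):

tempcounter=1; currdist=Inf;
while currdist>vfoptions.tolerance && tempcounter<=vfoptions.maxiter
    Vold=V;
    [V,Policy]=BellmanUpdate(V);         % one standard VFI step (placeholder)
    if tempcounter<=vfoptions.maxhowards % Howards only in the early iterations
        V=HowardsStep(V,Policy);         % Howards improvement (placeholder)
    end
    currdist=max(abs(V(:)-Vold(:)));
    tempcounter=tempcounter+1;
end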