Problem with infinite horizon refinement

When running the code for a model with infinite horizon and refinement, I got this error:

vfoptions = 

  struct with fields:

                        verbose: 1
                      tolerance: 1.0000e-04
                        maxiter: 1000
                        howards: 80
                     maxhowards: 500
                      lowmemory: 0
               divideandconquer: 0
              separableReturnFn: 0
                     solnmethod: 'purediscretization_refinement'
                gridinterplayer: 0
                       parallel: 2
                 endogenousexit: 0
                       endotype: 0
                incrementaltype: 0
                experienceasset: 0
                    polindorval: 1
        policy_forceintegertype: 0
    piz_strictonrowsaddingtoone: 0
                     outputkron: 0
                alreadygridvals: 0
                       actualV0: 0

Unrecognized field name "returnmatrix".

Error in ValueFnIter_Case1_Refine (line 17)
    if vfoptions.returnmatrix==0     % On CPU
       ^^^^^^^^^^^^^^^^^^^^^^
Error in ValueFnIter_Case1 (line 534)
    [VKron,Policy]=ValueFnIter_Case1_Refine(V0,n_d,n_a,n_z,d_grid,a_grid,z_grid,pi_z,ReturnFn,ReturnFnParamsVec,DiscountFactorParamsVec,vfoptions);
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Error in solve_toolkit (line 154)
[V,Policy]=ValueFnIter_Case1(n_d,n_a,n_z,d_grid,a_grid,z_grid,pi_z,ReturnFn,...
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Oops, sorry. Should be fixed now.

[I am reorganizing the infinite horizon codes internally: renaming, reorganizing, and simplifying a bit before I go and complicate them by adding in the grid interpolation. I accidentally included something in a push that I hadn’t intended to push yet, and have now just gone and made it work everywhere seeing as it is already in there. As part of this I am eliminating vfoptions.returnmatrix, which is not used for anything nowadays. It was part of Version 0.0, when in the first version of the toolkit you had to pass the return function as a matrix rather than a function; nowadays you pass the function and it is used internally to create the matrix. There is no longer any real reason you would want to pass the matrix, so I am eliminating the option as it cleans everything up internally. Plus it should cut runtimes by about one millionth of a second :wink: ]


Nice! And even nicer that we will have interpolation for infinite horizon models :slight_smile:

By the way, how would you do interpolation with refinement?

In the first step, you compute the optimal d choice conditional on a', call it d(a',a,z). I guess a' here should be the coarse grid, not the finer grid, otherwise it goes out of memory (I have memory problems in my model with entrepreneurs).

In the second step, the optimal a' does not lie on the grid. When you interpolate the expected value function EV(a',z), how do you evaluate d(a',a,z)? I tried interpolating d as well, but the results are strange. Without refinement it works, but it is slower.
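
(To make the notation concrete, here is a minimal sketch of what I mean by the first step, with made-up grids and a placeholder return function; d(a',a,z) is just the argmax over d of the return matrix.)

```matlab
% Minimal sketch of the first (refinement) step: maximise out d conditional on
% (a',a,z). Grid sizes and the return function are made-up placeholders.
n_d = 11; n_a = 101; n_z = 5;                 % coarse grid sizes
d_grid = linspace(0,1,n_d)';
a_grid = linspace(0,20,n_a)';
z_grid = linspace(0.5,1.5,n_z)';

% ReturnMatrix over (d, a', a, z), filled with an arbitrary placeholder return
[dd, aap, aa, zz] = ndgrid(d_grid, a_grid, a_grid, z_grid);
ReturnMatrix = log(max(zz.*aa + dd.*(1-dd) - aap, 1e-12));

% Refinement: max over d, leaving a conditional return and a d-policy on (a',a,z)
[ReturnMatrixCond, dstar] = max(ReturnMatrix, [], 1);
ReturnMatrixCond = shiftdim(ReturnMatrixCond, 1);   % conditional return, (a',a,z)
dstar = shiftdim(dstar, 1);                         % d(a',a,z): optimal d index
```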

Without refinement it is like the finite horizon case with a d variable; the only difference is Howards. By the way, for Howards one could use the approach of Pontus Rendahl: he does a vectorized "lottery method" using sparse matrices.
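
Roughly, a sparse "lottery" Howards step would look like the sketch below (my reading of the idea, not necessarily Rendahl's exact implementation; policy, reward and transition matrix here are placeholders): each off-grid a' is split between the two bracketing grid points, the weights go into a sparse transition matrix over the stacked (a,z) state, and the policy is evaluated by repeatedly applying V = r + beta*Q*V.

```matlab
% Sketch of a sparse 'lottery' Howards step. Policy, reward and pi_z are placeholders.
n_a = 201; n_z = 5; beta = 0.96;
a_grid = linspace(0,20,n_a)';
pi_z   = ones(n_z)/n_z;                          % placeholder exogenous transitions

aprimeVal = 0.9*repmat(a_grid,1,n_z);            % placeholder policy a'(a,z), off-grid values
rewardVec = log(1+repmat(a_grid,1,n_z));         % placeholder current return of the policy
rewardVec = rewardVec(:);                        % stacked (a,z), a varies fastest

% Lottery weights: lower bracketing grid point iL and weight wL for each (a,z)
iL = discretize(aprimeVal, a_grid);
iL = min(max(iL,1), n_a-1);
wL = (a_grid(iL+1) - aprimeVal) ./ (a_grid(iL+1) - a_grid(iL));

% Sparse Q over the stacked (a,z) state: each row has 2*n_z nonzeros
rows = repmat((1:n_a*n_z)', 2*n_z, 1);
cols = zeros(n_a*n_z, 2*n_z);  vals = zeros(n_a*n_z, 2*n_z);
for zp = 1:n_z
    pz = kron(pi_z(:,zp), ones(n_a,1));          % Pr(z'=zp | z), stacked over (a,z)
    cols(:, 2*zp-1) = (zp-1)*n_a + iL(:);        vals(:, 2*zp-1) = pz .* wL(:);
    cols(:, 2*zp)   = (zp-1)*n_a + iL(:) + 1;    vals(:, 2*zp)   = pz .* (1-wL(:));
end
Q = sparse(rows, cols(:), vals(:), n_a*n_z, n_a*n_z);

% Howards: repeatedly apply the policy-evaluation operator
V = zeros(n_a*n_z,1);
for h = 1:80
    V = rewardVec + beta*(Q*V);
end
V = reshape(V, [n_a, n_z]);
```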

So far I have tried three variants of the grid interpolation layer in InfHorz, working just with the Aiyagari model (so no decision variables).

i) precompute the whole ReturnMatrix based on the ‘fine’ aprime_grid, then just use it as normal (still needing interpolation for next-period expectations).
ii) precompute the whole ReturnMatrix based on the ‘fine’ aprime_grid, but first solve using just the original a_grid for next period, and once nearly converged switch to the fine grid (so do (i) from there on). This is essentially a form of multi-grid method.
iii) precompute only the standard ReturnMatrix using a_grid for next-period values, first solve using just the original a_grid for next period, and once nearly converged switch to the fine grid, now having to evaluate ReturnFn at the relevant interpolation points each iteration.

The clear winner is (ii), which is easily the fastest. Obviously its weakness is that it is memory hungry, so (iii) still provides a worthwhile alternative, albeit an order of magnitude slower. (i) is just a crap version of (ii).
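
Roughly, (ii) boils down to something like the following stripped-down sketch (placeholder primitives and tolerances, no Howards improvement, and not the actual toolkit code): iterate on the coarse a_grid until nearly converged, then keep iterating with the fine aprime_grid, interpolating the expectations onto it.

```matlab
% Stripped-down sketch of variant (ii) for a model with no d variable.
n_a = 101; n_z = 5; beta = 0.96; r = 0.03; w = 1;
a_grid = linspace(0,20,n_a)';
z_grid = exp(linspace(-0.5,0.5,n_z))';
pi_z   = ones(n_z)/n_z;

% Fine a' grid: extra points between each pair of coarse points (both grids
% are uniform here, so the coarse points are a subset of the fine grid)
nint = 10;
n_fine = (n_a-1)*nint + 1;
aprime_grid = linspace(0,20,n_fine)';
coarseind = 1:nint:n_fine;                         % coarse points within the fine grid

% Precompute the whole ReturnMatrix on the fine a' grid: (a', a, z)
[aap, aa, zz] = ndgrid(aprime_grid, a_grid, z_grid);
c = w*zz + (1+r)*aa - aap;
ReturnMatrixFine = log(max(c, 1e-12));             % placeholder log utility
ReturnMatrixFine(c<=0) = -Inf;

V = zeros(n_a, n_z); tol = 1e-6; multigridswitch = 1e-3;
usefine = false;
for iter = 1:5000
    EV = V*pi_z';                                  % E[V(a',z')|z] on the coarse a' grid
    if usefine
        EVint = interp1(a_grid, EV, aprime_grid);  % interpolate expectations onto fine a'
        entireRHS = ReturnMatrixFine + beta*reshape(EVint, [n_fine,1,n_z]);
    else
        entireRHS = ReturnMatrixFine(coarseind,:,:) + beta*reshape(EV, [n_a,1,n_z]);
    end
    [Vnew, Policy] = max(entireRHS, [], 1);
    Vnew = shiftdim(Vnew, 1);
    dist = max(abs(Vnew(:)-V(:)));
    V = Vnew;
    if ~usefine && dist < multigridswitch
        usefine = true;                            % nearly converged: switch to fine grid
    elseif usefine && dist < tol
        break
    end
end
```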

In the above linked code, GI corresponds to (iii), preGI is (ii), and pre2GI is (i).

Kind of cool, I’ve always wanted to do some multi-grid, and this is essentially doing that. Multi-grid is so obviously going to be faster, but normally it is really tricky to figure out how to use it generically.


So my thought for models with a decision variable d is simply to implement (ii) [precompute the ReturnMatrix, then do the multi-grid approach], and with this the ‘refine’ step will be easy, as it is just applied to the precomputed ReturnMatrix over the fine grid. I might also try a version of (iii), in which case I would use refine to begin with, but once I switch to interpolation I would no longer be able to refine.
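
To give the flavour, here is a rough sketch of the Bellman step once the refined return matrix and the conditional d-policy have been precomputed on the fine a' grid (the precomputed objects below are just stand-ins, and the sizes are placeholders; the max over d would be done once up front, exactly as in the earlier refinement sketch but with aprime_grid in place of a_grid). The point is that d is then indexed directly by the fine a' point, so d never needs to be interpolated.

```matlab
% Sketch of the Bellman step with a refined return matrix on the fine a' grid.
n_a = 51; n_fine = 501; n_z = 5; beta = 0.96;
a_grid      = linspace(0,20,n_a)';
aprime_grid = linspace(0,20,n_fine)';
pi_z = ones(n_z)/n_z;

% Stand-ins for the precomputed refined objects on (a'_fine, a, z):
ReturnMatrixRefined = -abs(aprime_grid - reshape(a_grid,[1,n_a])) + zeros(n_fine,n_a,n_z);
dstar = ones(n_fine, n_a, n_z);               % optimal d index, conditional on (a'_fine, a, z)
V = zeros(n_a, n_z);

EV    = V*pi_z';                              % E[V(a',z')|z] on the coarse grid
EVint = interp1(a_grid, EV, aprime_grid);     % interpolate expectations onto fine a'
entireRHS = ReturnMatrixRefined + beta*reshape(EVint, [n_fine,1,n_z]);
[Vnew, aprimeind] = max(entireRHS, [], 1);    % choose a' on the fine grid
Vnew = shiftdim(Vnew,1);  aprimeind = shiftdim(aprimeind,1);

% The d choice is read straight off the fine grid; no interpolation of d:
[aind, zind] = ndgrid(1:n_a, 1:n_z);
dpolicy = dstar(sub2ind([n_fine,n_a,n_z], aprimeind, aind, zind));
```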

There are two things I should then look into. One is improved versions of Howards, like in Rendahl (2022), Bakota and Kredler (2022) and Phelan and Eslami (2022). [I’ve not really read these closely enough to see if I can do any/all of them.] The other is to try again to do ‘Relative VFI’ or even ‘Endogenous VFI’ (Bray, 2019), which are essentially about a better convergence criterion. [I did relative VFI in the past, but with ‘discrete’ states it was hard to pick a convergence criterion as it ‘jumped’ back and forth. I suspect that since interpolation gets much closer to ‘continuous’ states it might resolve this.]
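
For reference, and with the caveat that I have not gone back and checked how closely this matches Bray (2019), the textbook ‘relative’ idea is to track V only up to an additive constant (relative to a reference state) and stop on the span of the update rather than its sup-norm. A bare-bones sketch with a toy Bellman operator:

```matlab
% Bare-bones sketch of relative VFI with a span-based stopping rule (standard
% relative value iteration idea; the Bellman operator here is a toy placeholder).
beta = 0.96; n = 101; tol = 1e-8;
r = log(1+linspace(0,10,n)');                    % placeholder reward vector
T = @(V) r + beta*max(V);                        % toy Bellman operator: freely choose next state

V = zeros(n,1);
for iter = 1:10000
    Vnew  = T(V);
    diffV = Vnew - V;
    spanV = max(diffV) - min(diffV);             % span of the update, not the sup-norm
    V = Vnew - Vnew(1);                          % renormalise: V relative to reference state 1
    if spanV < tol
        break
    end
end
Vtrue = V + Vnew(1)/(1-beta);                    % recover the level from the reference-state shift
```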
