Wow that’s great. One small comment: I noticed in some examples without interpolation that Howard's greedy was more memory-intensive than Howard's iterations (with or without sparse matrix). Worth keeping that in mind.
Looking forward to interpolation for models with two “a” variables. My model with entrepreneurs has n_a=[750,3] and n_z=50. The run time is ok, but with n_a(1)=750 I’m maxing out the RAM of my GPU. So with interpolation I could reduce the number of grid points a bit and maybe get even more accurate results.
I think, but am not certain, that Howard's iterations and Howard's greedy sometimes imply different choices when the agent is indifferent between two choices. (This is not a problem per se: if you are indifferent, it does not matter which you choose; it is just unusual.)
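If it helps, my guess (unverified) is that this is just argmax tie-breaking: MATLAB's max returns the first index among tied maxima, so if the two methods evaluate the candidate choices in a different order they can land on different, equally good, indices. A generic illustration, nothing toolkit-specific:

% MATLAB's max returns the first index among tied maxima, so two algorithms
% that scan candidates in a different order can report different (equally
% good) argmax indices. Generic illustration, not toolkit code:
Vtest = [1.0, 2.0, 3.0, 2.5, 3.0];       % maxima tied at positions 3 and 5
[~, ind1] = max(Vtest);                   % ind1 = 3 (first of the tied maxima)
[~, ind2] = max(Vtest(end:-1:1));         % scan in reverse order...
ind2 = numel(Vtest) + 1 - ind2;           % ...and map back: ind2 = 5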
This should work now. It does the grid interpolation layer for the first of the two “a” variables. Currently it is just everything you need for stationary eqm (value fn, agent dist, and function evaluation); I am hoping to do the transition paths later this week.
I get an error message when I run my test on a simple model of entrepreneurs with two a variables (a1 = assets, a2 = dummy for entrepreneur) and one z variable (no d variable).
Error using gpuArray/reshape
Number of elements must not change. Use [] as one of the size inputs to automatically calculate the
appropriate size for that dimension.
Error in PolicyInd2Val_Case1 (line 187)
Policy=reshape(Policy,[l_aprime,N_a*N_z]);
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Error in EvalFnOnAgentDist_ValuesOnGrid_Case1 (line 66)
PolicyValues=PolicyInd2Val_Case1(Policy,n_d,n_a,n_z,gpuArray(d_grid),gpuArray(a_grid),simoptions,1);
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Error in main (line 175)
ValuesOnGrid=EvalFnOnAgentDist_ValuesOnGrid_Case1(Policy, FnsToEvaluate, Params, [], n_d, n_a, n_z, d_grid, a_grid, z_grid, [], simoptions);
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
If you need to see my code, please let me know (I am not sure it is on my GitHub).
Thanks! I will change the inputs of the function EvalFnOnAgentDist_ValuesOnGrid_Case1 and report back.
I looked at this function in the toolkit repo and I noticed that the stationary distribution is the last input argument. May I ask why this function needs the stationary distribution? The purpose is to evaluate the variables defined in “functions to evaluate” on the grid, so obviously one needs the grids that the state and control variables live on, the policy functions, the parameters and the functions to evaluate, plus a bunch of options.
Inputs after ‘simoptions’ or ‘vfoptions’ tend to be optional inputs. In this case the StationaryDist is an optional input that is only needed for ValuesOnGrid if you have a model with endogenous entry (I forget exactly why it was needed, but the code is there if you want to figure out exactly why I needed it).
For one or two endogenous states, in infinite horizon models you can turn on both divide-and-conquer and the grid interpolation layer. That said, for solving the infinite horizon value function problem what you want is the grid interpolation layer without divide-and-conquer, as divide-and-conquer is not actually faster there (it is marginally slower, although it uses substantially less memory). For transition paths you want both on.
So in InfHorz models, for stationary general eqm you want to use the grid interpolation layer (but not divide-and-conquer), and for transition paths you want to use both the grid interpolation layer and divide-and-conquer.
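Roughly, the settings look like this; the option field names here (gridinterplayer, ngridinterp, divideandconquer, level1n) are how I understand the current toolkit options, so please check them against the toolkit examples/documentation:

% InfHorz stationary general eqm: grid interpolation layer on, divide-and-conquer off
vfoptions.gridinterplayer = 1;
vfoptions.ngridinterp = 10;        % illustrative: number of interpolated points between grid points
vfoptions.divideandconquer = 0;

% Transition paths: both on
vfoptions.gridinterplayer = 1;
vfoptions.ngridinterp = 10;
vfoptions.divideandconquer = 1;
vfoptions.level1n = 51;            % illustrative: size of the coarse first-layer grid for divide-and-conquer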
I was reading your document “Grid Interpolation Layer” and I was wondering if you could add the pseudocode for the case with decision variables, preferably in infinite horizon.
I noticed that your verbal explanation does not mention refinement, maybe because in the previous section you explained the finite horizon case, where refinement is not needed.
As far as I understand it, with refinement we precompute F(d,a',a,z) and then maximize with respect to d to obtain F^*(a',a,z) and d^*(a',a,z). But we need the optimal d for all possible points on the fine grid over a', which means that F will be a huge matrix. Even when using lowmemory, where we loop over z, we still have to store F(d,a',a), which is big.
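To check I have the refinement step right, here is a minimal sketch of how I picture it (illustrative MATLAB, not the toolkit's code; sizes and names are made up, and it shows a single z point as under lowmemory):

% Minimal sketch of the refinement step as I picture it (illustrative, not toolkit code).
% F has already been evaluated on the fine a' grid, for one z point (lowmemory loop over z).
N_d = 5; N_aprimefine = 20; N_a = 10;      % small made-up sizes
F = rand(N_d, N_aprimefine, N_a);          % stand-in for F(d, a'_fine, a)
[Fstar, dstar] = max(F, [], 1);            % maximize over d
Fstar = squeeze(Fstar);                    % F*(a'_fine, a): what the a' maximization then works with
dstar = squeeze(dstar);                    % d*(a'_fine, a): kept so the optimal d can be recovered later
% The memory issue is F itself: N_d * N_aprimefine * N_a elements per z point
% (times N_z if not looping over z), which blows up when the fine a' grid is large.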
Thanks!
I am also trying to understand the memory requirements of my model. I have the following grid sizes:
n_d = 50
n_a = [750, 3]
n_z = 50
If I use interpolation, I can decrease n_a(1) to 300 points or similar, I guess.
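Here is a rough back-of-envelope I did for the size of the full return matrix F(d,a',a,z), assuming it is formed all at once in single precision (4 bytes per element); the toolkit's actual footprint will differ depending on lowmemory/refinement/divide-and-conquer, so this is only indicative:

% Back-of-envelope for the full return matrix F(d,a',a,z), assuming single
% precision (4 bytes/element) and that it is formed all at once; only indicative,
% since the actual footprint depends on lowmemory/refinement options.
n_d = 50; n_a = [750, 3]; n_z = 50;
N_d = prod(n_d); N_a = prod(n_a); N_z = prod(n_z);
numelF = N_d * N_a * N_a * N_z;      % d x a' x a x z = 50*2250*2250*50, about 1.27e10 elements
GB_F = numelF * 4 / 1e9              % ~50.6 GB with n_a(1)=750
GB_per_z = GB_F / N_z                % ~1.0 GB per z point if looping over z
% With n_a(1)=300 (so N_a=900), the same calculation gives ~8.1 GB (~0.16 GB per z point).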
According to them you should start with standard VFI iterations (because standard VFI has global convergence, while greedy Howard's is faster but only locally convergent) until you reach a certain distance between E[V_m] and E[V_{m+1}] (their algorithm is about converging EV, rather than V, as they just need E[V] for the structural estimation). See Section 3.2 (p. 21) of John Rust’s doc. It also describes ‘Werner’s Method’, which is apparently a minor refinement of greedy Howard's that is a touch faster.
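Here is a self-contained toy of how I read that hybrid scheme (purely illustrative: a made-up little savings model with iid income, not Rust's algorithm or toolkit code): run standard VFI until E[V] has roughly converged, then switch to greedy Howard, i.e. policy iteration with the policy-evaluation step solved exactly as a linear system.

% Toy sketch of the hybrid scheme as I read it (illustrative only; a made-up
% savings model with iid income, not Rust's algorithm or toolkit code).
beta = 0.95; n_a = 201; n_z = 2;
a_grid = linspace(0, 10, n_a)';              % asset grid
z_grid = [0.5; 1.5]; pi_z = [0.5; 0.5];      % iid income states and probabilities
% Return matrix F(a',a,z) for c = z + a - a': log utility, heavy penalty if c<=0
[Ap, A, Z] = ndgrid(a_grid, a_grid, z_grid);
C = Z + A - Ap;
F = log(max(C, 1e-12)) - 1e15*(C <= 0);
V = zeros(n_a, n_z);
EV = V * pi_z;                               % E_z[V(a',z)], n_a x 1 (z is iid)
tol_switch = 1e-2; tol = 1e-9;

% Phase 1: standard VFI (globally convergent), stop once E[V] has roughly converged
distEV = Inf;
while distEV > tol_switch
    [V, ~] = max(F + beta*EV, [], 1);        % EV broadcasts over (a,z)
    V = squeeze(V);
    EVnew = V * pi_z;
    distEV = max(abs(EVnew - EV));
    EV = EVnew;
end

% Phase 2: greedy Howard (policy iteration): improve the policy, then evaluate it
% exactly by solving V = r_pol + beta*Q_pol*V as a sparse linear system
distV = Inf;
while distV > tol
    [~, polind] = max(F + beta*(V*pi_z), [], 1);   % policy improvement
    polind = squeeze(polind);                      % n_a x n_z, index of optimal a'
    r = zeros(n_a, n_z); rows = []; cols = []; vals = [];
    for zi = 1:n_z                                 % build Q_pol over the (a,z) states
        r(:, zi) = F(sub2ind([n_a, n_a, n_z], polind(:, zi), (1:n_a)', zi*ones(n_a, 1)));
        for zj = 1:n_z                             % growing arrays is fine here, n_z is tiny
            rows = [rows; (zi-1)*n_a + (1:n_a)'];
            cols = [cols; (zj-1)*n_a + polind(:, zi)];
            vals = [vals; pi_z(zj)*ones(n_a, 1)];
        end
    end
    Q = sparse(rows, cols, vals, n_a*n_z, n_a*n_z);
    Vnew = reshape((speye(n_a*n_z) - beta*Q) \ r(:), [n_a, n_z]);
    distV = max(abs(Vnew(:) - V(:)));
    V = Vnew;
end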