Some comments/thoughts on EGM:
Solving a value fn iteration problem, like
V(a,z)=\max_{a'} F(a',a,z) + \beta E[V(a',z')|z]
involves three steps: (i) evaluate expectations (calculate E[V(a',z')|z] from V(a',z')), (ii) solve the maximization (for some points in the (a,z) space), (iii) fit the value fn (construct V(a,z) on the LHS from what you got solving the maximization).
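To make the three steps concrete, here is a minimal sketch of discretized VFI for a toy consumption-savings model (the grids, parameter values, and budget constraint c = z + a - a' are all illustrative assumptions, not the VFI Toolkit's implementation):

```python
import numpy as np

# toy model: log utility, budget c = z + a - a' (hypothetical setup)
beta = 0.95
a_grid = np.linspace(0.1, 10.0, 50)          # endogenous state a
z_grid = np.array([0.9, 1.1])                # exogenous state z
pi = np.array([[0.8, 0.2], [0.2, 0.8]])      # transition matrix for z

def u(c):
    return np.log(c)

# precompute F(a',a,z) = u(c), with -inf where c <= 0 (infeasible)
F = np.full((len(a_grid), len(a_grid), len(z_grid)), -np.inf)
for ia, a in enumerate(a_grid):
    for iz, z in enumerate(z_grid):
        c = z + a - a_grid                   # consumption for each choice a'
        feas = c > 0
        F[feas, ia, iz] = u(c[feas])

V = np.zeros((len(a_grid), len(z_grid)))
for _ in range(500):
    EV = V @ pi.T                            # step (i): E[V(a',z')|z]
    objective = F + beta * EV[:, None, :]    # shape (a', a, z)
    V_new = objective.max(axis=0)            # step (ii): max over a'
    # step (iii) is trivial here: V_new already lives on the (a,z) grid
    if np.max(np.abs(V_new - V)) < 1e-8:
        V = V_new
        break
    V = V_new
```

The `max(axis=0)` line is step (ii), and it is where essentially all the runtime goes; with discretization, step (iii) costs nothing because the solution is already on the grid.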
By far the most computationally costly of the three is (ii), solving the maximization. The Endogenous Grid Method (EGM), introduced by Chris Carroll in a 2006 article, was the brilliant insight that for some models we can replace (ii) with a function inverse. A function inverse is computationally orders of magnitude cheaper than a maximization, and so this massively slashes runtimes. Carroll's original observation was in a model with a finite time horizon, one endogenous state, and one exogenous state. It has since been generalized to endogenous labor supply, infinite horizon, and much more. Iskhakov, Jorgensen, Rust and Schjerning (2017) show a nice clean way to include discrete decision variables. There are a few papers showing how to extend EGM to two endogenous states, but here it gets hard.
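The "replace maximization with a function inverse" idea can be sketched for the Carroll-style consumption-savings case: put an exogenous grid on end-of-period assets a', invert the Euler equation through the inverse of marginal utility, and recover an endogenous grid on cash-on-hand. The code below is one backward step under CRRA utility; all parameter values and the terminal condition c_T(m) = m are illustrative assumptions:

```python
import numpy as np

# illustrative parameters (hypothetical, not from any specific paper's calibration)
beta, R, gamma = 0.96, 1.03, 2.0
ap_grid = np.linspace(0.0, 10.0, 100)        # exogenous grid on a' (end-of-period assets)
y_grid = np.array([0.7, 1.3])                # income states
pi = np.array([[0.9, 0.1], [0.1, 0.9]])      # income transition matrix

def up(c):      return c ** (-gamma)         # marginal utility u'(c)
def up_inv(mu): return mu ** (-1.0 / gamma)  # its inverse: this IS the "function inverse"

def egm_step(c_next, m_next):
    """One backward EGM step: given next period's consumption policy
    c_next(m) on grid m_next (one row per income state), return this
    period's policy on an ENDOGENOUS grid of cash-on-hand."""
    c_pol, m_pol = [], []
    for iy in range(len(y_grid)):
        # E[u'(c')] at next-period cash-on-hand m' = R*a' + y'
        Eup = np.zeros_like(ap_grid)
        for jy, yp in enumerate(y_grid):
            cp = np.interp(R * ap_grid + yp, m_next[jy], c_next[jy])
            Eup += pi[iy, jy] * up(cp)
        c = up_inv(beta * R * Eup)           # invert the Euler equation: no maximization
        m = c + ap_grid                      # endogenous grid: the m consistent with (c, a')
        c_pol.append(c); m_pol.append(m)
    return np.array(c_pol), np.array(m_pol)

# terminal period: consume everything, c_T(m) = m
m_T = np.tile(np.linspace(1e-6, 20.0, 100), (2, 1))
c_T = m_T.copy()
c_prev, m_prev = egm_step(c_T, m_T)
```

Note there is no optimizer anywhere: each grid point costs one interpolation and one closed-form inversion, which is why EGM is so much faster than grid search or a numerical maximizer at the same accuracy.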
In short, whenever EGM can be used it will be the best algorithm (best=best runtime-accuracy frontier). The main weaknesses of EGM are two-fold: it cannot be applied to all models, and it is difficult (but can be done) with more than one endogenous state.
Why not use EGM in VFI Toolkit? Essentially just that I don't have time to code both the discretization methods currently used and EGM for the models where it applies. I like discretization because one code can solve 'everything'. EGM cannot solve everything the toolkit can solve. But anywhere EGM can solve a model that the toolkit solves, the code would be faster if it were written using EGM.
PS. Step (iii), fitting the value function, is not obvious in VFI Toolkit because it is trivial when using discretization. But it becomes an explicit step if you do something like using cubic splines or Chebyshev polynomials to parametrize/approximate the value fn and/or policy fn (or even the expectations term). Deep learning is another, quite different, way to do this step.
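As a sketch of what step (iii) looks like when it is not trivial, here is a Chebyshev fit of a value function using NumPy's built-in polynomial classes (the "solved" values are a stand-in toy function, since no actual maximization output is available here):

```python
import numpy as np

a_grid = np.linspace(0.1, 10.0, 50)
V_on_grid = np.log(1 + a_grid)               # stand-in for the output of the maximization step

# step (iii): fit V(a) with a degree-10 Chebyshev polynomial,
# V(a) ~ sum_k theta_k T_k(a), by least squares on the solved points
V_hat = np.polynomial.Chebyshev.fit(a_grid, V_on_grid, deg=10)

# the payoff: V_hat can now be evaluated at any a, not just grid points
print(V_hat(5.37))
```

With discretization this fitting step disappears because V is simply stored at the grid points; with a parametric form like the above, it is a genuine computation (here a least-squares projection onto the Chebyshev basis).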