I came across some slides by Felix Wellschmied that give a nice idea of how VFI Toolkit compares to writing your own code.
The following is his slide reporting code runtimes. The exercise is roughly solving the household problem from the Aiyagari model, but with insanely large grids on assets:
If you write your own code naively then VFI Toolkit will be faster. But if you write ‘smart’ code that uses algorithms which take advantage of the properties of your model, your code will be faster. This very nicely sums up my own thoughts on the matter.
Obviously there are other aspects: the time it takes to write the code (your own time is more valuable than the computer’s time), the complexity of the code (more complex, more room for making typos/errors), the size of the model (large models probably will just fail to solve without smart code), learning to write the code, the ease with which other people can read and understand your code, etc.
Note that what you want to do with the model also matters. If you are going to solve it a few hundred times, 3 secs will be easily fast enough and writing smart code to get to 0.04 secs is not worthwhile. If you are going to solve it a few million times, 3 secs is way too slow and you had better write smart code that gets it down to 0.04 secs.
Things like using EGM (endogenous grid method) where possible, and doing divide-and-conquer to exploit monotonicity (Gordon & Qiu, 2018). There is a very long list of different algorithms for different problems. I’m not aware of anything that lists heaps of these, and in practice you are kind of stuck reading the appendices of various papers that solve models similar to yours to see what they do.
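To make the divide-and-conquer idea concrete, here is a minimal sketch in Python (this is not the VFI Toolkit implementation; all parameter values and function names are made up for illustration). When the savings policy a'(a) is monotone in a, you can solve the maximization at the midpoint of the asset grid first and then use its optimal index to bound the search on each half, which cuts the number of objective evaluations from roughly n^2 to n log n.

```python
import numpy as np

def dc_argmax(objective, n_a):
    """Divide-and-conquer argmax over a' for each a on a grid, exploiting
    monotonicity of the savings policy a'(a) in a. `objective(ia, lo, hi)`
    must return the objective values for state index ia over the slice of
    a'-indices [lo, hi). All names here are illustrative."""
    policy = np.empty(n_a, dtype=int)
    value = np.empty(n_a)

    def solve(i_lo, i_hi, j_lo, j_hi):
        # Solve the midpoint state first, then recurse on each half using
        # the midpoint's optimal index as a bound on the search range.
        if i_lo > i_hi:
            return
        i_mid = (i_lo + i_hi) // 2
        vals = objective(i_mid, j_lo, j_hi + 1)
        j_star = j_lo + int(np.argmax(vals))
        policy[i_mid] = j_star
        value[i_mid] = vals[j_star - j_lo]
        solve(i_lo, i_mid - 1, j_lo, j_star)   # lower states: a' index <= j_star
        solve(i_mid + 1, i_hi, j_star, j_hi)   # upper states: a' index >= j_star

    solve(0, n_a - 1, 0, n_a - 1)
    return policy, value

# Toy usage: one Bellman maximization step for a made-up problem.
a_grid = np.linspace(0.0, 20.0, 501)
R, y, beta = 1.03, 1.0, 0.96
EV = np.log(1.0 + a_grid)  # stand-in for the expected continuation value

def objective(ia, lo, hi):
    c = R * a_grid[ia] + y - a_grid[lo:hi]
    u = np.where(c > 0, np.log(np.maximum(c, 1e-12)), -1e10)
    return u + beta * EV[lo:hi]

policy, value = dc_argmax(objective, len(a_grid))
```

The payoff is entirely in the search-range bounds passed down the recursion; the objective itself is unchanged, which is why this trick composes easily with whatever else your Bellman step does.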
I don’t think Felix’s code underlying these slides is available (the slides are from his teaching notes, specifically ‘numerical methods’ from his graduate Macro course; the code doesn’t appear there, and I don’t think he has a GitHub repo).
PS. VFI Toolkit can now do a form of divide-and-conquer to exploit monotonicity, but it couldn’t back when Felix wrote these slides (I don’t know exactly when that was, but divide-and-conquer is new this year in VFI Toolkit).
Very interesting slides. As far as I understand, “smart code” makes use of two key properties of the income fluctuation problem (the partial equilibrium version of Aiyagari that Felix is using as a test case):
1. Monotonicity of the policy function
2. Concavity of the right-hand side of the Bellman equation, F(a',a,z) + beta*EV(a',z)
In my experience property (1) applies very often (almost always?), so it’s good to use it, but property (2) does not, at least beyond textbook models. If you have a model with discrete choices (e.g. occupational choice, retirement choice, etc.), the objective function is not concave. This is why discretized value function iteration, perhaps with linear interpolation, is a great method.
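As a toy illustration of why brute-force discretized VFI copes with this (parameters, the work/retire choice, and the absence of shocks are all made up here; no interpolation is used): for each discrete option you simply take the max over the whole a' grid, so nothing goes wrong when the upper envelope is not concave.

```python
import numpy as np

# Purely illustrative parameters and a made-up work/retire choice.
beta, R, w, b, chi = 0.96, 1.03, 1.0, 0.4, 0.3
a_grid = np.linspace(0.0, 20.0, 401)
n_a = len(a_grid)

def u(c):
    # Log utility with a large penalty for infeasible (non-positive) consumption
    return np.where(c > 0, np.log(np.maximum(c, 1e-12)), -1e10)

# Consumption implied by each (a, a') pair under each discrete option
c_work = R * a_grid[:, None] + w - a_grid[None, :]
c_ret  = R * a_grid[:, None] + b - a_grid[None, :]

V = np.zeros(n_a)
for it in range(1000):
    # For each discrete option, brute-force max over the whole a' grid:
    # with a discrete choice the objective need not be concave in a',
    # so first-order conditions or golden-section search can latch onto
    # the wrong local optimum, while a full grid search cannot.
    V_work = (u(c_work) + beta * V[None, :]).max(axis=1) - chi
    V_ret  = (u(c_ret)  + beta * V[None, :]).max(axis=1)
    V_new = np.maximum(V_work, V_ret)  # upper envelope over the discrete choice
    diff = np.max(np.abs(V_new - V))
    V = V_new
    if diff < 1e-8:
        break
```

The cost is that every maximization touches the full a' grid, which is exactly the speed gap the ‘smart’ methods above are closing when the model is nice enough to allow them.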
There are also extensions of the EGM to deal with non-convexities (see e.g. this or this), but they are not straightforward to code and they are valid only if certain restrictions are met.
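For reference, the basic EGM step in the standard concave case looks something like the sketch below (illustrative parameters, no shocks, and the same grid reused for a and a'). The non-convex extensions mentioned above typically add an upper-envelope step on top of this to discard suboptimal segments of the endogenous grid, which is where the extra coding effort comes from.

```python
import numpy as np

# Illustrative parameters; standard concave consumption-savings problem, no shocks.
beta, R, y, gamma = 0.96, 1.03, 1.0, 2.0
a_grid = np.linspace(0.0, 20.0, 401)  # used both as the a' grid and the a grid

def u_prime(c):
    return c ** (-gamma)          # CRRA marginal utility

def u_prime_inv(m):
    return m ** (-1.0 / gamma)    # its inverse

def egm_step(c_next):
    """One EGM step: given next period's consumption policy on a_grid,
    return this period's consumption policy on the same grid."""
    # Euler equation: u'(c_today) = beta * R * u'(c_next(a'))
    c_today = u_prime_inv(beta * R * u_prime(c_next))
    # Endogenous grid of current assets implied by the budget constraint:
    # c_today + a' = R*a + y  =>  a = (c_today + a' - y) / R
    a_endog = (c_today + a_grid - y) / R
    # Interpolate back onto the fixed grid; below the lowest endogenous
    # point the borrowing constraint a' = 0 binds, so c = R*a + y there.
    c_on_grid = np.interp(a_grid, a_endog, c_today)
    constrained = a_grid < a_endog[0]
    c_on_grid[constrained] = R * a_grid[constrained] + y
    return c_on_grid

# Time iteration to convergence, starting from "consume everything"
c = R * a_grid + y
for _ in range(1000):
    c_new = egm_step(c)
    if np.max(np.abs(c_new - c)) < 1e-8:
        break
    c = c_new
```

Note that no maximization is performed anywhere: the Euler equation is inverted analytically, which is exactly why EGM is so fast when the problem is concave and why non-concavities break the plain version.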