Pseudo-code with math

Hi Robert,

I wonder if there is a written document that outlines what the vfitoolkit is mathematically doing when solving the finite horizon problem, like the ones described in the intro to life-cycle models. I ask since the vfitoolkit uses Kronecker products etc., presumably to speed things up, and I personally find reading Matlab code more error-prone than reading mathematical descriptions.

Please let me know if there is such a document and/or if you plan to write one. I think the vfitoolkit would be more user-friendly with such a document. Just a thought.

Best,
Kaz


Good idea. I think setting up a pdf with pseudocodes for all the commands is a nice idea; it helps make the toolkit more transparent.

Here is a pdf that gets started on this; I just put in the pseudocodes for the two basic value function iteration commands. The idea is to add pseudocodes every time anyone requests one.

Let me know which command(s) you want to see and I will add those.

(Ideally I would just write up a pseudocode for every command, but realistically this would take a huge amount of time that I think is better spent adding features. So I think the ‘add pseudocodes on request’ is a suitable compromise. Give it a year or three and it will probably expand to cover most of the main features.)

This is however unlikely to help you understand things like ‘kron’. That is typically about how I reshape problems, and then take advantage of that reshaping to do gpu parallelization. In principle it is all very basic (think carefully about what shape the matrix is, and how to do some clever indexing); in practice it just takes a lot of care when coding. Feel free to ask specific questions here about “what do lines 10-15 of file X do” (preferably include a direct link to that command on github in your post so I can jump straight to what you are asking about).
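To give a flavour of what the reshaping looks like, here is a minimal sketch of the reshape-and-maximize idea. This is not the toolkit's actual code; the grid sizes, the log-utility return function, and the uniform transition matrix are all made up for illustration.

    % Build the return matrix F(a',a,z) for every combination in one
    % vectorized hit (this is the kind of thing kron/repmat are doing),
    % then a single max over the a' dimension does a whole iteration.
    N_a=201; N_z=5; beta=0.96;
    a_grid=linspace(0.01,10,N_a)';       % a and a' share this grid
    z_grid=exp(linspace(-0.2,0.2,N_z));  % productivity levels
    aprime=repmat(a_grid,[1,N_a,N_z]);               % a' varies down dim 1
    a=repmat(a_grid',[N_a,1,N_z]);                   % a varies along dim 2
    z=repmat(reshape(z_grid,[1,1,N_z]),[N_a,N_a,1]); % z varies along dim 3
    c=z.*a.^0.3+0.9*a-aprime;            % consumption at every (a',a,z)
    F=-Inf(N_a,N_a,N_z);                 % -Inf enforces feasibility
    pos=(c>0); F(pos)=log(c(pos));
    V=zeros(N_a,N_z); pi_z=ones(N_z)/N_z;  % uniform pi(z,z'), for illustration
    EV=V*pi_z';                          % EV(a',z)=sum_z' pi(z,z')V(a',z')
    [Vnew,policy]=max(F+beta*reshape(EV,[N_a,1,N_z]),[],1); % max over a'
    Vnew=squeeze(Vnew); policy=squeeze(policy);  % both N_a-by-N_z

That is one iteration; you would loop it to convergence. On the gpu all of these operations parallelize, which is where the speed comes from.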


Nice document! Would it be possible to have the pseudo-code also for the option “Refinement” in the case “infinite horizon with d variable and markov shock”? Thanks :)
Moreover, is there any reason for not doing the refinement? What about a finite-horizon model?

Note: for those who wonder what the refinement option means, see this older post Pure discretization with refinement - #2 by robertdkirkby

In finite horizon it is not obvious whether refinement would be faster or slower. I would have to code both and find out (I might give this a try).

In infinite-horizon refinement works well because you are going to evaluate the same return function at a lot of iterations, so you do the refinement once and then benefit from it many times. In finite-horizon you will anyway only use it once, so it is just a minor change in the order of operations, which is the kind of thing where you just have to code the two different ways and find out which is fastest (I did a lot of this in the early days of the toolkit, coding things three different ways and then setting the default to whichever worked fastest).

In infinite-horizon, I cannot think of any reason for not doing the refinement. Trivially it will give the identical answer. (Obviously if you had some non-basic case, like portfolio-choice, then the refinement won’t work, as the decision (d) variable is needed for computing the expectation-of-next-period term. But in the basic case where the next-period endogenous state (aprime) is chosen directly, there is no reason not to use the refinement.)
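To make the refinement idea concrete, here is a minimal sketch (again, not the toolkit's actual code; the model ingredients, grids, and transition matrix are made up for illustration):

    % Refinement: max out the d variable from the return matrix ONCE,
    % then every VFI iteration works with the smaller (a',a,z) problem.
    N_d=11; N_a=101; N_z=3; beta=0.96; tol=1e-8;
    d_grid=linspace(0,0.9,N_d)'; a_grid=linspace(0.01,5,N_a); z_grid=[0.9,1,1.1];
    pi_z=[0.8,0.15,0.05; 0.1,0.8,0.1; 0.05,0.15,0.8];
    [dd,aap,aa,zz]=ndgrid(d_grid,a_grid,a_grid,z_grid); % F is (d,a',a,z)
    c=zz.*aa.^0.3.*dd+0.9*aa-aap;        % d is hours worked, say
    F=-Inf(size(c)); pos=(c>0);
    F(pos)=log(c(pos))+log(1-dd(pos));   % -Inf enforces feasibility
    [Fref,dstar]=max(F,[],1);            % REFINEMENT STEP, done once
    Fref=shiftdim(Fref,1); dstar=shiftdim(dstar,1);  % now (a',a,z)
    V=zeros(N_a,N_z); dist=1;
    while dist>tol                       % standard VFI, reusing Fref every time
        EV=V*pi_z';                      % EV(a',z)=sum_z' pi(z,z')V(a',z')
        [Vnew,gaprime]=max(Fref+beta*reshape(EV,[N_a,1,N_z]),[],1);
        Vnew=squeeze(Vnew); dist=max(abs(Vnew(:)-V(:))); V=Vnew;
    end
    gaprime=squeeze(gaprime);            % policy index for a'
    % the d policy is recovered afterwards from dstar at the chosen a'

The point is that the max over d sits in the refinement step, which runs once, rather than inside the while loop, which runs hundreds of times.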

I added the pseudocode to the pdf.


Hi Robert,

Apologies for the late reply, and thank you for the document! Really useful, and glad others find it useful too.

Best,
Kaz


Hi Robert,
Working with the semi-exogenous functions of the toolkit, I realized that it would be nice to have the pseudo-code for this case added to the document (vfi and distrib), whenever you have some time.
Thanks!


Added semi-exogenous shocks to the pdf.


Hi Robert,

Would it be possible to add the following when you have time?

  • A portfolio-choice model with EZ preferences in consumption units, with and without a bequest motive.

  • A portfolio-choice model with EZ preferences in consumption units, incorporating multiple Markov and i.i.d. shocks. (This may not be necessary if the i.i.d. shocks can be included by simply adding vector e after vector z, which contains the Markov shocks. Is that actually the case?)

Thank you!


Do you want me to include ‘refine’?

(I will write this in mid-to-late Feb and let you know once it is done; I am on holidays next week)


Sure, adding refine would be great.

It would also be helpful to have pseudo-code for the portfolio choice model with housing. However, this will be more relevant once the new portfolio choice model with housing becomes available. I hope preparing this model doesn’t cause you too much trouble or extra work.

Thanks again!


Wishing you carefree holidays!


Added ‘standard asset+riskyasset’ (which is what Life-Cycle model 35 solves) to the pseudo-codes.
VFI_PseudoCodes

I gave only a very cursory treatment of how ‘refine’ works (in a model with riskyasset, essentially I just said to look at the infinite horizon model). If you want/need more detail, let me know.


Added Epstein-Zin preferences with finite-horizon (just the basic setup). Plan to stop here for now unless someone wants to see something else.


I was reviewing the algorithm for infinite horizon VFI with refinement (Section 2.1.1 of the current document). I think there is a typo in the description of the Howard step: the expected value function should have arguments

g_n^{a'}(a,z)

not
g_n^{a'}(a,z')

The idea is that the value function in the next period has arguments a’ and z’, but with Howard we set a'=g_n^{a'}(a,z) . In particular, the z in the policy function for a’ should be the current period z, not the future z. Then the future z is the second argument of the value function. I realized this while I was coding it.
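To be explicit, the Howard update I have in mind is (my notation; F is the return function, \beta the discount factor, \pi(z,z^\prime) the Markov transition probabilities):

V_{n+1}(a,z) = F(g_n^{a^\prime}(a,z), a, z) + \beta \sum_{z^\prime} \pi(z, z^\prime) V_n(g_n^{a^\prime}(a,z), z^\prime)

so g_n^{a^\prime}(a,z) takes the current z, while z^\prime only enters as the second argument of V_n.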
Please let me know if I am missing something :)

UPDATE
The same mistake (if I am correct) is also present in Section 2.1.


Fixed. Thanks for spotting!!


I had a look at the explanation of “experience asset” and I think it could be improved for clarity by providing more details. I offer a few suggestions below.

  • Calculate E[V_{j+1}(a',z')]
  • Evaluate aprime(d, a) on the grids for d and a
  • Interpolate aprime(d, a) onto the grid for a; get aprimei(d, a) and aProbs(d, a), the index of the lower grid point and the probability attached to the lower grid point. Here we have to add an explanation of how to do this (see the sketch after the equations below).

Switch E[V_{j+1}(a′, z′)] to E[V_{j+1}(d, a, z′)] using aprimei(d, a) and aProbs(d, a) to replace a′ with d and a.

The sentence above is not very clear, in my view. I would explain it like this:

E[V_{j+1}(a^\prime, z^\prime)] = \sum_{z^\prime} V_{j+1}(a^\prime, z^\prime) \pi(z, z^\prime)

But

\begin{align} V_{j+1}(a^\prime, z^\prime) &= aProbs(d,a) \, V_{j+1}(aprimei(d,a), z^\prime) \\ &+ (1-aProbs(d,a)) \, V_{j+1}(aprimei(d,a)+1, z^\prime) \end{align}
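And the lower grid point index and probability could be computed along these lines (just my sketch of the mechanics, not the toolkit's actual code; it assumes aprimeVal lies within the grid range):

    % Linear interpolation weights for an off-grid value of a'
    a_grid=linspace(0,10,101)';            % sorted grid for a
    aprimeVal=3.27;                        % aprime(d,a), evaluated off-grid
    ii=find(a_grid<=aprimeVal,1,'last');   % index of the lower grid point
    ii=min(ii,length(a_grid)-1);           % so that ii+1 stays inside the grid
    aProb=(a_grid(ii+1)-aprimeVal)/(a_grid(ii+1)-a_grid(ii)); % weight on lower point
    % V at aprimeVal is then aProb*V(ii,zprime)+(1-aProb)*V(ii+1,zprime)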

In general I would try to follow the code a bit more closely


Rather than modify the pseudocode itself, I added some further text below it about how the linear interpolation is performed (as it is shared across experienceasset, riskyasset, and more).

I feel like it is possible to understand the concept without the details of how to actually do the interpolation. It also means the pseudocode makes clear that any kind of interpolation could be used here, and that the fact the toolkit does linear interpolation is just a (moderately informed) choice.


Thanks, the new version looks very clear!

I was reviewing Section 2.2.1 “Finite Horizon: semi-exogenous state” and there is a typo/error when you write

and keep the argmax g_j(a, z).

This should be g_j(a,semiz,z).

Moreover, in the second part, the policy function for a' conditional on d is defined as “…and keep the argmax g_j(a, z|d)”, but it should be g_j(a, semiz, z|d).

I think that in this section you want to keep semiz and z as two separate state variables, and therefore they should always both appear in the state space of the value and policy functions.
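For reference, the Bellman equation I have in mind for this case is (my notation; assuming the transition probabilities of semiz depend on the decision d, which is what makes it semi-exogenous):

V_j(a, semiz, z) = \max_{d, a^\prime} \left\{ F(d, a^\prime, a, semiz, z) + \beta \sum_{semiz^\prime} \sum_{z^\prime} \pi_{semiz}(semiz, semiz^\prime | d) \, \pi(z, z^\prime) \, V_{j+1}(a^\prime, semiz^\prime, z^\prime) \right\}

with the argmax giving g_j(a, semiz, z), so both semiz and z appear everywhere as state variables.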


You are correct. Fixed. Thanks!
