I am trying to solve a model similar to Hopenhayn and Rogerson with the toolkit. All good, but how can I compute the serial correlation, or persistence, of employment? Obviously this is defined as corr[n(t),n(t-1)], but how do I compute this with the toolkit?
Hi @javier_fernan,
I don't know if there is a toolkit command to do this (Robert may answer your specific question), but let me suggest a more general approach.
Suppose you want to calculate the covariance between two variables, k and n, which are capital and labor demand by entrepreneurs. The state variables are (a,z). The functions for k and n are:
FnsToEvaluate.k = @(aprime,a,z,par) f_k(aprime,a,z,par);
FnsToEvaluate.n = @(aprime,a,z,par) f_n(aprime,a,z,par);
First, you compute the values of k and n on the state space grid:
ValuesOnGrid=EvalFnOnAgentDist_ValuesOnGrid_Case1(Policy,FnsToEvaluate, Params, [], n_d, n_a, n_z, d_grid, a_grid, z_grid, [], simoptions);
I like to define new variables for k and n on the grid:
polk = ValuesOnGrid.k;
poln = ValuesOnGrid.n;
Recall that somewhere the toolkit has also computed the stationary distribution, called StatDist.
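(If you have not stored it yet, it comes from the stationary distribution command; this hypothetical call is from memory, so check the exact signature in the toolkit docs:)
StatDist = StationaryDist_Case1(Policy,n_d,n_a,n_z,pi_z,simoptions); % hypothetical call, check the docs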
Now you have everything to compute the covariance. What you want in particular is the weighted covariance, and you can ask ChatGPT to generate a simple Matlab function for it.
If you type "Matlab question. Please generate a function to compute the covariance between two vectors x and y with weights w", then ChatGPT gives you:
function cov_xy = weighted_covariance(x, y, w)
%WEIGHTED_COVARIANCE Computes the weighted covariance between two vectors
%   cov_xy = weighted_covariance(x, y, w) returns the weighted covariance
%   of vectors x and y using weights w.
%
%   Inputs:
%   x - vector of observations
%   y - vector of observations
%   w - vector of weights (non-negative, not necessarily normalized)
%
%   Output:
%   cov_xy - weighted covariance between x and y

% Ensure column vectors
x = x(:);
y = y(:);
w = w(:);

% Normalize weights
w = w / sum(w);

% Compute weighted means
mean_x = sum(w .* x);
mean_y = sum(w .* y);

% Compute weighted covariance
cov_xy = sum(w .* (x - mean_x) .* (y - mean_y));

end
You are almost done. Bear in mind that polk, poln and StatDist have size [N_a,N_z], while the function that ChatGPT came up with accepts vectors only. So you have to vectorize the inputs first:
cov_kn = weighted_covariance(polk(:), poln(:), StatDist(:) );
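And if you want the correlation rather than the covariance, you can reuse the same function to get the weighted variances and divide; a small sketch building on the code above:
% Weighted correlation from weighted covariances (reuses weighted_covariance)
var_k = weighted_covariance(polk(:), polk(:), StatDist(:)); % weighted Var(k)
var_n = weighted_covariance(poln(:), poln(:), StatDist(:)); % weighted Var(n)
corr_kn = cov_kn / sqrt(var_k*var_n);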
Hope this is useful!
There is not (yet) a command in the toolkit to compute autocorrelations.
There are two ways to manually compute them, both leveraging FnsToEvaluate. One, which Alessandro just described, is based on 'ValuesOnGrid' together with the 'weights' from StationaryDist (and because it is an autocorrelation it would also require the transition probabilities, from combining Policy with pi_z). The other is to use the 'SimPanelData' commands to create some panel data, and then calculate the autocorrelations from this like you would with any other panel data; a rough sketch of this second route is below.
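To give an idea of the panel-data route: the SimPanelValues_Case1 call here is from memory, and its exact argument list (and the orientation of the output) may differ across toolkit versions, so check the documentation.
simoptions.simperiods=500;  % length of each simulated series
simoptions.numbersims=1000; % number of simulated agents/firms
% Hypothetical call: check the toolkit docs for the exact signature
SimPanel=SimPanelValues_Case1(InitialDist,Policy,FnsToEvaluate,Params,n_d,n_a,n_z,d_grid,a_grid,z_grid,pi_z,simoptions);
n_panel=SimPanel.n; % employment panel
if size(n_panel,1)~=simoptions.simperiods % make rows=time if needed
    n_panel=n_panel';
end
burn=100; % drop a burn-in so the panel is (approximately) stationary
x=n_panel(burn:end-1,:); % n(t-1)
y=n_panel(burn+1:end,:); % n(t)
R=corrcoef(x(:),y(:));   % pool across agents and time
rho_n=R(1,2);            % serial correlation of employment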
Commands that do covariances and that do autocorrelations are both on my wish list of features to implement. It is just a matter of finding the time.
PS. @aledinola nice use of the AI to generate a simple command rather than typing it out manually.
This is very instructive, thanks! One question: do I have to reshape polk, poln and StatDist outside of the function weighted_covariance? The (:) is also done inside.
Another question: what if instead of cov(x,y) I want cov(x,x')?
x(:) is going to reshape x into a column vector (so it is being done inside the fn).
If you want the autocorrelation, you need to either include the transition probabilities (that fn won't be capable of it, because the creation of ValuesOnGrid 'drops' that info) or use the panel data.
You are right: there is no need to vectorize the three inputs of weighted_covariance, since this is done internally.
Regarding cov(x,x'): this is basically the autocorrelation you were asking about before. Now I see that my example based on cov(x,y) is not the best one. You want the correlation between employment at t and employment at t+1. Mathematically, the key moment is E[n(a',z') n(a,z)], where employment n is defined on the state space (a,z). (The details depend on the model you are using, of course.) The transition from a to a' is given by the policy function a'=g_{a'}(a,z), whereas the transition from z to z' is given by the exogenous probability \pi(z,z'). Written out as sums: E[n(t)n(t+1)] = \sum_{a,z} statdist(a,z) n(a,z) \sum_{z'} \pi(z,z') n(g_{a'}(a,z),z').
You should be able to write a function that takes as inputs n(a,z) (obtained with ValuesOnGrid), the policy function g_{a'}(a,z) (from Policy), the stationary distribution statdist(a,z) and the exogenous transition \pi(z,z'), and returns the covariance/correlation as output. Here is an attempt:
function res = serial_corr(pol_n,pol_aprime,pi_z,statdist)
% Computes E[n(t)*n(t+1)] on the discretized state space (a,z).
% pol_aprime contains the grid indices of a'=g_{a'}(a,z).
[n_a,n_z] = size(statdist);
res = 0;
for z_c=1:n_z
    for a_c=1:n_a
        aprime_c = pol_aprime(a_c,z_c); % index of a'=g_{a'}(a,z)
        for zp_c=1:n_z
            res = res + pol_n(a_c,z_c)*pol_n(aprime_c,zp_c)*pi_z(z_c,zp_c)*statdist(a_c,z_c);
        end
    end
end
end % end of function
I may have forgotten some normalizations, but I hope you get the idea. As an alternative, and maybe to check your results, you can simulate a panel dataset as Robert suggested.
Thanks, this helps a lot! I think I then have to subtract the product of the means and divide by the product of the standard deviations to get the correlation, but that's trivial.
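For completeness, the glue code might look something like this (a sketch: it assumes a model without a d variable, so that Policy(1,:,:) holds the a' grid indices, and it uses stationarity so that the mean and standard deviation of n are the same at t and t+1):
% From E[n(t)n(t+1)] to the autocorrelation; assumes stationarity
pol_aprime = squeeze(Policy(1,:,:)); % a' indices (assumes no d variable)
Enn = serial_corr(poln, pol_aprime, pi_z, StatDist); % E[n(t)n(t+1)]
En  = sum(StatDist(:).*poln(:));          % E[n]
Vn  = sum(StatDist(:).*(poln(:)-En).^2);  % Var[n]
rho_n = (Enn - En^2)/Vn; % autocorrelation of employment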
Is it possible to vectorize this code?
Glad to help. Not sure how to vectorize this code, but as long as you are not running it too often I would not even bother. Usually I try to vectorize the nested loops that are invoked many times, e.g. the ones in the value function iteration or the distribution iteration.
Regardless, it might be a good question for ChatGPT or Claude. For what it is worth, one possible vectorization is sketched below.
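A sketch, under the same assumptions as serial_corr above (pol_aprime holds the grid indices of a'); only the loop over the current z remains:
function res = serial_corr_vec(pol_n,pol_aprime,pi_z,statdist)
% Vectorized E[n(t)*n(t+1)]: the loops over a and z' are replaced by
% matrix operations; only the loop over the current z remains.
[n_a,n_z] = size(statdist);
En_next = zeros(n_a,n_z); % E[n(t+1)|a,z] = sum_{z'} pi(z,z')*n(a'(a,z),z')
for z_c=1:n_z
    En_next(:,z_c) = pol_n(pol_aprime(:,z_c),:)*pi_z(z_c,:)';
end
res = sum(statdist(:).*pol_n(:).*En_next(:));
end % end of function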
Wrote code to do this autocorrelation calculation for a discrete Markov process. At some point I will rewrite it as a toolkit command.
Put it in a new topic, just so I can give it a title that is easy to find.