Errors When Running Portfolio-Choice Models

I tested the following code:

x = rand(1,3)
x([1,2]:[1,2])

In R2024a it gives a warning, but it is fine in R2023b! I hate it when Matlab breaks backward compatibility. In Fortran you can still compile and run code written in the 70s :wink:

1 Like

Now the CGM2005 code is running smoothly. I am on the way and will continue studying your codes on Monday. Thank you.

1 Like

To fix the “colon operands must be scalars” warning in the CGM2005 code, just replace

Params.sj.College=1-dj_temp((Params.agej.College+Params.agejshifter.College):100);

with

Params.sj.College=1-dj_temp((Params.agej.College(1)+Params.agejshifter.College):100);

Note Params.agej.College(1).
I have learnt that in Matlab, when the colon operator is given nonscalar operands, it uses only the first element of each operand. For example, create a vector x = rand(1,4); then

i_start=[2,56]
i_end=[3,inf]
x(i_start:i_end)

will give the same result as

x(2:3)
2 Likes

I made the change Alessandro describes in the previous post to the CGM2005 codes. Hopefully that cleans up the “colon operands must be scalars” warnings.

2 Likes

There are no warnings anymore. Thank you.

2 Likes

I was thinking about how to parallelize over permanent types and I came across this old post :slight_smile:
Currently the toolkit solves the VFI (or distribution, etc.) for each type sequentially. Since iterations over types are of course independent, they would benefit greatly from parallelization. Since using multiple GPUs might not be feasible, isn’t it possible to do something hybrid: parallelize with parfor over types, and then parallelize on the GPU within each type? I guess the answer is no, but this is a limitation of Matlab. In Fortran and C you can do this; I think it’s called heterogeneous parallelism.

EDIT

Actually it seems that heterogeneous parallelism over CPU and GPU is possible even in Matlab
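To make the idea concrete, here is a minimal sketch of the hybrid pattern in Matlab (requires the Parallel Computing Toolbox): an outer parfor over permanent types, where each worker pins itself to one of the available GPUs before running GPU code. Note that `solve_vfi_for_type` is a hypothetical placeholder for the per-type solver, and the sketch assumes at least one GPU is visible to Matlab.

```matlab
% Sketch: parfor over permanent types, each worker assigned a GPU.
% solve_vfi_for_type is a hypothetical placeholder for the per-type
% value-function solver; it is NOT a toolkit function.
nTypes = 4;
V = cell(nTypes,1);
parfor iType = 1:nTypes
    % Round-robin assignment of workers to the available GPUs
    gpuDevice(mod(iType-1, gpuDeviceCount) + 1);
    V{iType} = solve_vfi_for_type(iType); % placeholder solver call
end
```

With a single GPU this just serializes the GPU work, which is the limitation discussed below; with multiple GPUs (or MIG instances) the types genuinely run in parallel.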

1 Like

Doing a parfor-loop over a parallel problem requires hardware that can parallelize at two levels (e.g., with 12 CPU cores you could have a loop that runs 3 parallel instances, each of which uses 4 cores in parallel). I am not sure whether Matlab can do this with CPU parallelization; I’ve never looked into it.

Nowadays the agent distribution in the toolkit is computed entirely on the CPU (the Tan improvement requires a reshape() of a sparse matrix, which cannot be done on a GPU; Matlab has sparse GPU matrices, but does not allow reshaping them). This part is not even parallel at the moment, so I should implement a parfor-loop over permanent types for it. Thanks for the good idea!

If you have one GPU, this is not possible (regardless of the programming language), as there is only one level at which parallelization can occur. There are two ways out of this: either you have multiple GPUs, or you use one of the higher-end server NVIDIA GPUs whose “Multi-Instance GPU (MIG) feature allows a single NVIDIA GPU to be partitioned into multiple GPU instances, each with its own dedicated resources”. Currently I have access to neither of these (a computer with multiple GPUs, or a GPU high-end enough to have the MIG feature). But it is something to keep in mind, as presumably MIG will become more common.

PS. There is another approach, which is to send off/broadcast the code in ‘batches’ to different clusters, each of which might have a GPU of its own. My impression from the abstract of that link is that they are more interested in this issue, together with mixing CPU/GPU/FPGA hardware. This is more of an HPC/supercomputing thing; people using such high-end hardware tend to write more boutique code.

2 Likes

On the server of my university, I have access to the following GPU resources:

  • The QoS provides access to nodes with the following GPU models:
    • NVIDIA A100-80, 80GB
    • NVIDIA A100-40, 40GB
    • NVIDIA A30

At least the first one supports Multi-Instance GPU (MIG). If you write some code I am happy to test it :slight_smile:

More seriously, I think I can add external researchers (i.e., not affiliated with my university) to my projects on the server. We might think about this in the future.

1 Like

Those A100s and A30s can do MIG.

Table 1 of: NVIDIA Multi-Instance GPU User Guide r560

You’d have to check if MIG is activated on them (I believe it can be toggled on/off), ask your IT peeps

2 Likes