Radiance batch files on Amazon Web Services

hi @mostapha,

I understand that it is possible to run multiple cases at the same time, especially for annual glare simulations, the way @RaniaLabib did very well (thanks). I now wonder how a single grid-based (annual) case written in HB[+] could be decomposed and dispatched across multiple cores.

So far, I was brute-force splitting the test points into 4 sets to run on 4 processors, and writing 4 case folders.
To run these in parallel from the command line, I had to make 4 copies of the Radiance folder and edit each command.bat file to set the corresponding PATH. It worked… but no doubt this is the crudest way of doing it.
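For what it's worth, the four manual copies could be driven from one script by giving each run its own environment instead of editing the batch files by hand. This is only a sketch of that idea, not HB[+] code: the `case_0`…`case_3` folder layout, the `radiance/bin` subfolder, and the `command.bat` name inside each case are assumptions based on the setup described above.

```python
import os
import subprocess
from concurrent.futures import ThreadPoolExecutor

# Hypothetical layout: case_0 .. case_3, each holding its own copy of
# the Radiance folder and its own command.bat (as in the manual setup).
CASES = ["case_%d" % i for i in range(4)]

def case_env(folder):
    """Build an environment whose PATH starts with this case's own
    Radiance bin folder, so each parallel run finds its own copy."""
    env = dict(os.environ)
    env["PATH"] = os.path.join(folder, "radiance", "bin") + os.pathsep + env["PATH"]
    return env

def run_case(folder):
    """Run one case's batch file in its own working directory and
    environment, so the four runs don't fight over a shared PATH."""
    return subprocess.call("command.bat", shell=True, cwd=folder,
                           env=case_env(folder))

def run_all(cases, workers=4):
    """Launch all case folders concurrently and return their exit codes."""
    with ThreadPoolExecutor(max_workers=workers) as ex:
        return list(ex.map(run_case, cases))
```

Threads are enough here because the heavy lifting happens in the external Radiance processes; Python is only waiting on them.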

With regards to more sophisticated procedures:

Is honeybee-docker-daylight the solution to this? How does it address the decomposition of a case?
Is it correct to say that Docker wouldn't divide the case itself, but rather receive a single job as a JSON payload, which it then divides into small pieces before dispatching them across the available CPU cores?

I can't stop thinking of the Butterfly commands `decomposePar` + `mpirun -np 4 simpleFoam -parallel` + `reconstructPar`. How applicable is a similar approach in HB[+], considering the different matrix calculations?
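To make the analogy concrete, here is a toy sketch of that decompose/solve/reconstruct pattern applied to a sensor grid. None of this is HB[+] or OpenFOAM API; `decompose`, `solve`, and `reconstruct` are hypothetical names, and `solve` is a placeholder for whatever per-chunk Radiance matrix calculation would actually run.

```python
from concurrent.futures import ProcessPoolExecutor

def decompose(grid, n):
    """decomposePar analogue: split the sensor grid into n contiguous,
    near-equal slices so results can be rejoined in the original order."""
    k, r = divmod(len(grid), n)
    parts, start = [], 0
    for i in range(n):
        size = k + (1 if i < r else 0)
        parts.append(grid[start:start + size])
        start += size
    return parts

def solve(sub_grid):
    """mpirun analogue: each worker handles only its own sub-grid.
    The body is a toy stand-in for a real per-chunk simulation."""
    return [x * 2 for x in sub_grid]

def reconstruct(parts):
    """reconstructPar analogue: concatenate per-chunk results back
    into a single result list in grid order."""
    return [v for part in parts for v in part]

def run_parallel(grid, n_procs=4):
    """Full pipeline: decompose, solve each chunk in parallel, rejoin."""
    with ProcessPoolExecutor(max_workers=n_procs) as ex:
        return reconstruct(ex.map(solve, decompose(grid, n_procs)))
```

The point of the sketch is that point-in-time and annual grid calculations are embarrassingly parallel over sensor points, so the grid plays the role that the mesh decomposition plays in OpenFOAM; whether HB[+]'s matrix pipeline exposes clean hooks for this is exactly my question.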