I am investigating different methods for running radiation analyses, focusing primarily on speed. I have run tests with a simple model of just one 10x10 m square surface with 25 test points.

Ladybug_Radiation Analysis in GH, including genCumulativeSkyMtx and selectSkyMtx, takes around 18 seconds on my computer (with parallel_ set to true). The radiation analysis itself runs in under a second.

honeybee.radiance.recipes.radiation.gridbased in python (without Rhino/GH) runs in about 59 seconds.

honeybee.radiance.recipes.annual.gridbased with simulation_type=1 (without Rhino/GH) runs in about 78 seconds.

I’m interested in any insights into this and in what can be done to improve speed. Can the method from Ladybug legacy be used without Rhino/GH? Can the generated skies be saved and reused (I will use a bunch of different weather files)?

I am looking for a solution without Rhino/GH that could be run in a few seconds for a simple model.
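On reusing skies: the sky matrix depends only on the weather file, so one option is to generate it once per EPW and cache the result on disk. A minimal sketch of that idea in plain Python (the `cached_file` helper and file names are hypothetical, and the gendaymtx call in the comment is only an assumption about how you might generate the file):

```python
import os

def cached_file(path, source_path, generate):
    """Return `path`, calling `generate(path)` to (re)build it only when
    it is missing or older than `source_path` (e.g. the weather file)."""
    if (not os.path.exists(path)
            or os.path.getmtime(path) < os.path.getmtime(source_path)):
        generate(path)
    return path

# Hypothetical usage: rebuild the sky matrix only when the weather file
# changes. `generate` could shell out to Radiance, for example:
#   with open(mtx_path, "wb") as out:
#       subprocess.run(["gendaymtx", "-m", "1", "weather.wea"], stdout=out)
```

With this pattern, each weather file pays the sky-generation cost once; subsequent runs with the same EPW just read the cached matrix.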

This is based on the concept of daylight coefficients. The annual result is the sum of its parts (i.e., the 8760 hours). The calculation takes more time because we simultaneously, and independently, trace rays to 145 sky patches and then follow that up with some matrix calculations.
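The sum-of-parts idea can be sketched as a matrix product: a daylight-coefficient matrix (one row per sensor point, one column per sky patch) times a cumulative sky vector (each patch's radiation summed over the year). The numbers below are purely illustrative, not a real Radiance result:

```python
# Toy sketch of the daylight-coefficient idea: annual radiation at each
# sensor point is the dot product of that point's coefficients for the
# 145 sky patches with the cumulative (annually summed) patch values.

NUM_PATCHES = 145

def annual_from_coefficients(dc_matrix, cumulative_sky):
    """Multiply each sensor's coefficient row by the cumulative sky vector."""
    return [sum(c * s for c, s in zip(row, cumulative_sky)) for row in dc_matrix]

# Two hypothetical sensor points with uniform coefficients, for illustration.
dc = [[0.5] * NUM_PATCHES, [0.25] * NUM_PATCHES]
sky = [2.0] * NUM_PATCHES  # cumulative patch radiation (e.g. kWh/m2)

print(annual_from_coefficients(dc, sky))  # -> [145.0, 72.5]
```

Because the ray tracing only produces the coefficient matrix, the same matrix can be reused with a different sky vector (a different weather file) at the cost of just the cheap multiplication.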

The two methods aren’t really comparable, even though you can arrive at the same results as #1 from #2 (or #3). The reverse isn’t possible.

Thanks @sarith and @mostapha! I only need annual results, so genCumulativeSky seems to be the right path. Is that only implemented in Ladybug legacy right now?

Note that for #3, with daylight coefficients, you can use multiple CPUs to run it faster.
In the rfluxmtx and rcontrib commands you can specify the -n parameter followed by the number of CPUs: rfluxmtx -n 5 …

For strictly outdoor solar radiation, Ladybug legacy is really quick. For indoor solar radiation through glazed surfaces (using solar factors instead of visible transmittance) you can use #2 or #3.
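One practical detail for the glazed-surface case: Radiance's glass primitive expects transmissivity, not transmittance, so a solar transmittance has to be converted first. A small sketch using the standard Radiance conversion formula (the 0.6 value and the `glazing_solar` modifier name are just examples, not from the original post):

```python
import math

def transmittance_to_transmissivity(t):
    """Convert a (solar or visible) transmittance into the transmissivity
    value expected by Radiance's glass primitive, using the standard
    Radiance conversion formula."""
    return ((math.sqrt(0.8402528435 + 0.0072522239 * t * t)
             - 0.9166530661) / (0.0036261119 * t))

# Hypothetical glazing defined by its solar transmittance instead of its
# visible transmittance:
tn = transmittance_to_transmissivity(0.6)
print(f"void glass glazing_solar 0 0 3 {tn:.4f} {tn:.4f} {tn:.4f}")
```

For reference, a typical clear glass with 0.88 visible transmittance converts to roughly 0.96 transmissivity with this formula.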