Honeybee Radiation Analysis (Radiance parameters)



Hello everyone,

I’m trying to evaluate solar radiation on the facade of a building using Honeybee Legacy. I am using the grid-based analysis so that I get cumulative radiation values for each point of the grid over a year.

I tested several variations of the Radiance parameter settings according to the recommended options at http://radsite.lbl.gov/radiance/refer/Notes/rpict_options.html.

Generally, when I apply the different presets (Min, Fast, Accur), the accuracy naturally increases along with the simulation time. An interesting thing happens when I apply the “Max” settings: the simulation time drops drastically while the accuracy is similar to the “Accurate” settings.
For example, in my case I simulate 560 points on the “Accurate” settings in approximately 11 seconds, while with the “Max” settings the time drops to 2 seconds. When I then change -aa and -ar to anything other than 0, the time increases to as much as 18 minutes, with the same results I was getting from the “Max” simulation.

Can someone explain why this is happening?

Thank you very much, Ondrej


Hi, @Ondrej
In my experience, the RAD parameters have a big effect on the running time. For further discussion, please upload your Rhino file and Grasshopper definition.


Hi, @minggangyin
I attached my definition, although my question is rather theoretical: how is it that I am getting more precise results in less time?

HB radiation study.gh (193.7 KB)


I doubt that you are getting more precise results. The “Max” settings are somewhat misleading when taken in a literal sense. By setting the ambient accuracy to zero, you are disabling the irradiance interpolation algorithm in Radiance. Here is a test: try -aa settings of 0.5, 0.2, 0.1, 0.05, 0.02, and 0. You should see the calculation times increase progressively until the last value of 0.
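If it helps to run that sweep outside of Grasshopper, the test can be scripted directly against rtrace. This is only a sketch: the octree and points-file names are placeholders, and the other ambient parameters should be matched to your own study.

```python
# Build rtrace command lines for an ambient-accuracy (-aa) sweep.
# "scene.oct" and "grid.pts" are placeholder file names.
AA_VALUES = [0.5, 0.2, 0.1, 0.05, 0.02, 0.0]

def rtrace_cmd(aa, octree="scene.oct", points="grid.pts"):
    # -I asks for irradiance at the sensor points; -h drops the header
    # so the output is easy to parse.
    return f"rtrace -I -h -ab 2 -ad 1024 -aa {aa} {octree} < {points}"

commands = [rtrace_cmd(aa) for aa in AA_VALUES]
for cmd in commands:
    print(cmd)
```

Timing each command (e.g. with `time`) should reproduce the progression described above.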
This aspect of Radiance has some parallels to the adaptive subdivision methods implemented in radiosity-based software; the idea is that more calculations are performed at runtime based on the precision settings and the complexity of the geometry.


Hi @sarith

thank you for your feedback. I tried your test case with the different -aa values along with the “Max” settings, and my findings match yours. The time increases progressively, although there is only a minor difference in the results, which is most likely caused by the Monte Carlo sampling.

I found some posts on the same topic, and it seems that for point analysis it is safe to use -aa 0 as long as you are aware of the other parameters, especially -lw and -lr. I ran some test cases, and as long as I keep -lw at zero or very close to zero, the results do not change. However, when I use a value greater than 0, the reflected part of the radiation is neglected. Something similar happens with the -lr parameter: when the value is greater than 1 the results are not affected, but at 1 or lower the reflections are not counted in either.
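To make that behaviour concrete, here is a toy model of the documented termination rules (not Radiance’s actual code): a ray’s weight is multiplied by the surface reflectance at each diffuse bounce, and tracing stops once the weight falls below -lw or the bounce count reaches -lr.

```python
def bounces_traced(reflectance, lw, lr):
    """Count how many reflected bounces survive the -lw / -lr cutoffs.

    Toy model only: real Radiance also divides the weight among the
    -ad ambient samples (and uses Russian roulette for negative -lr),
    which pushes the effective cutoff much lower than shown here.
    """
    weight = 1.0
    bounces = 0
    while bounces < lr:
        weight *= reflectance        # attenuate at each diffuse bounce
        if weight < lw:              # -lw cutoff: ray is discarded
            break
        bounces += 1
    return bounces

# With a ground reflectance of 0.1, a large -lw kills even the very
# first reflected ray, so the reflected component disappears:
print(bounces_traced(0.1, lw=0.5, lr=8))    # prints 0
print(bounces_traced(0.1, lw=0.0, lr=8))    # prints 8
```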

Do you find these findings reasonable? Thanks, Ondrej!


Hello again,

Here is a short overview of a sensitivity analysis that we made of the Radiance parameters for radiation analysis.
The “Truth case” used the following parameters:
pt = 0.05, ds = 0.05, aa = 0.05, pj = 0.9, dt = 0.15, ad = 4096, dj = 0.7, dp = 512, lw = 0.005, xScale = 6, yScale = 6, ar = 481, as = 1024, dc = 0.75, av = 0, lr = 8, ps = 2, st = 0.15, sj = 1, dr = 3, ab = 8

The test model was uploaded earlier. The reflectance values used were 0.1 for the ground and 0.2 for the context geometry.

The first graph shows different values of the ambient accuracy, starting with 0, which results in a large error but a minimal simulation time, caused by the disabled irradiance interpolation. The precision and the simulation time increase rapidly for values close to zero and decrease for larger -aa values.

The next graph shows the impact of the “limit weight” parameter (-lw) while using the Truth case settings with -aa set to 0. From the results it is clear that by setting the value of -lw to zero, we are able to get rid of the error caused by disabling the irradiance interpolation demonstrated in the first graph.

Looking closely at the results, we discovered that with -aa = 0 and -lw above approximately 0.00005, the reflected part of the radiation is counted in only partially or not at all, which results in a significant error.
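One possible explanation for that ~0.00005 threshold (our assumption, not something stated in the Radiance documentation): if, with -aa 0, each of the -ad ambient sample rays carries roughly reflectance/ad of its parent’s weight, then the first-bounce rays off the 0.2-reflectance context geometry would have a weight of about 0.2/4096, which is very close to the observed cutoff:

```python
# Back-of-envelope estimate (hypothetical model): per-ray weight of
# first-bounce ambient samples with -aa 0, assuming weight ~ rho / ad.
ad = 4096          # ambient divisions from the "Truth case"
rho = 0.2          # context geometry reflectance
first_bounce_weight = rho / ad
print(f"{first_bounce_weight:.2e}")  # prints 4.88e-05
```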

Has anyone experienced similar behavior? Furthermore, does anybody know of a similar sensitivity analysis that could help us verify our findings? Thanks!



Hi @Ondrej, can you share the images (or the Radiance model) that you are generating? I don’t have access to a Rhino/Grasshopper installation at present.

If you set -aa to 0, as Greg mentioned in his post, you switch to a pure Monte Carlo simulation. Then the ambient calculation, in the case of mostly Lambertian geometry, is controlled through -ab, -ad and -lw. In rcontrib and rfluxmtx, ambient caching is turned off by default and the simulations are pure Monte Carlo.
The value of -lw, at least in the case of rcontrib/rfluxmtx, needs to be set to 1/ad. I guess the same setting should apply to -aa 0 runs in rpict. A year or so ago, I did a parametric run for those parameters while working on a tutorial for Radiance. Mostapha created a viewer for those results: https://www.ladybug.tools/radiance/image-parameters
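Applying that 1/ad rule to the -ad value used earlier in this thread gives a concrete number to try:

```python
# -lw recommendation for pure Monte Carlo runs (rcontrib/rfluxmtx,
# and, per the suggestion above, possibly rpict/rtrace with -aa 0).
def recommended_lw(ad):
    return 1.0 / ad

print(recommended_lw(4096))  # prints 0.000244140625
```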