Honeybee Radiation Analysis (Radiance parameters)

Hello everyone,

I’m trying to evaluate solar radiation on the facade of a building using Honeybee Legacy. I am using the grid-based analysis so that I get cumulative radiation values for each point of the grid over a year.

I tested several variations of the Radiance parameter settings according to the recommended options at http://radsite.lbl.gov/radiance/refer/Notes/rpict_options.html.

Generally, when I apply the different presets (Min, Fast, Accur), the accuracy naturally increases along with the simulation time. Something interesting happens when I apply the Max settings: the simulation time drastically decreases, with accuracy similar to the “Accur” settings.
For example, in my case, simulating 560 points takes approximately 11 seconds on the “Accur” settings, while with the “Max” settings the time drops to 2 seconds. When I then change -aa and -ar to something other than 0, the time increases to up to 18 minutes, with the same results I was getting from the “Max” simulation.
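For reference, this is roughly how the comparison could be reproduced outside Grasshopper by timing rtrace directly (a sketch only; the octree and points file names are hypothetical, and the parameter values are illustrative rather than the exact Honeybee presets):

```
# scene.oct: octree exported by Honeybee; points.pts: one
# "x y z dx dy dz" sensor per line (hypothetical file names).

# "Accur"-style run: ambient caching enabled (-aa > 0)
time rtrace -h -I -ab 2 -aa 0.15 -ar 128 -ad 1024 -as 512 \
    scene.oct < points.pts > accur.dat

# "Max"-style run: -aa 0 disables the ambient cache entirely
# (-ar and -as then have no effect)
time rtrace -h -I -ab 2 -aa 0 -ad 1024 \
    scene.oct < points.pts > max.dat
```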

Can someone explain to me why this is happening?

Thank you very much, Ondrej

Hi, @Ondrej
In my experience, the RAD parameters have a big effect on running time. For further discussion, please upload your Rhino file and Grasshopper definition.

Hi, @minggangyin
I enclosed my definition, although my question is rather theoretical. How come I am getting more precise results in less time?
Ondrej

HB radiation study.gh (193.7 KB)

I doubt that you are getting more precise results. The “Max” settings are somewhat misleading when taken in a literal sense. By setting the ambient accuracy to zero, you are disabling the irradiance interpolation algorithms in Radiance. Here is a test: try -aa settings of 0.5, 0.2, 0.1, 0.05, 0.02 and 0. You should see the calculation times increase progressively until the last value of 0.
This aspect of Radiance has some parallels to the adaptive subdivision methods implemented in radiosity-based software, the idea being that more calculations are commissioned at runtime based on the precision settings and the complexity of the geometry.
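A minimal sketch of that sweep as a shell loop (file names hypothetical, the other parameters illustrative):

```
for aa in 0.5 0.2 0.1 0.05 0.02 0; do
    echo "-aa $aa:"
    # time each run; with -aa 0 the irradiance cache is off
    time rtrace -h -I -ab 2 -aa $aa -ad 1024 \
        scene.oct < points.pts > "aa_$aa.dat"
done
```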

Hi @sarith

thank you for your feedback. I tried your test case with different -aa values along with the “Max” settings, and my findings are identical. The time increases progressively, although there is only a minor difference in the results, which is most likely caused by the Monte Carlo sampling.

I found this post on the same topic, and it seems that for point analysis it is safe to use -aa 0 as long as you are aware of the other parameters, especially -lw and -lr. I ran some test cases, and when I keep -lw at zero or very close to zero, the results do not change. However, when I use a value greater than that, the reflected part of the radiation is neglected. Something similar happens with the -lr parameter: when the value is greater than 1 the results are not affected, but at 1 or lower the reflections are again not counted in.
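These are roughly the runs I compared (a sketch; file names hypothetical, the remaining parameters as in my tests):

```
# -lw 0: rays are never terminated by the weight limit,
# so the reflected component survives
rtrace -h -I -ab 8 -aa 0 -ad 4096 -lw 0 \
    scene.oct < points.pts > lw_zero.dat

# -lw 0.005: ambient sample rays fall below the weight limit
# and are cut off, so the reflected part largely disappears
rtrace -h -I -ab 8 -aa 0 -ad 4096 -lw 0.005 \
    scene.oct < points.pts > lw_0005.dat

# -lr 1: with only one reflection allowed, the diffuse
# inter-reflections are lost as well
rtrace -h -I -ab 8 -aa 0 -ad 4096 -lw 0 -lr 1 \
    scene.oct < points.pts > lr_one.dat
```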

Do these findings seem reasonable to you? Thanks, Ondrej!

Hello again,

Here is a brief overview of a sensitivity analysis we made concerning the Radiance parameters for radiation analysis.
The following parameters were used for the “truth case”:
ab = 8, aa = 0.05, ad = 4096, ar = 481, as = 1024, av = 0, dj = 0.7, ds = 0.05, dt = 0.15, dc = 0.75, dr = 3, dp = 512, lr = 8, lw = 0.005, st = 0.15, sj = 1, ps = 2, pt = 0.05, pj = 0.9, xScale = 6, yScale = 6
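Written out as explicit command-line options (a sketch; the octree and points file names are hypothetical), the truth case corresponds roughly to:

```
# xScale/yScale are Honeybee grid inputs, and -ps/-pt/-pj (pixel
# options) and -sj only apply to image runs, so they are left out
# of the rtrace call.
AMB="-ab 8 -aa 0.05 -ad 4096 -ar 481 -as 1024 -av 0 0 0"
DIR="-dj 0.7 -ds 0.05 -dt 0.15 -dc 0.75 -dr 3 -dp 512"
LIM="-lr 8 -lw 0.005 -st 0.15"
rtrace -h -I $AMB $DIR $LIM scene.oct < points.pts > truth.dat
```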

The test model was uploaded before. The reflectance values used were 0.1 for the ground and 0.2 for the context geometry.

The first graph shows different values of the ambient accuracy parameter, starting with 0, which results in a large error yet minimal simulation time because the irradiance interpolation is disabled. Both precision and simulation time increase rapidly for values close to zero and decrease for larger -aa values.

The next graph shows the impact of the “limit weight” parameter (-lw) while using the truth-case settings with -aa set to 0. From the results it is clear that by setting -lw to zero, we are able to get rid of the error caused by disabling the irradiance interpolation demonstrated in the first graph.

Looking closely at the results, we discovered that when -aa = 0 and -lw is set to more than approximately 0.00005, the reflected part of the radiation is counted only partially or not at all, which results in a significant error.

Has anyone experienced similar behavior? Furthermore, does anybody happen to know of a similar sensitivity analysis that could help us prove our point? Thanks!

Ondrej

Hi @Ondrej, can you share the images (or the Radiance model) that you are generating? I don’t have access to a Rhino/Grasshopper installation at present.

If you set -aa to 0, as Greg mentioned in his post, you switch to a pure Monte Carlo simulation. The ambient calculation, in the case of mostly Lambertian geometry, is then controlled through -ab, -ad and -lw. In rcontrib and rfluxmtx, ambient caching is turned off by default and the simulations are pure Monte Carlo.
The value of -lw, at least in the case of rcontrib/rfluxmtx, needs to be set to 1/ad. I guess the same setting should apply to -aa 0 runs in rpict. (Incidentally, 1/4096 ≈ 0.00024, and scaled by your context reflectance of 0.2 that gives ≈ 0.00005, which may explain the threshold you observed.) A year or so ago, I did a parametric run over those parameters while working on a tutorial for Radiance. Mostapha created a viewer for those results: https://www.ladybug.tools/radiance/image-parameters
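As a sketch, deriving -lw from -ad could look like this (file names hypothetical; -ad 4096 as in your truth case):

```
ad=4096
lw=$(echo "scale=8; 1/$ad" | bc)   # 0.00024414
rtrace -h -I -ab 8 -aa 0 -ad $ad -lw $lw \
    scene.oct < points.pts > mc.dat
```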

Hi again @sarith,

here is the model that I am testing. I’m examining 560 points on my facade, based on a 3x6 m grid.


Thank you for the additional information, although I’m using the Honeybee Legacy “Run Daylight Simulation” component, this one: https://rhino.github.io/components/honeybee/runDaylightSimulation.html. As far as I know, it is based on an rtrace simulation only, so I am a bit confused about when it is okay to mix the settings of the different Radiance programs.
Regarding the link you sent with the analysis parameters: there is a sampling parameter -c. Is that comparable to any parameter from rpict, or is it only used for the 2-, 3- and 4-phase methods?

Thank you very much! Ondrej

Hi @Ondrej, for rpict/rtrace comparisons you can refer to @MingboPeng’s comparison here: https://tt-acm.github.io/DesignExplorer/?ID=KeF5zn (@MingboPeng, in case you still have those original HDR images, any chance that we could have a comparison in falsecolor as well?).

Coming back to defining a baseline: relative error is a fairly reliable means of establishing convergence (see here, page 89). In your case, since you are comparing between simulations (and not with real-world measurements), you can define convergence as the point when the results stop changing by more than a certain percentage between successive simulations.
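As a sketch, that criterion can be checked directly on the result files (assuming one irradiance value per sensor point per line from two successive runs; file names hypothetical):

```
# Compare successive runs; if the maximum relative change is
# below your tolerance (say 2%), treat the run as converged.
paste run_prev.dat run_curr.dat | awk '
  { d = ($2 != 0) ? ($2 - $1) / $2 : 0
    if (d < 0) d = -d
    if (d > max) max = d }
  END { printf "max relative change: %.4f\n", max }'
```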

Hi @sarith,
thanks for sharing the interesting links! The comparison study looks very nice.
I just want to make sure that we understand each other. The study that I am conducting investigates the amount of radiation landing on the facade of my building, so I am not generating any HDR or falsecolor images.

I am interested in cumulative radiation values over a year, which I then use in a subsequent thermal comfort evaluation. That is why I am convinced that I do not need ambient caching when I only want to evaluate the amount of radiation at a point.

The script that I am developing is supposed to predict the thermal comfort of massing models, so I am looking for a good balance between precision and simulation time.

Thanks for your feedback.
Ondrej

Yup, got it. Actually, the underlying mechanisms for generating HDR images and for getting irradiance/illuminance at points are mostly the same. For both rtrace (for points) and rpict (for images), the effect of the ambient parameters (-ab, -ad, -ar etc.) and the direct parameters (-dj, -dt etc.) is the same. Generating HDRs involves some additional parameters in the form of view (-vp, -vd etc.) and pixel (-ps, -pj etc.) options.
For generating an 800x800 HDR image with full pixel sampling, rpict will generate 640,000 ray origins before starting the ray-tracing process. For rtrace, the ray origins are defined by the user and are usually in the hundreds or thousands (560 in your case).
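As a sketch of that difference (file names and view values hypothetical, options illustrative):

```
OPTS="-ab 2 -ad 1024 -aa 0.15 -ar 128"

# rtrace: the user supplies the ray origins, one sensor per line
rtrace -h -I $OPTS scene.oct < points.pts > points.dat

# rpict: the view alone defines 800 x 800 = 640,000 ray origins
rpict -x 800 -y 800 -vp 0 -30 1.5 -vd 0 1 0 \
    $OPTS scene.oct > view.hdr
```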