5-phase method simulation times

Hi

Wondering if I’m getting correct simulation times for the 5-phase method or if my definitions are off. So I did a comparison:

Running an annual simulation of an office space measuring 12 x 3.6 x 2.8 (h) m, with three windows (defined as one group in HB+) and 168 sensor points, it took approximately

Honeybee Plus: 3.5 minutes
Honeybee Legacy (Daysim): 3.5 hours

When running the Honeybee Plus sample files for the 5-phase method I get around the same time.

- I’m a bit in disbelief about the simulation times of Honeybee Plus; can anybody explain why it is so much faster?

- Secondly, when I compare the results of Honeybee Plus with Legacy there is quite a difference at certain points (up to 40% for UDI), but I guess this is to be expected due to the improvements in calculating the direct solar contribution?

The results with Honeybee[+] will be faster because rcontrib does not use ambient caching while rtrace does. Daysim, which is based on Christoph Reinhart’s research from 18 years ago, uses rtrace. Honeybee[+] uses rcontrib.

(I am not sure if that was even one bit helpful. However, the other option is to suggest around 1000+ pages’ worth of journal papers, dissertations, presentations, etc. to make that point.)

Anyway, let’s first figure out whether your simulation parameters are right. There are two ways you can do that:

Option 1: Test for convergence
One way is to double your current -ad value and increase the -ab value by 1, then check whether the results change. To compare the results, pick a few hours at random and compare the first calculation against the second calculation with higher parameters. If the results change drastically, you will have to increase the parameters further.

This is based on the idea of convergence testing. You can see an example below from http://www.ladybug.tools/radiance/image-parameters. Towards the lower right of the image sequence you’d notice that there is no difference in results.
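If it helps, here is a minimal sketch of that comparison in Python/numpy. The file names and parameter values are made up for the example; it assumes each annual run has been exported to an 8760 x sensor-count array of illuminance values:

```python
import numpy as np

# Hypothetical file names; assumes each annual run has been exported to an
# (8760 x n_sensors) array of illuminance values in lux.
run_a = np.loadtxt("annual_ab5_ad25000.csv", delimiter=",")   # baseline parameters
run_b = np.loadtxt("annual_ab6_ad50000.csv", delimiter=",")   # -ab + 1, -ad doubled

# Pick a few random hours and compare the two runs sensor by sensor.
rng = np.random.default_rng(0)
hours = rng.choice(run_a.shape[0], size=5, replace=False)
for h in hours:
    diff = np.abs(run_a[h] - run_b[h])
    print(f"hour {h}: max diff = {diff.max():.1f} lux, mean diff = {diff.mean():.1f} lux")

# If the differences are large, keep bumping -ab/-ad until they level off.
```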

Option 2: Test for fidelity
Essentially, all the annual methods, whether in Honeybee, HB[+], DIVA, etc., are finite element approximations of standard ray-tracing calculations with continuous skies. That approximation relates to the idea that a continuous sky can be approximated by a series of sky patches and that the position of the sun in the sky can be interpolated from three or four nearby positions out of a total of sixty positions. In Honeybee[+], we only approximate sky patches; the sun calculation is precise.
The way to test the precision of a calculation done through an (approximated) annual method is to test its fidelity against an un-approximated calculation. So pick a time from your annual calculation, and then do a point-in-time simulation with a climate-based sky for the same weather data and time. If your results are off, you will have to increase the ambient parameters of your annual calculation until they match.
Here is one example of such a test by Mingbo: HB+, legacy and DAYSIM results
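A minimal sketch of that check, assuming the annual (5PM) results and the point-in-time rtrace results have both been exported as plain arrays (file names and the hour index below are placeholders):

```python
import numpy as np

# Placeholder inputs: the annual (5PM) results as an (8760 x n_sensors) array and
# the rtrace output of a point-in-time run with a climate-based sky for the same
# date, hour and weather file, as a 1-D array over the same sensor points.
annual = np.loadtxt("annual_results.csv", delimiter=",")
pit = np.loadtxt("pit_jun21_12.csv", delimiter=",")

# 21 June, 12:00 as an hour-of-year index (0-based; adjust to your own convention).
hour_of_year = 171 * 24 + 12
annual_hour = annual[hour_of_year]

# Relative error per sensor, guarding against near-zero point-in-time values.
rel_err = np.abs(annual_hour - pit) / np.maximum(pit, 1.0)
print(f"max relative error: {rel_err.max():.1%}, mean: {rel_err.mean():.1%}")

# Large errors suggest the annual run's ambient parameters need to go up.
```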

Five-phase simulations should also take much less time than the legacy version. However, it is hard to tell without looking at your model whether you are doing it correctly. I don’t have Grasshopper, so @mostapha or someone else will have to weigh in on this one.


Not really a reply to your original question, but as a general rule, if you’re not dealing with many dynamic window groups, using the daylight coefficient recipe is most likely a better option. You need a high-resolution BSDF for the 5th phase in order to get meaningful results.


Thank you @sarith and @mostapha for your replies !
Finally got around to this and started off by doing the convergence test as suggested. The office example chosen for selecting Radiance Parameters looks as follows

---------------------------------------------------Convergence---------------------------------------------------
I altered the approach a little bit, running three 5PM runs at each Radiance parameter step. Instead of comparing differences between runs with different Radiance parameters, I compared the three runs with the same Radiance parameters to each other. The reasoning is that the first two runs I did with different, roughly medium-quality, Radiance parameters gave fairly similar results by chance.

To quantify the differences between the three runs I compared all 8760x168 measurements of each run against the other two. This was done by subtracting all 8760x168 measurements between each pair of runs and taking the absolute difference, resulting in three 8760x168 “absolute difference” matrices. I then looked at the maximum difference and the average difference for each of the three matrices, giving three maximum-difference values and three average-difference values. Lastly, I averaged the three maximum differences (AM) and, similarly, took the average of the three average differences (AA). If it makes more sense in mathematical terms:

Math

(n = 8760x168)
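In numpy terms, what I computed is roughly the following (file names are placeholders for the three exported result matrices):

```python
import numpy as np

# The three repeated 5PM runs at the same Radiance parameters, each an
# 8760 x 168 matrix of illuminance values (file names are placeholders).
runs = [np.loadtxt(f"run_{i}.csv", delimiter=",") for i in (1, 2, 3)]

# Absolute difference matrix for each pair of runs.
pairs = [(0, 1), (0, 2), (1, 2)]
abs_diffs = [np.abs(runs[i] - runs[j]) for i, j in pairs]

AM = np.mean([d.max() for d in abs_diffs])   # average of the three maximum differences
AA = np.mean([d.mean() for d in abs_diffs])  # average of the three average differences
print(f"AM = {AM:.2f} lux, AA = {AA:.2f} lux")
```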
Results as follows:

The rest of the V-matrix parameters were set to qual = 2 and the D-matrix to qual = 0.

---------------------------------------------------Fidelity---------------------------------------------------
I chose 21 June at 12:00, first trying a Point In Time (PIT) simulation with -ab=10 and -ad=300.000 (called medium res.) and then -ab=12 and -ad=1.000.000 (called fine res.). The first odd thing that stands out is that the two PIT simulations have differences between them:


(Plan view, windows to the left)

When comparing the PIT simulations with the 5PM runs, I used metrics similar to those above (the only difference being n = 168). Still performing three runs for each 5PM Radiance parameter setup, I got the following results:

Fidelity

Quite big differences. When plotting the differences between the Fine res. PIT and the 5PM runs:

Interestingly, the differences between 5PM and PIT become worse when dialing up the 5PM -ad from 300.000 to 600.000.

---------------------------------------------------BSDF resolution---------------------------------------------------
My motivation for using the 5PM (compared to DC) is that I’m running a parametric study, currently with 6 different IGU systems for each geometry.

The BSDF files used have Klems resolution (145x145 patches). Building on @mostapha’s comment and this article by Greg Ward, ward-2011-var-res-bsdf.pdf (1.2 MB), I think the above differences between 5PM and PIT could be mitigated by a higher-resolution BSDF, which I am currently developing for each IGU.

Finally, a question:

Building on this, how does DC handle BSDFs differently from 5PM (T-matrix)?

Any other comments or suggestions are highly appreciated.

Hi @tobiaspedersentsp

How are you generating/acquiring the BSDFs for 5PM? Especially the last part involving direct-sun calcs.

BSDFs are (or should be) handled in-scene. @mostapha would be the right person to answer this.

This is indeed why 5-PM was developed by LBNL, so you are on the right track!

Some additional thoughts:

  1. If you are using BSDFs correctly, based on what I see of your model, anything more than -ab 6 in either PIT or annual calculations is overkill.
  2. Does 300.000 mean three hundred thousand? If it does, what is the corresponding value for -lw (it should be a floating-point number that is 1/ad or lower)? Mostapha and I were talking about setting it internally, but I am not sure if that has been keyed in yet.

Thank you @sarith

Through LBNL WINDOW; example: Triple_PXN.xml (1.6 MB)

Very good tip to know!

Yes it does: 300.000 = 300000. Only -ad and -ab were dialed; -lw was kept at 5e-7 (the default setting at complexity 2). I just checked, and -lw stays put when you change -ad.

Hmm… okay. You might want to run those simulations again then. As per the creator of Radiance, for Monte Carlo simulations with Radiance, -lw should always be adjusted along with -ad (see slide 29). As a rule, always set -lw to 1/ad. A value higher than 1/ad will result in some of the rays not being considered in your simulation. And this perhaps (only perhaps!) explains why your results are deteriorating with higher -ad values, as you mentioned here.
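As a quick worked example of that rule (plain arithmetic, the values just mirror the -ad settings discussed above):

```python
# Rule of thumb: keep -lw at (or below) 1/ad, and adjust it whenever -ad changes.
for ad in (300_000, 600_000, 1_000_000):
    lw = 1.0 / ad
    print(f"-ad {ad}  ->  -lw <= {lw:.2e}")
# -ad 300000  ->  -lw <= 3.33e-06
# -ad 600000  ->  -lw <= 1.67e-06
# -ad 1000000  ->  -lw <= 1.00e-06
```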

Regards,
Sarith

Update (5 minutes later): I see that your values for -lw are already pretty low. I am not sure if the weight of each ray should be a constant value, though. Setting a low enough value prevents them from being “killed” through Russian roulette, but I wonder if setting them too low might obscure the contribution from all the rays, as they all have “low” weights. If I find something new, I will share it here.


@tobiaspedersentsp
Hope you are all well! Did you find answers to your questions yourself? If so, would you care to share them? Curious…

Incredible -ad numbers you are running with. Is it Radiance or Accelerad, and did you get Docker up and running?

/Mathias Sønderskov

Hi @Mathiassn
Yes thank you - how about you?

Building on what we learned at the Radiance Workshop, I would say that the primary factors for the disagreement in the fidelity study (comparing an hour of the annual 5PM study {rcontrib} with the point-in-time study {rtrace}) are:

- Discretization error from using a Klems-resolution BSDF. With the discrete BSDF, the effect of having a 145x145-patch sky (annual study) vs a continuous sky (point in time) is magnified compared to having a continuous formulation of the angular dependencies of the IGU / IGU with shading; this could be a “glass” material or a higher-resolution BSDF that better resembles the continuous dependencies.

- 5PM in Honeybee[+] uses a different, more accurate solar algorithm to calculate the sun’s position compared to a point-in-time calculation in Radiance.
https://discourse.ladybug.tools/t/illuminance-discrepancy-from-honeybee-grid-based-honeybee-grid-based-and-honeybee-annual/4040/14

I ran the simulations overnight or during the weekend on multiple computers using Radiance. Haven’t been using Accelerad or Docker - have you?