I changed the timestep in both calculation recipes in the file I last uploaded. The differences are shown below: the timestep doesn’t seem to make any difference for the Ladybug calculation results, but it could for Radiance. However, I wasn’t able to finalize my study because, when I used a timestep greater than 2, the Direct Sun Hours recipe did not return a .ill file. The file is not written, and the recipe does not raise any error.
My best guess about what is going on here is that the ambient divisions (-ad) or the ambient accuracy (-aa) of the Radiance studies aren’t good enough to resolve some of the very small frame surfaces in the model. So boosting these parameters should make the difference go away, and I think this is the intended behavior, right?
The strength of the Radiance approach is that you can adjust the parameters to your desired level of accuracy vs. speed.
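As a rough illustration of that accuracy-vs-speed dial, here is a hypothetical set of ambient-parameter presets in plain Python. The preset values are common rules of thumb for Radiance’s ambient options, not Ladybug Tools defaults:

```python
# Illustrative Radiance ambient-parameter presets (hypothetical values,
# not Ladybug Tools defaults). Lower -aa means HIGHER ambient accuracy.
PRESETS = {
    "fast":     {"-ab": 2, "-ad": 512,  "-as": 128,  "-aa": 0.25, "-ar": 16},
    "balanced": {"-ab": 3, "-ad": 2048, "-as": 512,  "-aa": 0.2,  "-ar": 32},
    "accurate": {"-ab": 5, "-ad": 8192, "-as": 2048, "-aa": 0.1,  "-ar": 128},
}

def to_cli(preset):
    """Render one preset as a Radiance command-line option string."""
    return " ".join(f"{flag} {value}" for flag, value in PRESETS[preset].items())

print(to_cli("balanced"))  # -ab 3 -ad 2048 -as 512 -aa 0.2 -ar 32
```

Bumping `-ad` up (and `-aa` down) is the usual first move when tiny geometry like window frames is being missed.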
Thanks for finding the bug. This seems to affect all of the recipes that were previously able to handle Wea timesteps greater than 2. The error is the same in all cases:
I was wrong that it was a bug on our end and I have an update: the reason why the simulation was not completing for timesteps greater than 2 is that Radiance historically had a limit on the number of modifiers (in our case, sun positions) that could be run through the rmtxop command. However, just a month ago, Mostapha managed to convince Greg Ward to remove this limit, and you can see this on the Radiance GitHub:
So, if you uninstall your current Radiance and install the latest Radiance from GitHub, you will be able to run all of the recipes with as many timesteps as you want:
It’s impressive how much work it takes to squash all these little bugs. Kudos to you guys; you seem to have an endless drive to look under all the engines to find out what went wrong and why.
Yes, I am aware, but as I said one post above, the Radiance settings for the Direct Sun Hours recipe are hidden, so I followed Mostapha’s suggestion of increasing the timestep. It’s not the same thing, yes, but perhaps it would be enough to bring the results closer.
I’ll find some time to update Radiance and redo the comparisons.
Thanks @pmcmm. That’s a good point about the Radiance parameters, and I’ll admit that I’m not sure why @mostapha did not expose them on this recipe. Mostapha, do you think we should expose them so that people can at least adjust the -aa and -ad?
I found some time to review your latest modifications. Everything is working well: I managed to run simulations with multiple timesteps! As we expected, these don’t help narrow the gap between the two calculations; the values are quite similar regardless of the timestep chosen. See the table below:
Can I start using the new Radiance 5.4a you linked above for all calculations? The compatibility matrix shows 5.3 for [DEV]; was it just not updated, or is it not recommended?
I ran a test of Direct Sun Hours with both the Ladybug component and the HB Radiance recipe in LB 1.3.
The explanation of the timestep shown below and the reply @chris mentioned seem to differ, and I’m not clear on this. Does it mean that the sun positions of the Wea are now hourly, and that the timestep needs to be set on both components, as we do with the Ladybug method?
I tested both methods on a design with context buildings, and the results of the colored mesh are slightly different on the facades. Both timesteps are set to 1, with a 1 m grid.
Do you think the difference comes from the timesteps (asked above), or from the -ad and -aa parameters of Radiance, which are left at the component’s defaults?
Yes. To do a direct sun study with a timestep other than 1, you need to plug the timestep into both the Wea and the recipe component. Also, if you are plugging hoys_ into the Wea, you should make sure that these HOYs are cognizant of the timestep (using decimal HOYs to select the sub-hourly sun positions that you want to evaluate).
Bear in mind that Weas at a timestep of 1 will evaluate sun positions on the half-hour (e.g. 10:30 instead of 11:00). Conversely, the defaults of the Ladybug Sunpath are set to evaluate the position on the hour. So make sure your Ladybug Sunpath is set up like this if you want to do a comparison:
If you are using half-hour sun positions in your Sunpath and you have some really tiny, detailed geometries in your model, then it’s possible that the -ad is to blame for the differences. I may expose the -ad on the “Direct Sun Hours” recipe if we find that this is the source of the issues.
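The timestep conventions described above can be sketched in plain Python. The interval-centring convention is my assumption for illustration; it is not the exact Ladybug implementation:

```python
# Sketch of the two timestamp grids (assumed conventions): a timestep-1
# Wea samples sun positions on the half-hour, while the default Ladybug
# Sunpath evaluates positions on the hour.

def wea_hoys(timestep=1, hours=range(9, 12)):
    """Decimal HOYs centred in each sub-hourly interval: 9.5 means 09:30."""
    step = 1.0 / timestep
    return [h + step / 2 + i * step for h in hours for i in range(timestep)]

def sunpath_hoys(hours=range(9, 12)):
    """On-the-hour HOYs, as the default Sunpath evaluates them."""
    return [float(h) for h in hours]

print(wea_hoys(timestep=1))  # [9.5, 10.5, 11.5] -- half-hour grid
print(wea_hoys(timestep=4))  # four decimal HOYs per hour: 9.125, 9.375, ...
print(sunpath_hoys())        # [9.0, 10.0, 11.0] -- on-the-hour grid
```

This is why a comparison only lines up if the Sunpath is shifted to the same grid as the Wea (or vice versa), and why sub-hourly hoys_ must be decimal values that land on the Wea’s timestep grid.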
I just manually updated my DirectSunHours analysis script from LB Legacy (SunlightHoursAnalysis).
The LB Legacy script could use 45,000 context geometries (imported from Revit) and take about 60-90 seconds for results.
The Ladybug Tools version (1.5.0), even after creating one big building extrusion (so about 44,999 fewer context geometries), is taking about 10 minutes.
Has anyone experienced this drastic change in performance, or am I doing something weird?
As far as I’m aware, the new and old components both use the same native Rhino mesh-intersection methods, so there shouldn’t be any major change in performance.
The one factor I can think of that might have changed is the meshing process; I’m not sure what meshing the old component used.
The time it takes will most likely depend on the detail (e.g. face count) of the geometry and context as meshes; the number of actual objects (e.g. breps) is less of a driving factor. If you input meshes instead of letting LB use its internal meshing methods (which I believe differ from the default Rhino/Grasshopper meshing), the simulation might speed up.
Any curved geometry is a likely cause of longer simulation times, since LB’s internal meshing can turn it into a very large number of faces.
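A toy cost model makes the point above concrete: total face count, not object count, drives the intersection work. All numbers here are hypothetical:

```python
# Hypothetical cost model: a naive direct-sun intersection loop does on
# the order of (test points) x (sun vectors) x (context mesh faces)
# ray-face tests. The number of separate objects never appears.

def intersection_workload(n_points, n_suns, n_faces):
    """Upper bound on ray-face tests for a brute-force intersection loop."""
    return n_points * n_suns * n_faces

# 45,000 simple boxes (say 12 triangles each) vs. one merged extrusion
# with the same total face count: identical work.
many_objects = intersection_workload(1_000, 300, 45_000 * 12)
one_object = intersection_workload(1_000, 300, 540_000)
assert many_objects == one_object

# A single finely meshed curved surface can dwarf both:
curved = intersection_workload(1_000, 300, 5_000_000)
assert curved > many_objects
```

So merging breps only helps if it also reduces the face count of the resulting mesh; supplying your own coarse meshes attacks the actual driver.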