Validation of HB/LB with experimental data

Hi

I am working on validating a workflow developed with Honeybee/Ladybug against experimental data I have collected, and I am trying to find previous studies from people who have done something similar (to reference and read). I was wondering if anyone has carried out this type of work and, if so, if they could send me the link to the publication :slight_smile:

Thanks

Ellika

Do you intend to validate workflows, or do you intend to create a calibrated model? I believe it is the latter?

Hi

Sorry if I was unclear. I have made a model of a test facility for which I have measurements of daylighting levels and air temperatures that I collected last summer over two months (along with weather data). I am calibrating the model on only a couple of days, considering a handful of parameters, and then validating that model over a longer period of time (both for the thermal model and the daylighting). So I am looking for other people who have done something similar, so I can check out their work and take a look at the results they obtained.

Hope this is clearer!

Ellika

Yes. This is calibrated modelling. IBPSA proceedings and MIT Sustainability Lab will be good places to start for publications.

Try also Google Scholar. There should be plenty of material there.
-A.

Hi

Thanks for the tips. I am familiar with both IBPSA and Google Scholar (and others) :sweat_smile: but it is not that obvious to find model calibration (or model validation) publications which specifically use Honeybee and Ladybug. I know there are hundreds of papers for E+ or Radiance individually; I was just asking here in case someone had specifically used Ladybug/Honeybee and wanted to share the link to their publication (or knew of one that is good/interesting).

Thank you anyhow :slight_smile:

Ellika

This study, if it was published, might be relevant: Maximum allowable error (avg illuminance)

Since Ladybug/Honeybee are only interfaces that prepare geometry and information for the simulation engines, the tools themselves have little bearing on the calibration process.

Hi Ellika,
I need to find some validation of Ladybug Tools calculations and results, something like a comparison of the results of different software. I am wondering whether you found anything like that in your research and whether it would be possible to share it, as I am not finding any results.

Regards,
Azin

Hi @az.sanei, you can refer to this recent paper:
Co-simulation and validation of the performance of a highly flexible parametric model of an external shading system

Thank you @minggangyin. Do you have any information about the energy side of Ladybug Tools as well?
All I can find is about radiation, daylighting, shading…
I am wondering whether anyone has ever done validation of the energy calculations.
I found a related topic at this link for anyone who gets here looking for validation.

I believe your question is not “fair”.
LB uses existing AND validated engines for calculations. In a manner of speaking, LB is an interface for such engines. If you want “validation” for energy calculations you should search for uses of EnergyPlus (the same goes for Radiance as the engine for daylighting). You will find thousands of them.
The fact that someone used LB for research, and published a paper, is not a validation in itself.
As for comparing different software: again, if all of them use E+ as their simulation engine you should get pretty much the same results. If you don’t, then probably the settings are not the same (and that is a different question). You can, though, compare the UIs and which one is more comfortable to use.
-A.

Hi

The work I did was not to validate EnergyPlus or Daysim. Both have been validated for many years. What I did was to look at the error introduced when I used a context element as a shading system instead of the predefined shading device component. The value of this is that there are limitations in using the predefined shading device if you want a free-form facade, or when you want to generate complex shading using optimization without having to generate BSDFs beforehand.

@az.sanei, in building simulation there are two ways to validate results. The first one is to measure experimental data and compare it to simulation results. If you read my paper, which @minggangyin was so kind to link, you will actually see that I considered the thermal side of the tool too. Experimental validation is expensive (because you have to rent a test facility with many sensors), so you will mostly find validation work on shorter simulation periods, as I did. The other way to validate software, for longer time periods and more cases, is to use the BESTEST cases (here is an article about them: https://www.researchgate.net/publication/287369055_Twenty_years_on_Updating_the_iea_bestest_building_thermal_fabric_test_cases_for_ashrae_standard_140)
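
To make the “compare measured data to simulation results” step a bit more concrete, here is a minimal sketch (with made-up numbers, not taken from my paper) of the kind of error metrics typically used for such a comparison, NMBE and CV(RMSE). In practice you would feed in two aligned hourly series, e.g. your sensor readings and the corresponding Honeybee/EnergyPlus outputs:

```python
import math

def nmbe(measured, simulated):
    """Normalized mean bias error (simulated minus measured), as a % of the measured mean."""
    n = len(measured)
    mean_meas = sum(measured) / n
    return 100.0 * sum(s - m for m, s in zip(measured, simulated)) / (n * mean_meas)

def cv_rmse(measured, simulated):
    """Coefficient of variation of the RMSE, as a % of the measured mean."""
    n = len(measured)
    mean_meas = sum(measured) / n
    rmse = math.sqrt(sum((s - m) ** 2 for m, s in zip(measured, simulated)) / n)
    return 100.0 * rmse / mean_meas

# Hypothetical example: six hourly air temperatures from the sensors and from
# the simulation model, aligned on the same timestamps beforehand.
measured = [21.4, 22.1, 23.0, 23.8, 24.5, 24.9]
simulated = [21.0, 22.4, 23.3, 24.2, 24.4, 25.3]

print(f"NMBE:     {nmbe(measured, simulated):+.1f} %")
print(f"CV(RMSE): {cv_rmse(measured, simulated):.1f} %")
```

The thresholds you accept for these metrics depend on which guideline you follow and on whether you compare hourly or monthly values, so check the criteria used in the papers you end up referencing.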

The procedure here is to use standardized input for several benchmark cases and compare the results of different simulation engines. This is for energy simulations. As @AbrahamYezioro mentioned, Honeybee uses EnergyPlus as an engine, so there shouldn’t be much of a difference unless you are actually not giving it the right input for the case you are checking. If this is what you are looking for, then you should start looking in that direction, or you can even do it yourself (we give this as an exercise in a master’s course I teach; it’s not very difficult :slight_smile: ) (edit: there is an example of it here https://en.sj.dk/media/2517/building-performance-simulation-in-arcitectural-design.pdf)
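
As a rough illustration of that BESTEST-style procedure, the check essentially boils down to seeing whether your engine’s result for each standardized case falls within the range spanned by the reference programs. The numbers below are placeholders (not actual ASHRAE 140 / BESTEST values), just to show the shape of the comparison:

```python
# Illustrative BESTEST-style check: for each benchmark case, see whether your
# engine's result falls within the range spanned by the reference programs.
# All numbers here are placeholders, NOT actual ASHRAE 140 / BESTEST values.

reference_ranges = {
    # case id: (min, max) annual heating load in MWh across reference programs
    "600": (4.3, 5.7),
    "900": (1.2, 2.0),
}

my_results = {
    "600": 5.1,  # hypothetical result from your own Honeybee/EnergyPlus model
    "900": 2.3,
}

for case, (low, high) in reference_ranges.items():
    value = my_results[case]
    status = "within range" if low <= value <= high else "outside range - check your inputs"
    print(f"Case {case}: {value:.1f} MWh, reference {low:.1f}-{high:.1f} MWh ({status})")
```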

Good luck
Ellika

Thank you @EllikaCachat. The practice you mentioned may actually be the answer to what I have been asked to solve. Thanks for explaining the topic so clearly.
Best,
Azin