Sensitivity Analysis within Honeybee

Hi all!

I’m playing around with the capabilities of Honeybee to create different idfs and run them parametrically. With a set of 300+ different idfs and simulations created by Honeybee, I want to analyse the spaces based on their Thermal Autonomy (TA) and Daylight Autonomy (DA) values.

However, calculating TA and DA requires the HBObjects to be re-input into the components to be read. This can be automated, but it makes getting objective values out of the various simulations an extensive process… it is a bit difficult to explain everything that needs to be done to get the analysed results out.

Does anyone have any suggestions on how you might be able to conduct a sensitivity analysis on all simulations run to determine which parameters are the most influential to a given objective (ie. TA and DA)?

Hi Elly,

I would look (in GH+LB_HB) at Pollinator or Design Explorer.


Sorry for the long post, but I think this topic is very important.

I agree with Abraham. Pollinator or Design Explorer are great ways to analyze iterative studies. These data viewers allow easy sorting/exploration of the data but they don’t provide detailed statistical analysis.

Currently, one of the limitations with Pollinator/Design Explorer is that it relies on the user to find the patterns. Often those patterns simply boil down to “this parameter is most important.” That is amazing to have, but I’m sure there are algorithms out there that could rip through the data to quantify how much more important one parameter is than another. Or identify those parameters/outputs that are statistically significant (or irrelevant). Did you include enough iterations to draw sound conclusions? What about an algorithm that exposes trends/patterns that are less apparent? Is this the kind of sensitivity analysis you are working towards? If so, I’d love to know more.
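As a first pass at that kind of analysis, ranking parameters by the strength of their correlation with an objective is a simple starting point (variance-based methods like Sobol indices are more rigorous). A minimal sketch in Python, where the parameter names and run values are purely hypothetical:

```python
def pearson(xs, ys):
    """Pearson correlation of two equal-length numeric sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

def rank_parameters(samples, objective_key):
    """Rank input parameters by |correlation| with the objective.

    samples: list of dicts, one per simulation run, mapping parameter
    and objective names to values (the names here are up to you).
    """
    objective = [s[objective_key] for s in samples]
    keys = [k for k in samples[0] if k != objective_key]
    scores = {k: pearson([s[k] for s in samples], objective) for k in keys}
    return sorted(scores.items(), key=lambda kv: abs(kv[1]), reverse=True)

# Hypothetical results from three runs: TA falls as WWR rises,
# while lighting power density carries no signal here.
runs = [
    {"wwr": 0.2, "lpd": 10.0, "TA": 80.0},
    {"wwr": 0.4, "lpd": 12.0, "TA": 70.0},
    {"wwr": 0.6, "lpd": 10.0, "TA": 60.0},
]
print(rank_parameters(runs, "TA"))  # wwr dominates, lpd is noise
```

With enough iterations recorded, the same ranking could of course be done directly on the Design Explorer csv columns.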

It sounds like you have the automation part down, but I’ll include the following for anyone else who is interested in doing this. I recommend automating a parametric model to run individual iterations one after the other (in series) rather than generating many models at the same time (parallel). Components like Brute Force or Octopus will automate sliders to update your model using your chosen input parameters. This creates the “re-input” you are looking for to trigger a new HB calculation. I like to connect integer sliders to stream gate components to create more complex data flows and limit the number of iterations I’m testing. For example, you can use a single slider to control multiple stream gates, updating many input parameters at a time with specific values. Rather than test every combination of every input parameter (this compounds quickly), you can test only those combinations you care about. Stream gates allow you to iterate through geometry options as well as numerical input parameters.
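The idea of testing only the combinations you care about, rather than the full cross product, can be sketched outside GH as well; the parameter names and values below are purely illustrative:

```python
from itertools import product

# Hypothetical input parameters with low/typical/high settings.
wwr_options = [0.2, 0.4, 0.6]          # window-to-wall ratio
insulation_options = [1.5, 2.5, 4.0]   # illustrative R-values

# The full cross product compounds quickly: 3 x 3 = 9 runs here,
# but five parameters at three levels each is already 243 runs.
all_combos = list(product(wwr_options, insulation_options))

# Testing only curated combinations keeps the count down, mirroring
# how a single slider driving several stream gates selects one
# scenario per step.
curated = [
    (0.2, 4.0),   # small glazing, heavy insulation
    (0.4, 2.5),   # typical baseline
    (0.6, 1.5),   # large glazing, light insulation
]

for i, (wwr, r_value) in enumerate(curated):
    print(f"run {i}: WWR={wwr}, R={r_value}")
```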

Once a single run has completed, you can save the input parameter settings and output values to an excel or json file or use a data recorder component to store the values in GH. Pollinator or Design Explorer then allow you to analyze all that data.
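On the file format: Design Explorer reads a flat csv and, as I understand it, uses `in:`/`out:` header prefixes to tell input parameters from result columns. A minimal sketch with made-up run data:

```python
import csv

# Hypothetical per-run records: input settings plus computed outputs.
runs = [
    {"WWR": 0.2, "LPD": 8.0, "TA": 72.5, "DA": 55.0},
    {"WWR": 0.4, "LPD": 12.0, "TA": 64.1, "DA": 68.3},
]

inputs = ["WWR", "LPD"]
outputs = ["TA", "DA"]

# Prefixing headers with "in:" / "out:" marks which columns are
# parameters and which are results for Design Explorer.
header = [f"in:{k}" for k in inputs] + [f"out:{k}" for k in outputs]

with open("iterations.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(header)
    for run in runs:
        writer.writerow([run[k] for k in inputs] + [run[k] for k in outputs])
```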


Interesting, Leland.

Curious what Elly’s approach will be.


Hi Leland,

Apologies myself for the long reply! This is part of my Master’s thesis, so it is very interesting to me…

I am extremely interested in this topic for simulation, particularly in the early stages of design, mainly due to the large uncertainties that exist in these phases. Optimisation in the early phases may be helpful, but it does not give designers more information on “where to go from here”. Once the designer changes a parameter to suit a client requirement, legal requirement or other constraint, the optimised result may very well be thrown out because the changed parameter has such a large effect.

I am hosting several workshops and focus groups in the next month (one for students at Victoria University of Wellington, one for architecture practitioners and one for engineering practitioners) to teach the basics of Honeybee and Ladybug within Rhino, as NZ is very new to any form of distributed modelling methods (using visual programming languages such as Grasshopper and Dynamo to communicate between design tools and building simulation tools). In the focus groups, I am not focusing on Honeybee as a tool so much as asking the industry its opinions on the feasibility and desirability of developments such as Honeybee.

I find that many of the informal interviews I have been conducting point to one question: would you rather know the optimised concept, or the most significant design parameters to be wary of at the early stages of design?

I am amazed at the capabilities of Honeybee because it has been such a pain to remodel anything for E+ and Radiance in the past. I particularly love the ability to generate hundreds of idfs with varying parameters within 10 minutes, without having to set up some form of macro to do it. The visualisations of Honeybee are awesome, to say the very least. But as someone who is interested in doing a sensitivity analysis, say with Thermal Autonomy, I feel there is an element lacking in the analysis from an engineering and research/academic standpoint.

The way I have set up my files actually creates 300+ idfs with all the various parameters. The parameters only vary between low, typical and high settings for power densities, WWR, schedules and insulation. These have all been drawn from a large five-year project in which we monitored commercial buildings here in NZ to gain a better understanding of data for purposes like this. I then run them in parallel as batch files and re-insert the data back into Honeybee.
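For anyone wanting to script that parallel batch step outside of GH, a rough sketch along these lines could work; it assumes the EnergyPlus command-line interface is on your PATH, and the folder and weather-file names are placeholders:

```python
import subprocess
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

# Placeholder names: point these at your E+ install, epw and idf folder.
ENERGYPLUS = "energyplus"      # assumes the E+ CLI is on PATH
WEATHER = "wellington.epw"

def build_command(idf_path):
    """Assemble the command line for one run (one output folder per idf)."""
    out_dir = idf_path.with_suffix("")
    return [ENERGYPLUS, "-w", WEATHER, "-d", str(out_dir), str(idf_path)]

def run_one(idf_path):
    """Run a single idf; returns the process exit code."""
    return subprocess.call(build_command(idf_path))

if __name__ == "__main__":
    idfs = sorted(Path("idf_batch").glob("*.idf"))
    # Threads are enough here: each worker just waits on its subprocess.
    with ThreadPoolExecutor(max_workers=4) as pool:
        codes = list(pool.map(run_one, idfs))
    print(f"{codes.count(0)} of {len(idfs)} runs succeeded")
```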

What I am playing around with at the moment, though, is re-evaluating the csv that the TA component produces for further analysis, because the component requires so many additional components to analyse the data in that form, and it does not simply give a numerical value in % for the space’s performance.
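Re-evaluating the csv by hand is quite doable. As a rough sketch, Thermal Autonomy can be computed as the percentage of occupied hours the zone stays within a comfort band; the column name, comfort limits and occupancy schedule below are assumptions, not Honeybee’s exact output format:

```python
import csv

def thermal_autonomy(csv_path, temp_col="Zone Operative Temperature",
                     lo=20.0, hi=26.0, occupied_hours=range(8, 18)):
    """Percent of occupied hours the zone stays within the comfort band.

    Assumes an hourly csv with a header row, where hour-of-day can be
    derived from the row index. Column name, comfort band and schedule
    are illustrative placeholders.
    """
    in_comfort = 0
    occupied = 0
    with open(csv_path, newline="") as f:
        reader = csv.DictReader(f)
        for i, row in enumerate(reader):
            hour_of_day = i % 24
            if hour_of_day not in occupied_hours:
                continue
            occupied += 1
            if lo <= float(row[temp_col]) <= hi:
                in_comfort += 1
    return 100.0 * in_comfort / occupied if occupied else 0.0
```

A loop over all 300+ result files then gives one TA% per run, ready to record against that run’s input parameters.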

I have only just begun to try doing some form of sensitivity analysis within Honeybee itself, but I was curious whether there are already plugins within Grasshopper which allow some form of sensitivity analysis.


I agree that optimization is limited. I strongly prefer using tools like Honeybee/Design Explorer to map out all possibilities and understand the underlying drivers of performance. Before we try to hit the bullseye, let’s make sure we are shooting at the right target. “Where to go from here” is a great way to put it. I couldn’t agree more. Also, small decisions made in early design (massing, orientation, program layout) can have a huge impact on overall performance, while optimizing one façade element in DD may be splitting hairs with limited benefit.

I think these tools have the ability to calculate better information and better communicate that information. In the end, adopting an informed design process, where the designers actually adapt their designs based on performance data, is critical for these tools to be effective. So it’s not enough to just calculate the next best metric, we need to inform design decisions. I’m curious what NZ designers think. Good luck with your workshops and focus groups. I hope you share the results with others on this forum.

I agree, HB makes the old-school Radiance interface look archaic. I could never go back.

Current HB components analyze results from a single zone and allow for a lot of customization of the data. I think most people develop their own custom analysis GH scripts to create custom graphics, graphs and metrics from the raw data output by HB. Design Explorer fills the gap of exploring multi-run data sets. Check out this link (Abraham included it in his post) and play around with the single zone energy model (middle option).

I don’t see a way around reimporting the data back into GH for post-processing of the raw data. On the daylighting side HB has components to take the raw .ill file and convert it into DA calcs. If you create your idfs in series, you can perform this post-processing automatically after each file is run before iterating to the next option.
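The DA post-processing step is simple enough to reproduce by hand if needed: for each sensor point, count the share of occupied hours at or above a target illuminance (300 lux is the common choice). A sketch, assuming the .ill data has already been parsed into rows of floats:

```python
def daylight_autonomy(ill_rows, threshold=300.0):
    """Daylight Autonomy per sensor point.

    ill_rows: one row per occupied hour, one illuminance value (lux)
    per sensor point -- the shape of a Radiance .ill matrix once any
    header lines are stripped. Returns the % of hours each point meets
    the target illuminance (300 lux here; adjust as needed).
    """
    n_hours = len(ill_rows)
    n_points = len(ill_rows[0])
    hits = [0] * n_points
    for row in ill_rows:
        for p, lux in enumerate(row):
            if lux >= threshold:
                hits[p] += 1
    return [100.0 * h / n_hours for h in hits]
```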

To streamline your current TA calc process, I would automate a script to cycle through all of your idf file names individually and record the outputs into a format that DE can open. Essentially you would calculate TA from each individual csv file, record the results and the unique input parameters that identify the run, then move to the next idf file. Once the automated process is done you can review the entire TA data set in DE. If it is automated, the fact that TA requires a lot of extra components isn’t a big deal. It will be tricky to save 8760 data though, as DE requires single-number inputs. Perhaps you can create a json file that includes colored 8760 plots; that way, as you go through DE, you see the 8760 data associated with each run. It would work similarly to how the example DE file captures the DA calc as a colored grid on the floor.
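That cycle-record-iterate loop could be sketched like this; the filename convention, metric function and column names are all hypothetical:

```python
import csv
from pathlib import Path

def parse_run_name(stem):
    """Recover input settings from a filename like 'run_wwr0.4_lpd12'.

    Purely illustrative -- encode whatever uniquely identifies the run
    when you write the idf, then decode it here.
    """
    params = {}
    for part in stem.split("_")[1:]:
        key = part.rstrip("0123456789.")
        params[key] = float(part[len(key):])
    return params

def record_runs(result_dir, out_csv, metric_fn):
    """Cycle through per-run result files, compute the metric, log one row each."""
    with open(out_csv, "w", newline="") as f:
        writer = None
        for path in sorted(Path(result_dir).glob("*.csv")):
            params = parse_run_name(path.stem)
            row = {f"in:{k}": v for k, v in params.items()}
            row["out:TA"] = metric_fn(path)  # your TA calc goes here
            if writer is None:
                writer = csv.DictWriter(f, fieldnames=list(row))
                writer.writeheader()
            writer.writerow(row)
```

`metric_fn` would be whatever computes TA from one result file; the output csv is then one flat table of inputs and outputs per run.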

I don’t know of any sensitivity analysis components other than Design Explorer and Octopus. I hope they do what you need. Good luck.

Hi Leland,

Thank you for the advice; I will endeavour to spend the next week exploring a way to do this. I am disappointed that I never fully learnt to code, otherwise I might have been able to do this process quicker by writing my own script to analyse the results… perhaps a spaghetti mess of native components and external cloud-based programs could help fill the need for sensitivity analysis in the meantime.

Thanks for the chat, and I will definitely be sharing the results of the thesis once it is completed and cleared for publishing in May-June.




Thank you for the very interesting discussion. Please do let us know the outcomes of your workshops and which workflow you decide to go with. I am wrestling with similar questions in my work with Perkins+Will.

Also, learning to code isn’t that hard. I suggest you start with the New Boston Python tutorials here:…

Hi all,

I have been using Pollination with no problem at all and it is great, but when I shift to Design Explorer, although the numerical results appear normally, the manual URLs I input from my Dropbox folder cannot be read. I only get the URL back in place of an image/3D model. If I replace my links with the links from the example files, then I do get images and 3D models back. Any idea why this is happening? Has anyone tried this successfully already?