I was keen to understand the workflow differences between adding HB shades through the HB Add Shade component versus adding them directly to the model at the final step.
I’m working on a fairly complex model and was puzzled as to why the analysis was taking so long. After some testing, I tried moving my shades, which had been added via HB Add Shade, over to the shades input of the HB Model component.
The results were dramatic: the analysis went from 5-10 minutes down to around 25 seconds.
I’d love to clarify the reasoning behind this if possible, as I don’t want to misinterpret results or end up with an incorrect workflow.
My first guess is that HB Add Shade applies all of the connected shades individually to each room. In this case I had 3 rooms, each of which would receive its own copy of the shades, effectively tripling the number of shades in the model. Is this correct?
Is there any benefit to adding shades via the HB Add Shade component? Or, conversely, are there any potential issues with adding all of the shades only at the end via the HB Model component?
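For what it’s worth, the duplication hypothesis is easy to sketch in plain Python. This is just an illustration of the counting, not the actual Honeybee API; the shade and room names are made up:

```python
# Illustrative sketch of the shade-duplication hypothesis (NOT the Honeybee API).
# The "shades" here are just labels; the point is how the totals grow.

shades = ["shade_{}".format(i) for i in range(200)]  # e.g. 200 context shades
rooms = ["room_1", "room_2", "room_3"]

# Workflow A: HB Add Shade wired to all 3 rooms at once.
# Every room receives its own copy of every connected shade.
per_room_shades = {room: list(shades) for room in rooms}
total_a = sum(len(s) for s in per_room_shades.values())

# Workflow B: shades plugged into the shades_ input of HB Model.
# The model holds a single, shared list of context shades.
total_b = len(shades)

print(total_a)  # 600 -> the shade geometry triples with 3 rooms
print(total_b)  # 200
```

If this guess is right, the simulation cost of workflow A scales with the room count times the shade count, which would line up with the slowdown described above.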
I was getting significant lag in simulation. I haven’t worked out all of the kinks in my definition yet, but I saw a significant speed improvement when I rewired it to run Rooms, Apertures and Shades in parallel into the HB Model component instead of in series.
That being said, I just went back to my HB Visualize By Type preview and see that the walls appear unaffected by the apertures. (I can preview the apertures being cut from the walls if I use HB Add Subface.)
That being said (again), despite the funky preview from the parallel methodology, I ran both methodologies through HB Annual Loads, since I can get a result out of it in 3-5 minutes. Both returned the same EUI.
It looks like you were not using the “HB Add Shades” component correctly, as it is really only intended to assign shades to a specific Room, Face or Aperture (not 3 Rooms at once). So each shade was essentially duplicated for every one of the 3 Rooms that you assigned it to, which made the simulation take a lot longer without actually changing the results at all.
So, when in doubt, just stick to the first method that you have there (assigning shades to the model) since this will not result in duplicated shades.
Hi @chris. Relative to my own difficulties with run time on a 7-story office building, does the same rule you described above apply to assigning Rhino surfaces as apertures to rooms? In other words, is assigning a whole building’s worth of apertures (Rhino surfaces) to a whole building’s worth of rooms (Rhino breps), which may or may not touch each other, a gigantic waste of time? If the answer is yes, it makes scripting large buildings even trickier and, I think, only solvable with careful Rhino layering.
I don’t really understand what you are asking here, but I think the answer is probably “No”. If you are building a Honeybee model for energy simulation, then you have to assign the Apertures to Rooms; leaving them without a parent Room won’t account for them correctly. Furthermore, the HB Add Subface component is smart enough to add each Aperture only to the parent Face with which it is coplanar. Maybe you should watch this tutorial series to get a solid sense of the modeling workflows with the LBT plugin.
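To illustrate the coplanarity matching mentioned above: a component like HB Add Subface can decide which Face is the parent of an Aperture by checking whether the aperture’s vertices lie in the face’s plane. Here is a minimal, library-free sketch of that test; the geometry tuples and the tolerance value are simplified stand-ins, not the actual LBT classes or defaults:

```python
# Minimal sketch of a coplanarity test for matching an aperture to its parent
# face. Planes are (origin, unit normal) tuples; points are (x, y, z) tuples.
# These are illustrative stand-ins, not the ladybug-geometry types.

def is_coplanar(plane_origin, plane_normal, points, tolerance=0.01):
    """True if every point lies within `tolerance` of the plane.

    Assumes `plane_normal` is unit-length so the dot product below is a
    true signed distance.
    """
    ox, oy, oz = plane_origin
    nx, ny, nz = plane_normal
    for px, py, pz in points:
        # Signed distance from the point to the plane.
        dist = (px - ox) * nx + (py - oy) * ny + (pz - oz) * nz
        if abs(dist) > tolerance:
            return False
    return True

# A wall in the XZ plane at y=0, and a window lying in that same plane.
wall_origin, wall_normal = (0, 0, 0), (0, 1, 0)
window = [(1, 0, 1), (3, 0, 1), (3, 0, 2), (1, 0, 2)]
# A horizontal roof plane at z=3, which should NOT claim the window.
roof_origin, roof_normal = (0, 0, 3), (0, 0, 1)

print(is_coplanar(wall_origin, wall_normal, window))  # True  -> wall is the parent
print(is_coplanar(roof_origin, roof_normal, window))  # False -> roof is skipped
```

This is why the component can safely take many apertures and many rooms at once: each aperture only ends up on the one face it is coplanar with.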
Also, if you’re building a 7-story energy model in Grasshopper, I would recommend purchasing a Pollination Rhino Plugin license. It’s always possible to build large Honeybee models in Grasshopper but it’s far easier to build them with the Pollination Rhino plugin, given all of the QAQC tools that are built into it. You can get your whole model set up correctly in Rhino with correct adjacencies, boundary conditions and windows and then bring the model from Rhino into Grasshopper for simulation, visualization and post-processing. This is what I do in all of my workflows these days and I honestly can’t see myself going back to the old way of managing many lists of Rooms to build large models.