Each Iteration adds to RAM Usage - Honeybee

Hi,

First of all, thanks for the great work on HB! It is proving very fun and useful.

I am trying to do an optimization study of the cooling and heating loads of a generic building using Honeybee and Galapagos. The model has two identical floors, with 6 zones on each of the bottom and top floors, separated by an adiabatic component in the middle. I have sliders affecting the thermal properties of the envelope, the WWR, and the shading depths.

I left Galapagos running overnight and it ran out of RAM (I have 16 GB of RAM on this computer, and Rhino was maxing out at around 14 GB).

At first I thought Galapagos was causing the problem, but then I tried adjusting the sliders manually and found that each slider adjustment would increase RAM usage, and only very rarely would it decrease. This was with the EnergyPlus simulation component turned off, so I was essentially just changing the parameters.

With about 17 slider changes I was able to increase my RAM usage from 1.3 GB to 4.6 GB. This was with all previews off.

I tried the following:

UndoClear in Rhino - No effect

Solution -> Clear + Recompute - Adds to RAM usage

Is this something to do with Rhino, GH, or HB? It seems to be continuously remembering all previous states. Or could it have something to do with how I built the GH definition? (I’m quite new to it, so maybe it is something I am doing.)

I’d greatly appreciate any help; I’d really like to be able to do some nice optimization runs. Thanks!

Hi Timothy,

I can help you more if you can share your file. There are ways to optimize the RAM usage for repetitive analyses.

Mostapha

Timothy,

I knew it was only a matter of time before someone brought this up. The heart of the issue is that, each time a Honeybee component that alters HBZones runs, an entirely new copy of the zone object is made. So, if you have a large model with many components that alter the zones and you run several iterations of it, you quickly write a lot of zone copies to memory, which will max it out in the way you describe.
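To illustrate the pattern, here is a minimal, hypothetical sketch (toy names throughout; `hive`, `edit_zones`, and the dictionary-based zones are stand-ins, not the actual Honeybee source):

```python
import copy

# Toy stand-in for the global dictionary where legacy Honeybee keeps
# its zone objects between components. All names here are hypothetical.
hive = {"zone_0": {"wall_r_value": 2.0}}

def edit_zones(zone_ids, new_r_value):
    """Return the IDs of brand-new zone copies with an updated property."""
    new_ids = []
    for zone_id in zone_ids:
        # A full deep copy is made so upstream components keep their own,
        # unmodified version of the zone...
        zone = copy.deepcopy(hive[zone_id])
        zone["wall_r_value"] = new_r_value
        # ...and the copy is stored under a fresh key. The old copy is
        # never removed, so every slider change grows the hive.
        new_id = "%s_%d" % (zone_id, len(hive))
        hive[new_id] = zone
        new_ids.append(new_id)
    return new_ids

# Each recompute leaves one more copy of every zone in memory:
edit_zones(["zone_0"], 3.5)
print(len(hive))  # 2 after one slider change, and it only grows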

Because of this, I have actually gotten a lot of mileage out of upgrading my computer from 16 GB to 32 GB. Still, as Mostapha suggests, there are a number of intelligent ways of laying out the GH script to minimize the copies of zones that get written to memory, which will buy you more iterations before maxing out. If you upload the GH file, we can pinpoint the memory pressure points.

-Chris

Hi Mostapha and Chris,

Thanks for the quick replies. I am guessing, unfortunately, that there is no way to clear the copies of “old zones”?

I have uploaded my GH script, and would welcome any suggestions and comments.

Thanks both of you for taking the time to have a quick look.

Tim

office_hb_v2.gh (635 KB)
office_rhino.3dm (206 KB)

There are ways to clear the copies, which is why I needed your file. I’ll check your file later today.

Thanks for the help.

I updated the GH file a bit, as I found out that leaving some fields undefined made the whole component’s output (EPWindowMat and EPConstruction) undefined during the simulation.

Tim

office_hb_v3.gh (632 KB)

Hi Timothy, I started looking into the file. I added a new method to remove older Honeybee zones, but that doesn’t really make a huge difference. I made a change in the update EPConstruction component that is making the biggest difference on my system. Can you test the attached file and report back whether the changes have made any improvement?
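For context, the zone-removal idea roughly amounts to deleting superseded copies from the global dictionary as new ones are written; a hypothetical sketch (toy names again, not the actual change in the attached file):

```python
import copy

def edit_and_purge(hive, zone_ids, new_r_value):
    """Edit zones as before, but delete each superseded copy so only
    the newest version stays in memory."""
    new_ids = []
    for zone_id in zone_ids:
        zone = copy.deepcopy(hive[zone_id])
        zone["wall_r_value"] = new_r_value
        new_id = zone_id + "_edited"
        hive[new_id] = zone
        new_ids.append(new_id)
        # Drop the stale copy; once nothing references it, the garbage
        # collector can reclaim the memory. This only helps if no other
        # component still needs the old copy on recompute, which is
        # part of why it makes less of a difference than hoped.
        del hive[zone_id]
    return new_ids
```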

office_hb_v4.gh (629 KB)

Hi Mostapha, thanks for the work.

My two observations:

The new component seems to slow the increase in RAM usage slightly, and there seems to be a slightly higher probability that some RAM is freed after changing the parameters.

The script also solves faster after a change to the envelope thermal-property parameters. However, there seems to be no output from the new EPConstruction component?

Tim

Thank you for testing. I wanted to make sure that I’m heading in the right direction. The output issue is a typo. I have a couple of other ideas that should help. Will get back to you soon.

Hi Timothy, I finally had a chance to revisit this issue. Check the attached file and let me know how it works. I rewrote how Honeybee handles Honeybee objects between components. This is as good as it can get in Rhino 5; in Rhino 6 we can make it even better!

office_hb_v4.gh (629 KB)

Mostapha,
I’m glad that we are switching to this method, even if it means removing the ability to assign values based on orientation on a couple of components. Switching to list access should address a lot of the issues that we experience with large Honeybee definitions.
I was just reflecting on the suggestion of using tree access to assign parameters based on orientation, and I realized that there are a lot of other things people will probably want to use the data tree for. I think it’s better to have a separate component to assign constructions or boundary conditions based on orientation, and I can draw these components up soon; a rough sketch of the idea follows.
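In rough terms, such a component could bin each surface by its facade azimuth; a hypothetical sketch (the `azimuth` and `construction` attributes here are stand-ins, not the real HB surface API):

```python
class ToySurface(object):
    """Minimal stand-in for an HB surface with just the two
    attributes this sketch needs."""
    def __init__(self, azimuth):
        self.azimuth = azimuth          # degrees clockwise from north
        self.construction = None

def assign_by_orientation(surfaces, north_c, east_c, south_c, west_c):
    """Pick a construction for each surface from its orientation."""
    constructions = [north_c, east_c, south_c, west_c]
    for srf in surfaces:
        # Map the azimuth into one of four 90-degree sectors:
        # 315-45 -> north, 45-135 -> east, 135-225 -> south,
        # 225-315 -> west.
        sector = int(((srf.azimuth + 45.0) % 360.0) // 90.0)
        srf.construction = constructions[sector]
    return surfaces

walls = [ToySurface(a) for a in (10, 95, 180, 265)]
assign_by_orientation(walls, "N-wall", "E-wall", "S-wall", "W-wall")
```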
-Chris

Hi Chris, Sounds good to me. Following Grasshopper’s conventions is always a good idea.

Hi Mostapha,

I am facing the same type of issue with a recursive voxel aggregation. Within an Anemone loop, I use Honeybee to test recursive geometries. With 40,000 points, and therefore voxels, to be tested, my memory runs out quickly and Rhino crashes.

I have attached the gh file.

So far I have been pausing the loop, saving the GH file, closing Rhino, reopening it, and starting the loop again. However, as soon as the RAM is full, pausing the loop is no longer possible.

Is there a way to clear the cache at each iteration, or another way around this?

Olivier

Hi Olivier,

Did you try updating Honeybee to the latest version? That should address this issue. It now keeps only a copy of the latest iteration, which is what you want.
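In rough terms, the fix amounts to each component overwriting its previous entry rather than appending a new one on every run; a hypothetical sketch (toy names, not the actual source):

```python
import copy

hive = {}  # toy stand-in for the global sticky dictionary

def store_for_component(component_id, zones):
    """Overwrite this component's previous entry instead of adding a
    fresh one each run, so the hive holds at most one copy of the
    zones per component, no matter how many iterations the loop runs."""
    hive[component_id] = copy.deepcopy(zones)
    return component_id
```

Keying by component caps memory at one copy per component instead of one per iteration.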

Mostapha

Mostapha,

Can’t thank you enough.

Olivier

Timothy and Olivier,

I know that I am a year too late with this comment but, over the past few months, we finally identified the issue that was causing your memory (and a lot of other people’s memory) to blow up. You can see in the Release Notes here that the memory issue has finally been fixed:

http://www.grasshopper3d.com/group/ladybug/forum/topics/release-not…

-Chris

Hello Chris,

We met at the Advancing Computational Building Design conference in NY this year.
Great presentation, BTW.

Regarding this memory issue, I’m experiencing similar memory problems with RH6.
I’m running energy and illuminance models at the same time for a residential building. It is just one typical floor with 16 thermal zones. Since there is a large number of iterations (1296), I divided them into 6 files to run them all in parallel on a server, around 200 each.

These 6 Rhino sessions run fast in the beginning: about 2 to 3 minutes for the initial runs, then 4-5 minutes at iteration 30, 6-7 minutes at 50, and 20-25 minutes at 100, and then they crash. They are eating all the memory until they die.

I have done this (running 6 or more RH sessions in parallel) before with RH5, but RH6 with the latest versions (Plus 0.0.04 and legacy 0.0.63) is showing these limitations. Do you know of any RH6 settings that I need to adjust, or do you have any advice on this matter?

Thank you!

EDIT: This is true on Rhino 7, but not on Rhino 6. In other words, it’s due to Rhino/Grasshopper.

Hi @mostapha, @chris and @marcelo.bernal,

I pretty much have the same issue, running an optimization with just sunlight-hours analyses.
After around 1000 iterations, my 32 GB of memory are gone and the file crashes.
(I’m using the latest version of the legacy plug-in.)

Any tips on how to improve things?

Cheers,
Thomas

Hi @thomas.wortmann, I think we should open a new topic for this or use a relatively newer topic; this one is two years old. I have some thoughts that I can share there. Thanks.

Hi @mostapha ,

Thanks, I have also posted in this much newer thread.