MemoryError with LBT Radiance Recipes

Hi @chris

Thanks for these tips, they make a lot of sense.

I’m testing a Direct Sun Hours case with a 332 MB hbpkl: 493,916 points and a sensor count of 16,463, using 30 workers (my hardware has 20 CPUs with two threads each, of which I’m putting 30 threads to work, and 64 GB of RAM).
It is returning a memory error; I’ve attached the log file here: logs.log (55.9 KB)

Would you know what would be the bottleneck here?

I found that even with 1 point in my grid, I was getting a memory issue.

It seemed that the issue was within the hbjson or hbpkl.

I was initially meshing my breps and feeding them into the HB Face component. However, it reports the following error:
“1. Solution exception:There must be at least 3 vertices for a Face3D boundary. Got 2”

It seems I will need to find a way to narrow this error down within the large geometry.
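One way to narrow it down might be a small GhPython loop that attempts the same Rhino-to-Face3D conversion the HB Face component performs and reports which breps fail. This is only a sketch; _geo is a placeholder for whatever list holds the breps:

# GhPython sketch: find the breps that fail Face3D conversion.
# Assumes ladybug_rhino is available in the Rhino Python environment.
from ladybug_rhino.togeometry import to_face3d

bad_indices = []
for i, brep in enumerate(_geo):  # _geo: the same brep list fed to HB Face
    try:
        to_face3d(brep)
    except Exception as e:
        bad_indices.append(i)
        print('Brep {} failed: {}'.format(i, e))

a = bad_indices  # GhPython output: indices of the offending breps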

I know this isn’t the most helpful answer but it looks pretty clear that you are just running out of memory when trying to load the Model sensor grids from the .hbpkl file. Are you able to re-load the model into Grasshopper from the .hbpkl file? That will help us see if it’s something specific to how cPython is loading things vs. IronPython.
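As a complementary check, a minimal cPython sketch (assuming honeybee-core and honeybee-radiance are installed in the same Python the recipes use; the file path is a placeholder) can show whether the .hbpkl loads cleanly outside of Grasshopper:

# Minimal cPython check of the .hbpkl, outside of Grasshopper.
from honeybee.model import Model

model = Model.from_hbpkl(r'C:\path\to\model.hbpkl')  # placeholder path
grids = model.properties.radiance.sensor_grids
print('Loaded {} sensor grids'.format(len(grids)))
print('Total sensors: {}'.format(sum(g.count for g in grids)))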

Hi @chris

The solution to the previous error was to adjust the tolerance and the units.
The mismatch occurs when double-clicking a GH file directly, as it opens a default Rhino template (e.g. Large Objects, Meters vs. Small Objects, Meters) that may differ from one computer to another.

While working on these large models (213,933 surfaces; 9,250,000 sensors), this time I hit a limitation before being able to write the hbjson file. All components upstream of this one worked.

Would you know what causes this memory exception?

Best,
O

Hey @OlivierDambron,

That’s a result of the fact that we output a JSON file (grid_info.json) that keeps track of all the sensor grids in the model.

This file gets used by all of the grid-based recipes to help the process understand all of the grids to be simulated.

So the short answer is that your model has too many sensor grids, and you might consider joining some of them together. I don’t know if @mostapha has any thoughts on making this grid_info.json file optional in the output rad folder, but removing it probably means that the resulting rad folder cannot work with any of the current recipes.
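In Python, joining grids essentially means concatenating their sensors into a new SensorGrid. A rough sketch (untested at this scale; small_grids is a placeholder for your own list of grids) could look like this:

# Rough sketch of merging several SensorGrids into one.
from honeybee_radiance.sensorgrid import SensorGrid

all_sensors = []
for grid in small_grids:  # small_grids: the grids to merge
    all_sensors.extend(grid.sensors)

joined = SensorGrid('joined_grid', all_sensors)
print('Joined grid has {} sensors'.format(joined.count))

The joined grid can then be assigned back to the model in place of the many small ones; in Grasshopper, the equivalent is simply merging the points before creating the grid.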

Hi @chris

I’m afraid I’ve already been flattening all of the generated points to create a single grid; that way I can reduce the overhead of decomposing and recomposing sub-grids, as per your explanation here: Rules for optimal sensor count setting with lbt-recipes - #3 by chris

Yet the error persists.
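A rough back-of-envelope (assuming on the order of 150 bytes of JSON per sensor for its position and direction, which is an assumption rather than a measurement) hints at why a single flattened grid can still exhaust memory, especially if each worker process ends up holding its own copy of the sensors:

# Back-of-envelope estimate; bytes_per_sensor is an assumption.
sensors = 9250000
bytes_per_sensor = 150
workers = 30
json_gb = sensors * bytes_per_sensor / 1e9
print('~{:.1f} GB of sensor JSON alone'.format(json_gb))
print('~{:.0f} GB if all {} workers held a copy'.format(json_gb * workers, workers))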

I think my best suggestion for a quick fix right now is just to split up your sensor grid into multiple simulations. For a longer-term fix, I don’t know if we’ll really be able to find a satisfactory solution since you are pushing the limits of file size, but maybe we can find something that works if you post a link to a sample that lets us recreate the issue.
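Splitting can be scripted by slicing the sensors of the one big grid into chunks and building a new SensorGrid per chunk; a hedged sketch (big_grid and the chunk size are placeholders) follows:

# Sketch: split one large SensorGrid into smaller grids, one per simulation.
from honeybee_radiance.sensorgrid import SensorGrid

chunk = 50000                   # sensors per sub-grid; tune to your hardware
sensors = big_grid.sensors      # big_grid: the single flattened grid
sub_grids = [
    SensorGrid('grid_{}'.format(i // chunk), sensors[i:i + chunk])
    for i in range(0, len(sensors), chunk)
]
print('{} sub-grids created'.format(len(sub_grids)))

Each sub-grid would then go into its own model/case folder and be run as a separate simulation.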

Hi @chris

Thanks, here is a sample file. https://we.tl/t-8lVFts7dVY
(Would you have a recommendation for sharing larger files, so as to avoid WeTransfer links?)

It is quite big; the context geometry is plugged in, but I left the test surfaces unplugged to avoid long wait times.

After writing the case, I would navigate into the folder via the command line and run:
“lbt-recipes run direct_sun_hours inputs.json --workers 7”
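If the case ends up split into several folders, the same command can be driven per folder from cPython; here is a small sketch (folder paths are placeholders, and it simply reuses the command exactly as written above):

# Run the same recipe command in each split case folder.
import subprocess

case_folders = [r'C:\cases\part_01', r'C:\cases\part_02']  # placeholder paths
for folder in case_folders:
    subprocess.run(
        ['lbt-recipes', 'run', 'direct_sun_hours', 'inputs.json', '--workers', '7'],
        cwd=folder, check=True)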
