Hardware suggestions - Rhino/GH update

I just posted the following on the main GH forum.

“I know this has been discussed in the past but I would like to ask if there are any new considerations relating to an optimal hardware setup with the upcoming Rhino and GH updates in mind.”

Are there any additional considerations relating to Honeybee and Ladybug, and the programs they link to? I will be conducting a lot of energy and daylighting analysis and also hope to get into CFD with Butterfly.

I would appreciate any input!

Hi Ernst,

I don’t think there is a single answer to this question. It depends on the type of analysis you are trying to run.

Generally speaking, the more RAM you have, the better. Especially when it comes to Butterfly/CFD/OpenFOAM, you want to make sure you have enough memory for snappyHexMesh.

Both Radiance and OpenFOAM can take advantage of parallel runs in non-Windows environments. Currently, OpenFOAM uses Docker to run on Windows, which is what Butterfly relies on, so you can run the analysis in parallel even from Windows. In both cases, having more CPU cores will decrease the time of the analysis.
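For reference, a parallel OpenFOAM run follows the standard decompose/solve/reconstruct pattern, whether on Linux or inside the Docker container. Here is a minimal sketch that only builds the command sequence (the case setup is assumed to already exist, and `simpleFoam` is just an example solver):

```python
def parallel_commands(solver, n_cores):
    """Build the command sequence for an n-core OpenFOAM run."""
    return [
        ["decomposePar"],                                      # split the mesh into n_cores pieces
        ["mpirun", "-np", str(n_cores), solver, "-parallel"],  # run the solver on all pieces
        ["reconstructPar"],                                    # merge the pieces back together
    ]

# Example: a 4-core run of the steady-state incompressible solver.
cmds = parallel_commands("simpleFoam", 4)
print(cmds)
```

Each command list could then be passed to `subprocess.run()` from the case directory (or to `docker exec` on Windows).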

Here are two benchmarking tests for Radiance: 1 and 2, and here is a benchmarking test for EnergyPlus. The results cover both overall machine speed and processor speed.


Try to get an NVIDIA CUDA-enabled graphics card and install the Radiance add-on that allows calculations to be run on the GPU.


Thank you for your thoughts, Mostapha.

Thank you. I was not aware of this program.

Any thoughts on NVIDIA Quadro vs. GeForce? The high-end Quadros come with ECC memory. Is that worth investing in, since I may be running simulations with CUDA? At the same time, a guy from Puget said that for the same money you get much better CUDA performance with a GeForce card. This seems to boil down to: is 16 GB of ECC memory (as opposed to 8 GB of non-ECC) worth $1,000?


I think 16 GB of memory is not enough for complex geometry. I suggest upgrading to 32 or 64 GB.

Generally, all of the engines that Ladybug Tools links to use the CPU, so you are better off investing your money in a better CPU rather than a GPU.

The only exception would be if you are trying to get Honeybee[+] to run with Accelerad, which has been mentioned on this post and runs Radiance on the GPU. While I don’t know of anyone who has tested it yet, the new Honeybee[+] API should, in theory, be able to work with Accelerad, since Nathaniel Jones, who wrote Accelerad, said that he basically replicated all of the Radiance calls. That said, you are taking on a project if this is how you decide to use Honeybee[+]; it’s not intended to work out of the box with Accelerad. So, for the vast majority of people, you are best off with a good CPU, and your GPU doesn’t have to be anything fancy.

Depending on whether you will be running more parallel processes (Radiance, Butterfly) or single-core processes (EnergyPlus and all other study types), you will want a CPU with more or fewer cores. For parallel processing, you can get a good bang for your buck with a CPU that has a lower clock speed per core but a lot of cores (like an Intel Xeon). If you are more of an EnergyPlus or other-study-type person (like myself), you are better off putting your money into a CPU with a higher clock speed per core and a smaller number of cores (like an Intel i7).
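Worth noting: even though EnergyPlus itself is single-core, a many-core machine can still pay off for that kind of study by running several simulations side by side, one per core. A minimal sketch of that idea (the `run_simulation` function here is a hypothetical stand-in for launching one EnergyPlus job via `subprocess`):

```python
import os
from concurrent.futures import ThreadPoolExecutor

def run_simulation(idf_path):
    """Stand-in for launching one EnergyPlus job, e.g. with subprocess.run()."""
    return f"{idf_path}: finished"

# One design option per IDF file; run them concurrently, one per core.
idf_files = [f"option_{i}.idf" for i in range(8)]
n_workers = os.cpu_count() or 4  # threads are fine here: the real work would
                                 # happen in external EnergyPlus processes
with ThreadPoolExecutor(max_workers=n_workers) as pool:
    results = list(pool.map(run_simulation, idf_files))
print(results)
```

So if your workflow involves many parametric runs rather than one big model, a higher core count can still help even for "single-core" engines.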

I hope that helps,


Late to the party but: THANK YOU FOR THIS.