queenBee multithreading, number of tasks

Hi all,

I was testing the sample files (annual_daylight) and decided to benchmark the sample file with a much smaller grid size to increase the load. The CPU count was set to 5, as I have 6 physical cores (12 with HT, but that’s not the topic).

With 44,770 points and -ab 2:
sensor_count = 30 --> 55 min simulation (this is the default in the bundled .gh file)
sensor_count = 200 --> 12 min simulation (this is the LBT default)
sensor_count = 1000 --> 6 min simulation
sensor_count = 8954 (points / cpu count) --> 6 min simulation

I’d suggest setting the default to somewhere around points / cpu count.
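For what it’s worth, the rule of thumb is trivial to compute. This is just a hypothetical sketch (`sensor_count_per_cpu` is not an existing LBT function):

```python
import math

def sensor_count_per_cpu(total_points, cpu_count):
    """Chunk size so each CPU receives roughly one chunk of sensors."""
    return math.ceil(total_points / cpu_count)

# the benchmark case above: 44,770 points on 5 CPUs
print(sensor_count_per_cpu(44770, 5))  # 8954
```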



@Mathiassn ,

Yes, the overhead of subdividing the grids into smaller chunks can come to dominate the simulation runtime if it’s not properly set. I think your suggestion is a good one but it’s not the easiest to implement in a way that always works. Let us think about it and maybe I will implement a hack to do it for the time being, which we will have to replace with a “correct” way to do it later.

Thanks! This helped a lot!


Hi @Mathiassn, in case you only have one sensor grid, the logic is as straightforward as you mentioned: you basically want to distribute the sensors among the CPUs equally.

However, this can get complicated quickly if you have multiple sensor grids with different numbers of sensors. This is a common case in a full building, where rooms have different sizes. That’s why I think this is something the user should set up instead of us trying to automate it.
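To illustrate why unequal grids make the split harder, here is a minimal sketch of a greedy longest-grid-first assignment of whole grids to CPUs. The grid names and sizes are made up for illustration; this is not an LBT API:

```python
import heapq

def assign_grids(grid_sizes, cpu_count):
    """Assign whole grids to CPUs, largest first, balancing total sensors."""
    # min-heap of (total_sensors_assigned, cpu_index)
    heap = [(0, cpu) for cpu in range(cpu_count)]
    heapq.heapify(heap)
    assignment = {cpu: [] for cpu in range(cpu_count)}
    for name, size in sorted(grid_sizes.items(), key=lambda kv: -kv[1]):
        load, cpu = heapq.heappop(heap)  # CPU with the lightest load so far
        assignment[cpu].append(name)
        heapq.heappush(heap, (load + size, cpu))
    return assignment

grids = {'office': 9000, 'lobby': 400, 'corridor': 150, 'atrium': 2500}
print(assign_grids(grids, 2))
```

With two CPUs here, `office` lands alone on one CPU (9,000 sensors) while the other three grids share the second (3,050), which shows how whole-grid assignment can stay badly unbalanced no matter the heuristic.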

You should also consider the post-processing step in the overall optimization. The most efficient option for multiprocessing is to merge all the grids initially and then divide them equally between CPUs, but that adds an extra step at the end to bring the results back together and align them with the original sensor grids. We did this in Honeybee[+] with the idea of pushing the results to a database so we could quickly put them back together, but that didn’t scale well with SQLite, and using other databases adds installation complexity and cost in cloud solutions. There are still options we can try, but what I’m trying to say is that it’s not as simple as dividing the number of sensors by the number of CPUs.
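The merge/split/realign bookkeeping described above can be sketched as follows. The function names and dict-based interface are assumptions for illustration, not any LBT code:

```python
def merge_and_split(grid_sizes, cpu_count):
    """Flatten all grids into one sensor list and cut it into equal chunks.

    Returns the (start, end) slice for each CPU chunk, plus each grid's
    slice in the merged list so results can be realigned afterwards.
    """
    total = sum(grid_sizes.values())
    chunk = -(-total // cpu_count)  # ceiling division
    offsets, start = {}, 0
    for name, size in grid_sizes.items():
        offsets[name] = (start, start + size)
        start += size
    chunks = [(i * chunk, min((i + 1) * chunk, total))
              for i in range(cpu_count)]
    return chunks, offsets

def realign(flat_results, offsets):
    """Map the flat (merged) result list back to per-grid results."""
    return {name: flat_results[a:b] for name, (a, b) in offsets.items()}

grids = {'room_a': 5, 'room_b': 3}  # illustrative sizes
chunks, offsets = merge_and_split(grids, 2)
print(chunks)                        # [(0, 4), (4, 8)]
print(realign(list(range(8)), offsets))
# {'room_a': [0, 1, 2, 3, 4], 'room_b': [5, 6, 7]}
```

The extra `offsets` bookkeeping is exactly the realignment cost the post above is pointing at: the simulation itself parallelizes cleanly, but the results then have to be stitched back to their grids.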


I sort of agree, but maybe it calls for different workflows depending on whether you need a quick study or a large-scale one.

The merge approach sounds OK for smaller workflows, imo (as was done in legacy).

It just adds unnecessary complexity for users (especially novices who want to run small studies). I would aim for a sensible default value plus an override.