Rules for optimal sensor count setting with lbt-recipes

hi @chris

I was wondering if there are any best practices for setting the sensor count when running an lbt-recipe.
I understand it is the number of sensors used to divide a grid into sub-grids, as per: “Integer for the maximum number of sensor grid points per parallel execution.”

Say we have 10 CPUs (workers), 1000 sensors, and set the sensor_count to 200.

Should the sensor count be set to Nsensors divided by Ncpus?
What properties of our hardware should be considered when setting the sensor count?

Kind regards,

This post may help!


Unfortunately, the answer is that “it depends entirely on the type of recipe you are running, the radiance parameters, the size and number of your sensor grids, and the specs of your machine.”

For recipes and radiance parameters where the time it takes to process each sensor is relatively quick (daylight factor, direct sun hours), the overhead of splitting the grids and merging the results might not be worth it and you might as well keep the sensor count high. For studies where each sensor will take some time, you will get more mileage out of reducing the sensor count.

Also, bear in mind that setting the sensor_count to something higher than the number of sensors in a given grid of your model has no effect on the simulation of that grid. So, to make use of parallel processing during Radiance ray-tracing, you either need to have multiple sensor grids in your model or the sensor count needs to be at most half of the number of sensors in the grid.
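To make that last point concrete, here is a minimal sketch of the splitting arithmetic. The `grid_chunks` helper is hypothetical (it is not part of lbt-recipes); it just shows how the sensor_count ceiling-divides a grid into parallel chunks:

```python
import math

def grid_chunks(num_sensors, sensor_count):
    """Number of sub-grids a single sensor grid is split into,
    assuming sensor_count is the max sensors per parallel execution.
    Hypothetical helper for illustration, not an lbt-recipes API."""
    return math.ceil(num_sensors / sensor_count)

# 1000 sensors with sensor_count=200 -> 5 chunks to run in parallel
print(grid_chunks(1000, 200))   # 5

# sensor_count larger than the grid -> a single chunk, no parallelism
print(grid_chunks(1000, 2000))  # 1

# sensor_count at most half the grid size -> at least 2 chunks
print(grid_chunks(1000, 500))   # 2
```

With 10 workers and a single 1000-sensor grid, a sensor_count of 100 would yield 10 chunks and keep every worker busy, whereas a sensor_count of 1000 or more would leave 9 workers idle for that grid.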

