I am currently running an annual irradiance simulation using Accelerad through lbt_recipes (Python).
Setup:
- ~100,000 sensor points
- cumulative_radiation recipe
- Radiance parameters in FAST mode (very low, e.g. -ab 1, -ad 256)
- workers=1 (for testing)
- HBJSON models load correctly
Problem:
After the actual simulation finishes, the workflow gets stuck in an endless loop at the AccumulateResults step.
The log keeps repeating “Started running AccumulateResults... → ...finished running...”, and the counter increases indefinitely ([30/31], [31/32], [32/33], …) without ever finishing.
Have you tried fewer sensor points, e.g. 1,000, just to check whether it works?
You can also try the annual_irradiance recipe instead, although it also calculates metrics that you don’t necessarily need (average and peak irradiance). This recipe uses NumPy in the post-processing.
If neither works, please share the model here or in a private message, if possible.
For my understanding: is the lbt_recipes CLI tool a way to run simulations outside of Grasshopper to process information faster than in the “native” GUI inside Grasshopper?
The recipe components inside Grasshopper are also using lbt_recipes. Whether you use Grasshopper, lbt-recipes CLI, or just lbt-recipes in a Python script, they will all execute the same run method.
It will probably be marginally slower in Grasshopper because the recipe components also load the results so you can connect them to other components.
If you mean whether you can start multiple instances of a recipe with the lbt-recipes CLI, then yes. But if you do this you should be aware of the number of workers in the recipe input. When the recipes perform ray tracing on sensor grids they will split and distribute the grids based on the workers; there is a minimum sensor count recipe input to avoid distributing very small grids.
Let’s say you need to run a recipe 20 times and your machine has 20 cores available. If you run one recipe at a time you would set the workers to 20 to make sure it splits the grids into at most 20 chunks. If your total sensor count is low, the sensors will never be split into 20 chunks, meaning you will not utilize 20 workers during ray tracing.
If you instead want to start all 20 runs at the same time, it would not make sense to set the workers to 20 for each recipe; in this case you would set the workers to 1.
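The splitting rule described above can be sketched roughly as follows. Note that `split_count` and the minimum sensor count of 200 are illustrative assumptions, not the actual lbt-recipes implementation or its default value:

```python
# Illustrative sketch only: split_count and MIN_SENSOR_COUNT are
# made-up names, not the actual lbt-recipes API.
MIN_SENSOR_COUNT = 200  # assumed minimum sensors per distributed grid

def split_count(total_sensors: int, workers: int) -> int:
    """How many chunks a sensor grid would be split into, at most."""
    # Never more chunks than workers, and never chunks smaller
    # than the minimum sensor count.
    max_chunks_by_size = max(1, total_sensors // MIN_SENSOR_COUNT)
    return min(workers, max_chunks_by_size)

print(split_count(100000, 20))  # large grid: all 20 workers get a chunk
print(split_count(2000, 20))    # small grid: only 10 chunks, 10 workers busy
print(split_count(100000, 1))   # workers=1: the grid is never split
```

This is why a low total sensor count can leave most of your 20 workers idle during ray tracing.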
So the workers operate one after another. But when I set them to 1, can I still distribute, say, 20 simulations to run simultaneously on 20 cores? Or does it only make sense if I want to distribute huge sensor grids to be able to calculate them one after another without overloading due to too many sensors initially?
The workers determine how many tasks can run at once within the same simulation (this also includes non-ray-tracing tasks). They do not run one after another; the recipe will maximize the workers, so if 6 tasks are running and it needs to schedule another task, it will do so. But if 20 tasks are running and it needs to schedule another task, it will wait until one has finished.
If we split 20000 sensors into 20 x 1000 sensors and we have 20 workers, then the recipe will start the ray tracing for the 20 sub-grids simultaneously. That is why it does not make sense to have 20 workers for each simulation if you want to run all simulations at the same time. In this case you would have 20 simulations x 20 workers x 20 grids* = 8000 ray tracing tasks running simultaneously (at most).
*Assuming that the total sensor count is large enough to split it into 20 grids.
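As a standalone illustration of the scheduling behaviour (not the actual lbt-recipes code), Python's standard concurrent.futures shows how a worker limit caps how many tasks run at once; the `trace_grid` function here is just a stand-in for a ray-tracing task:

```python
import threading
import time
from concurrent.futures import ThreadPoolExecutor

WORKERS = 4   # illustrative worker count
lock = threading.Lock()
running = 0   # tasks currently executing
peak = 0      # highest number of tasks observed running at once

def trace_grid(grid_id: int) -> int:
    """Stand-in for a ray-tracing task on one sub-grid."""
    global running, peak
    with lock:
        running += 1
        peak = max(peak, running)
    time.sleep(0.02)  # pretend to do some work
    with lock:
        running -= 1
    return grid_id

# 12 sub-grids, but never more than WORKERS tasks at the same time;
# a new task only starts once one of the running ones has finished.
with ThreadPoolExecutor(max_workers=WORKERS) as pool:
    results = list(pool.map(trace_grid, range(12)))

print(peak)  # never exceeds WORKERS
```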
Yes, if you run 20 simulations with 1 worker each you will have 20 simulations x 1 worker x 1 grid = 20 ray tracing tasks running simultaneously (at most). If the worker count is 1, the sensor distribution will always respect that value, meaning the sensors will not be distributed into smaller grids.
With 1 worker the recipe will always run one task after another, so if you have 20 simulations running they will never run more than 20 tasks simultaneously. This includes all tasks, e.g., translating the model to a Radiance folder, creating a Radiance octree, creating a sky matrix, running ray tracing, and post-processing.
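To make the two levels concrete, here is a toy sketch (not real lbt-recipes code) in which each simulation runs its pipeline steps strictly one after another (workers=1) while 20 simulations run side by side:

```python
from concurrent.futures import ThreadPoolExecutor

# Pipeline steps inside one recipe run (simplified from the list above).
STEPS = ["translate model", "create octree", "create sky matrix",
         "ray tracing", "post-processing"]

def run_simulation(sim_id: int) -> list:
    # With workers=1 the steps within a single simulation never overlap;
    # this comprehension executes them strictly in order.
    return [f"sim-{sim_id}: {step}" for step in STEPS]

# 20 simulations x 1 worker each -> at most 20 tasks at any moment.
with ThreadPoolExecutor(max_workers=20) as pool:
    logs = list(pool.map(run_simulation, range(20)))

print(len(logs))   # -> 20
print(logs[0][0])  # -> sim-0: translate model
```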