I am aiming to run an urban-scale solar irradiance simulation using Radiance and Accelerad through the HB Point-in-Time Grid-Based recipe.
Since running the simulation in GH can easily run out of memory with millions of sensors, I tried running it in an external Python environment using the LBT Python SDK, inspired by this. Here is the code:
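In short, it follows this pattern (a simplified sketch rather than my full script; the file path, sky settings, worker count, and the recipe input/output names are my best guesses at what the HB Point-in-Time Grid-Based component uses, so they may need adjusting):

```python
"""Run the point-in-time-grid recipe outside Grasshopper with the LBT Python SDK."""
from honeybee.model import Model
from honeybee_radiance.lightsource.sky import CIE
from lbt_recipes.recipe import Recipe
from lbt_recipes.settings import RecipeSettings

# load the HBJSON exported from Grasshopper (hypothetical path)
model = Model.from_hbjson('urban_model.hbjson')

# a CIE sky for the point in time (altitude/azimuth here are placeholders)
sky = CIE(altitude=45, azimuth=180, sky_type=0)

# assign the recipe inputs (input names assumed to match the GH component)
recipe = Recipe('point-in-time-grid')
recipe.input_value_by_name('model', model)
recipe.input_value_by_name('sky', sky)
recipe.input_value_by_name('metric', 'irradiance')

# run with several workers so the sensors are split across processes
settings = RecipeSettings(folder='pit_grid_results', workers=15)
project_folder = recipe.run(settings, radiance_check=True)

# load the irradiance results (one list of values per sensor grid)
results = recipe.output_value_by_name('results', project_folder)
```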
Thanks! Can you zip that folder (any of the “PointInTimeGridRayTracingLoop” folders in the debug folder) and share it with me so I can test it? You can share it in a private message if you don’t want to share it publicly.
I know @Nathaniel usually responds here when you ping him, but you might want to head over to the Accelerad Users Google Group as well. Maybe he can tell you the best way to use Accelerad for urban-scale studies of this size.
Here is some information that might be useful for Nathaniel:
The sensors are far from the origin. Here is (x, y, z) for one of them: 35777.2769688, 42692.4376701, 22.5489130435.
In the whole model there are ~81.7 million sensors, but LBT will distribute those among the workers/CPU count specified by the user. In this example, that comes to roughly 5.4 million sensors per worker.
Below is the log for running ~5.4 million sensors.
The empty files are likely a result of the CUDA error that @mikkel documented, which may be triggered by the large number of sensors you have. You could try running your model with -aa 0 to skip the calculation where that error occurred.
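If you are setting the run up through the SDK as above, the ambient options can be overridden through the recipe's Radiance parameters. A small sketch of that, assuming the input is named 'radiance-parameters' (the other option values are placeholders to adjust for your study):

```python
# -aa 0 disables ambient caching/interpolation, so the ambient pre-pass
# where the CUDA error occurred is skipped; the rest are placeholder values.
recipe.input_value_by_name('radiance-parameters', '-ab 2 -aa 0 -ad 4096 -lw 2e-05')
```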
As @mikkel pointed out, you also have a very large bounding box and you are using multiple workers. Both of those can cause issues for Accelerad, reducing accuracy or slowing it down, but they won’t cause empty files.