Parallelizing Direct_Sun_Hours

Hello,

I am working on an annual Direct Sun Hours analysis on a fairly high-resolution study mesh (262,144 faces). Since the process is computationally intensive, I am looking into ways to parallelize it. At the moment I split the annual HOYs (hours of the year) into monthly batches, run the analysis for each batch, and sum the results at the end.
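For reference, the monthly batching looks roughly like this (a pure-Python sketch with no ladybug dependency; ladybug's AnalysisPeriod could likely produce equivalent HOY lists):

```python
import calendar

def monthly_hoy_batches(year=2023):
    """Split the 8,760 hours of a (non-leap) year into 12 monthly batches.

    HOY 0 corresponds to Jan 1, 00:00.
    """
    batches, hoy = [], 0
    for month in range(1, 13):
        hours = calendar.monthrange(year, month)[1] * 24  # days in month * 24
        batches.append(list(range(hoy, hoy + hours)))
        hoy += hours
    return batches

hoys_batches = monthly_hoy_batches()
print(len(hoys_batches), len(hoys_batches[0]))  # 12 batches, January has 744 HOYs
```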

def run_direct_sun_study(batch_index, hoys_batch):
    """Run a DirectSunStudy for one batch of HOYs and return per-face hours."""
    sun_vectors = [sunpath.calculate_sun_from_hoy(hoy).sun_vector for hoy in hoys_batch]
    direct_sun_study = DirectSunStudy(
        sun_vectors,
        study_mesh,
        context_geometry_list,
        timestep=1,
        offset_distance=0,
        by_vertex=False,
        sim_folder=None,
        use_radiance_mesh=False
    )
    direct_sun_study.compute()
    direct_sun_hours = direct_sun_study.direct_sun_hours
    return direct_sun_hours

# Run the Direct Sun Study for each batch in parallel and collect the results
import concurrent.futures

with concurrent.futures.ThreadPoolExecutor(max_workers=cpu_count) as executor:
    futures = [
        executor.submit(run_direct_sun_study, batch_index, hoys_batch)
        for batch_index, hoys_batch in enumerate(hoys_batches)
    ]
    # as_completed yields results in completion order, which is fine here
    # because the per-face hours are simply summed afterwards
    all_direct_sun_hours = [future.result() for future in concurrent.futures.as_completed(futures)]
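Afterwards the batch results are combined with a simple elementwise sum over the per-face values, roughly like this (a sketch; each inner list in all_direct_sun_hours holds one value per mesh face):

```python
# Elementwise sum across batches: each inner list holds one value per mesh face,
# so the annual hours for a face are the sum of its monthly values.
def sum_batches(all_direct_sun_hours):
    return [sum(face_vals) for face_vals in zip(*all_direct_sun_hours)]

# Tiny illustration with two fake batches over a 3-face mesh:
print(sum_batches([[1.0, 2.0, 0.5], [3.0, 0.0, 1.5]]))  # [4.0, 2.0, 2.0]
```

Since addition is order-independent, it does not matter that as_completed returns the batches out of order.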

The process runs successfully when I execute the batches sequentially (batch by batch), but it fails when I run them in parallel. The error seems to be related to the octree file that Radiance generates:

b'The process cannot access the file because it is being used by another process.\r\n'
b'rcontrib: fatal - (scene.oct): not an octree\r\n'

Any ideas on how to resolve this would be much appreciated.
Thank you in advance.