Create multiple batch files so I can run them in parallel on a 128-processor server

Hello everyone,

I’m going to calculate annual glare for 100 Rhino views. I would like to create 100 batch files so I can run them in parallel on a server that has 128 CPUs. I have two questions:

1- What is the best way to create 100 batch files, one for each view?

The only approach I can think of is to create a list of my view names, connect it to the List Item component, and then animate the item-index slider. However, each time the slider moves, a simulation starts, and I have to close the DOS window to get the batch file, which means closing 100 windows one at a time. Is there any way Honeybee can write batch files without running the simulations?

2- I read somewhere that Radiance uses only 1 processor. Does this mean that I can run all 100 batch files simultaneously on a server that has at least 100 CPUs and get results quickly?




You ask this question at a good time as Mostapha and I are in the process of making the components more amenable to applications such as this. Whatever the final workflow we end up suggesting, I am confident that it will involve setting up sliders to run through design spaces for cases like yours. So setting up the slider and a list of views should be relevant to your situation no matter what.

Animation of sliders works but you can also use the “Fly” component to run through all of the combinations of multiple sliders.

I think we will soon add an option for the energy / daylight simulation components to generate all files (possibly including batch files) without executing them. This way, you don’t have to close out of the window each time.
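To give a sense of what that option would enable: once the components can write files without executing them, producing one batch file per view is a small scripting task. Here is a rough Python sketch; the view names and the line written into each file are placeholders, not Honeybee’s actual output:

```python
# Sketch: write one .bat file per view without launching any simulation.
# The view names and the command inside each file are placeholders;
# in practice each file would hold the Radiance commands Honeybee generates.
from pathlib import Path

views = [f"view_{i:03d}" for i in range(100)]  # stand-ins for Rhino view names
out_dir = Path("batch_files")
out_dir.mkdir(exist_ok=True)

for view in views:
    # Placeholder body; the real simulation commands would go here.
    (out_dir / f"{view}.bat").write_text(
        f"rem annual glare simulation for {view}\r\n"
    )

print(len(list(out_dir.glob("*.bat"))))  # 100
```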

We will also write something to execute all of the files for you, but this part may vary depending on the parallel processing platform used. In any case, your logic about 100 batch files on a 100-CPU computer makes sense as long as the operating system knows to run each batch file on a separate core (which I imagine it does, but I haven’t worked much with supercomputers yet).
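As a sketch of what that execution side could look like (hedged: the file names are placeholders, and `cmd /c` assumes a Windows host), a small Python launcher can cap the number of concurrent simulations at the CPU count rather than starting everything at once:

```python
# Sketch: run many .bat files with at most one job per CPU.
# A thread pool is enough here because each thread just waits on a subprocess.
import os
import subprocess
from concurrent.futures import ThreadPoolExecutor

def make_command(bat_path):
    # On Windows, cmd.exe runs the batch file; "/c" exits when it finishes.
    return ["cmd", "/c", bat_path]

def run_all(bat_paths, workers=None):
    workers = workers or os.cpu_count()  # e.g. 128 on the server in question
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(
            lambda p: subprocess.run(make_command(p), check=True), bat_paths))
```

Bounding the pool at the CPU count avoids oversubscribing the machine when there are more batch files than cores.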

Stay tuned and I will let you know when we add in the option to generate all files from the simulation components.


Not sure if this helps, but the MPI platform has options like bind-to-core and by-socket, which bind processes to specific cores. I know it’s used in CFD studies; it could be used here as well(?)
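For what it’s worth, the same core-binding idea can be expressed outside MPI. On Linux, Python exposes it directly via `os.sched_setaffinity` (this is a Linux-only sketch; a Windows server would need a different API, such as `SetProcessAffinityMask`):

```python
# Linux-only sketch: pin the current process to a single core, which is
# roughly what mpirun's bind-to-core option automates for each rank.
import os

os.sched_setaffinity(0, {0})            # pid 0 = this process; allow only CPU 0
print(sorted(os.sched_getaffinity(0)))  # [0]
```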

Kind regards,



I’ve never used supercomputers before; this is my first time! I’ll look into these commands. I think they will be extremely helpful in my situation.

Thanks for the help.



I just wanted to share my experience performing 400 annual glare analyses on a monster 128-CPU server. So here it goes:

1- The server is based on Amazon’s EC2 service. It has 128 vCPUs and 1.9 TB of RAM. I think I’m going to start a GoFundMe campaign to buy one for myself :slight_smile:

2- The server costs about $13 an hour. I have free access to supercomputers through my university and through an NSF Honorable Mention I earned last March; however, the supercomputers available through both resources are a little complicated for me to use, as opposed to the Amazon one, which comes with Microsoft Server 2012 already installed.

3- I wanted to run 400 annual glare simulations for 400 different views.

4- I first performed an annual glare simulation for one view on my Dell XPS, which has an Intel Core i7-6700HQ processor and 16 GB of system memory. The simulation took 2 hours to complete, with the Radiance parameter ab set to 6.

5- I wanted to obtain the batch file for each view so I could run them on the server. I used the Fly component to run all 400 simulations and closed the cmd windows. That wasn’t bad (for me at least), because I asked my son to do this job for me; he was just glad to help :slight_smile:

6- I created one batch file using this cmd command:

dir /s /b *.bat > runall.bat

This created a file with the path to each .bat file. I edited this file in Notepad++ to add the word “start” at the beginning of each line, using the Find and Replace dialog.
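That edit can also be scripted rather than done by hand in Notepad++. A hedged Python sketch (the sample paths below stand in for the real contents of runall.bat):

```python
# Sketch: prepend "start " to every non-empty line of a runall.bat-style file.
# The sample lines stand in for the real output of `dir /s /b *.bat`.
lines = [
    r"C:\sims\view_001.bat",
    r"C:\sims\view_002.bat",
]

prefixed = ["start " + line for line in lines if line.strip()]
print(prefixed[0])  # start C:\sims\view_001.bat
```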

7- I split the newly created batch file into 3 batch files, each containing about 130 file paths, each prefixed with “start ”.
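This split can be scripted too. A sketch that divides the prefixed lines into three roughly equal parts (the names and counts here are illustrative):

```python
# Sketch: split a list of "start ..." lines into n roughly equal chunks,
# one per batch file, mirroring the manual split into 3 files.
def chunk(lines, n_parts):
    size = -(-len(lines) // n_parts)  # ceiling division
    return [lines[i:i + size] for i in range(0, len(lines), size)]

lines = [f"start view_{i:03d}.bat" for i in range(400)]
parts = chunk(lines, 3)
print([len(p) for p in parts])  # [134, 134, 132]
```

Each chunk can then be written to its own file with `"\r\n".join(part)`.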

8- Installed Radiance on the server.

9- Ran the first batch file on the server. This started 130 cmd windows performing my simulations; CPU usage was anywhere between 90% and 100%, and about 105 GB of RAM was used.

10- It took about 5 hours to complete all 130 simulations. I expected them all to finish in 2 hours, but I can’t complain, because this would have taken about 260 hours to run on my laptop. After the simulations were done, I ran the second and then the third batch file (about 15 hours in total).

11- I got 400 valid dgp files. Couldn’t be happier!


Hi Rania, congrats! Thank you for sharing your experience. It made me smile! At the same time, it confirms that we really need to cloud-enable Honeybee to let users like you explore computationally intense studies without having to close 400 command windows manually. Good luck with your research.


Woohoo! Very exciting, Rania! Thank you so much for sharing. We will definitely start putting in some capabilities to make it easier for running cloud simulations like this.


Thanks for sharing the info about your experience. Is there any update regarding more user-friendly ways of running cloud simulations?

Hi @AryanShahabian, I believe @mostapha is currently working on a user-friendly cloud-based solution. The workflow that I mentioned in the post above is kind of an outdated one. I was able to come up with a better approach that facilitated the use of 1000 computing nodes in parallel. The new approach is explained in a conference paper that is going to be presented at the upcoming IBPSA conference in Rome. I can share it here when it gets published on September 2nd.


Hi @RaniaLabib, would you be able to share your conference paper please?
I’ve read this thread with interest, and would be keen to know more about your process.


@jwoodall you can find the paper at this link