Radiance batch files on Amazon Web Services

Hello everyone.

I was inspired by @RaniaLabib’s awesome post on cloud computing for glare analysis:

[Create multiple batch files so I can run them parallel on a 128-processor server]

I am starting to experiment with cloud servers through AWS EC2, and am trying to run a few batch files I made with Honeybee’s Fly component for some grid-based simulations. I used a really simple setup on AWS with a Windows 2016 Server and installed Radiance, but couldn’t run any of the batch files (the Windows cmd window would come up and crash). Since I know nothing about cloud computing, does anybody have advice on how to run Radiance on this type of server?

On a different note, is there any way to generate the batch files without the cmd windows opening, in Honeybee or Honeybee[+]?

Thanks!

Vinicius

Hi Vinicius,

Did you also install Daysim on the server?
Also, if you create a combined file that contains the paths to multiple batch files, you have to copy those batch files to the same paths on your server. My combined batch file looked something like this:

Start C:\ABSN\1\annualSimulation\1_InitDS.bat
Start C:\ABSN\10\annualSimulation\10_InitDS.bat
Start C:\ABSN\100\annualSimulation\100_InitDS.bat
Start C:\ABSN\11\annualSimulation\11_InitDS.bat
Start C:\ABSN\12\annualSimulation\12_InitDS.bat
Start C:\ABSN\13\annualSimulation\13_InitDS.bat
Start C:\ABSN\14\annualSimulation\14_InitDS.bat

I copied the entire ABSN folder onto the server (under C:), made sure Radiance and Daysim were installed, then ran the combined batch file by double-clicking on it.
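
If you have a lot of numbered folders, a small batch loop can replace the hand-written list; a minimal sketch, assuming the same C:\ABSN\<number>\annualSimulation\<number>_InitDS.bat layout as above:

    :: run_all.bat - launch every numbered _InitDS.bat in its own window, in parallel
    @echo off
    for /d %%D in (C:\ABSN\*) do (
        start "" "%%D\annualSimulation\%%~nD_InitDS.bat"
    )

One caveat: start fires everything off at once, so with far more folders than processors you may want to launch them in smaller groups.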

Hope this helps!

Yes. Just set the _write input to True and leave run_ set to False.

Also see this repository, which will automate what you’re trying to do in the near future. Basically, what you need to do is deploy the Docker image to your server and you’re good to go!

@RaniaLabib Thank you for your help! I tried installing Daysim on the server, but I am still getting an rtrace.exe Application Error: “The application was unable to start correctly (0xc000007b)”. I’m gonna try to run a few more tests.

@mostapha Thanks! I’ll look into the files for sure. I was wondering last night whether Honeybee/Ladybug has any limitations if run through a cloud server. Do you think it would work? It might not be the most cost-efficient solution, but it would fit really well into my workflow.

Vinicius

Hi Vinicius,

I have successfully run the kind of task you’re describing on Azure Batch Services. I am sure you can do the exact same thing using AWS Batch. The workflow (a bit clunky and hacky) was the following:

  1. Generate all of my files locally (using write only)
  2. Send the files to Azure blob storage (so an S3 bucket for AWS, right? See the sketch after this list)
  3. Queue tasks to run on a pool of computers with a docker container that has all dependencies installed
  4. Run all tasks and download the outputs.
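
For step 2 on AWS, the file transfer could look something like this with the AWS CLI (the bucket name is a placeholder):

    :: push the locally generated simulation folders up to S3
    aws s3 sync C:\ABSN s3://my-daylight-sims/ABSN
    :: ...and pull the results back down once all tasks have finished
    aws s3 sync s3://my-daylight-sims/results C:\results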

To run the tasks on the AWS nodes I created a docker image with Radiance + Python installed (so that I could test locally). If you’re not sure what I’m talking about, this is essentially a mini virtual machine that you can run on your AWS machine. Here’s a link to the docker image (it’s super hacky and only works for grid-based annual daylight) that might inspire you to build your own: https://hub.docker.com/r/antoinedao/radhoneywhale/
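
To poke at the image locally, pulling it and mounting a recipe folder would look roughly like this (the /sim mount path is just an example; adjust it to whatever the image expects):

    :: pull the image, then run it with a local recipe folder mounted into the container
    docker pull antoinedao/radhoneywhale
    docker run --rm -v C:\ABSN\1:/sim antoinedao/radhoneywhale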

As a quick idea of what can be achieved: I ran 1500 different options for a room, using super high-spec Radiance parameters for grid-based annual simulation, in 4 hours for roughly $50.


@AntoineDao thank you for your helpful post! I understand that your docker image works for grid-based simulations only, meaning it can’t be used for annual glare simulations, am I right? I would like to build my own docker image to use for annual glare simulations; however, I’m not sure how to install Daysim on a Linux-based system! Do you think this is possible, and how?

Thanks

Hi @RaniaLabib,
I had originally intended to run the docker image using Daysim, but encountered difficulties because building Daysim for Linux is a bit difficult (not impossible, but I’m lazy :stuck_out_tongue: ). You should be able to run image-based simulations using standard Radiance (right @sarith and @mostapha?), in which case your main challenge will be how you send the image-based recipe to your docker container.

Does any of this make sense? Happy to go into more or less detail if need be :slight_smile:


@AntoineDao Actually you can use the image-based daylight coefficient recipe to do annual runs. This is an example from back when we were testing the scripts. The video is jumpy because of the hourly shift in sun positions.


If HB[+] were dockerized (is that a word?) as-is, then it should be possible to call the image-based recipe from the core library. The vanilla option is to do multiple runs of rpict; however, that will take a long, long time.
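
For a sense of scale, each vanilla run would be one plain rpict render per sun position, something like this (the scene, view file, and parameters are just placeholders):

    :: one render for a single hour; an annual run repeats this a few thousand times
    oconv scene.rad > scene.oct
    rpict -vf view.vf -x 800 -y 800 -ab 4 -ad 2048 scene.oct > hour_001.hdr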

“Dockerize” is totally a word. I suspect it will be added to the Oxford English Dictionary within the next couple of years!

As for running rpict on Docker using the core library, that is totally doable; however, I have not written any methods to translate image-based recipes to and from JSON, which makes using Docker a little impractical for a few images and useless for many renders.

A quick hack for now would be to:

  1. Generate a ton of radiance files to run rpict
  2. Dump all the files on a cloud filesystem
  3. Launch a pool of computers on a cloud provider (using Azure Batch, for example) and schedule the following task on each (a rough sketch follows the list)
    a. Move a rad recipe folder onto the virtual machine
    b. Mount the folder into a container running on the machine
    c. Run the simulation
    d. Move results back to cloud filesystem
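
Each scheduled task from step 3 would then boil down to something like the following (the bucket, paths, and image name are all made up, and on Azure the file moves would use azcopy rather than the AWS CLI):

    :: hypothetical per-node task: fetch a recipe, run it in the container, push results back
    aws s3 cp s3://my-daylight-sims/recipes/case_042 C:\work\case_042 --recursive
    docker run --rm -v C:\work\case_042:/sim antoinedao/radhoneywhale
    aws s3 cp C:\work\case_042\results s3://my-daylight-sims/results/case_042 --recursive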

If you’re not in the mood for hacking, I can have a look at writing to/from JSON methods for the image-based recipe, but I reckon I’ll get to it towards the end of the month.


Yes, but as @sarith said, it’s not the right way to do it if we want to run annual glare analysis. The image-based daylight coefficient method is the right way to implement it. We already have most of what we need for the to_json method. We’re only missing the View class and the Image Collection. With that we should be able to generate all the images after calculating daylight coefficients for the image (similar to the video) and pass all the images through Evalglare.

I would suggest:

  1. Use the image-based daylight coefficient recipe to generate annual images (this needs some development to add the recipe to Docker).
  2. Pass them through the Evalglare command (this also needs the Evalglare command to be implemented, but for now we can just run it from the command line; see the sketch after this list).
  3. Return the DGP values (and images if needed).
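
For step 2, looping the rendered hours through evalglare from a batch file could be as simple as this (the folder and file names are placeholders, and evalglare expects 180° fisheye images):

    :: pass every rendered hour through evalglare and collect the DGP output
    for %%H in (annual_images\*.hdr) do evalglare "%%H" >> dgp_results.txt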

That’s odd, I haven’t gotten an email from the forum updating me on this post since 4/3! I stumbled upon my post today while looking for something else! Anyways @sarith, very nice work. I actually didn’t know that annual glare (image-based) can be done using the daylight coefficient method.

@AntoineDao, I think @mostapha’s suggestion is more doable for me; I know how to run the evalglare command, and I think I can pass the files created by the recipe to the docker container. I will work on this and keep you guys updated!

@AntoineDao, I have some experience in writing JSON, but I was wondering if you could point me to an online example, tutorial, or reference that shows how to use JSON for this purpose (for automating file traffic to and from the Docker container).

Hi @mostapha,

I understand that it is possible to run multiple cases at the same time, especially for annual glare simulations, the way @RaniaLabib did very well (thanks). I now wonder how a single grid-based (annual) case written in HB[+] could be decomposed and dispatched across multiple cores.

So far, I have been brutally splitting the test points into 4 sets to run on 4 processors, writing out 4 case folders.
For this to work in parallel through the cmd line, I needed to make 4 copies of the Radiance folder and edit the command.bat file to set the corresponding PATH. It worked… but no doubt this is the most brutal way of doing it.

With regards to more sophisticated procedures:

Is honeybee-docker-daylight the solution to this? How does it address the decomposition of a case?
Is it correct to say that Docker wouldn’t divide the case itself, but would rather be issued a single job (a JSON payload) that it divides into small pieces before dispatching them across the available CPU power?

I can’t stop thinking of the Butterfly commands “decomposePar” + “mpirun -np 4 simpleFoam -parallel” + “reconstructPar”. How applicable is something similar in HB[+], considering the different matrix calculations?

Hi @OlivierDambron, I missed this one until now; I only saw it while searching for something else.

Most Radiance commands support the -n option, which works similarly to -np in OpenFOAM.
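
For example, a grid-based rtrace run can be spread over 4 processes like this (the scene and ambient parameters are placeholders, and -n support depends on the build you have installed):

    :: -n 4 splits the ray tracing over 4 processes, much like mpirun -np 4
    rtrace -n 4 -I -ab 5 -ad 4096 scene.oct < grid.pts > results.dat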
