Speedups in daylight simulations

Hey @mostapha @mikkel, have you considered the techniques ClimateStudio uses to speed up annual simulations? Or are they patented etc.?

I think they are more or less doing the method of Mark Stock here, just grid-based instead of image-based…
http://markjstock.org/radmisc/aa0_ps1_test/final.html

Mark scales the image up 16x and reduces -ad to 4, 8, or 16 (instead of 2048 or more), and then he downsamples.
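A toy numpy sketch of that supersample-then-downsample idea, with random data standing in for the noisy low -ad render (the 16x factor and target resolution are just illustrative):

```python
import numpy as np

scale = 16          # render at 16x the target resolution
h, w = 32, 32       # target image size (made-up numbers)

# Pretend this is a very noisy render produced with a tiny -ad value.
rng = np.random.default_rng(0)
noisy = rng.random((h * scale, w * scale))

# Box-filter downsample: group pixels into 16x16 blocks and average each
# block down to one output pixel, which averages out the sampling noise.
downsampled = noisy.reshape(h, scale, w, scale).mean(axis=(1, 3))

print(downsampled.shape)          # (32, 32)
print(downsampled.std() < noisy.std())  # averaging reduces pixel noise
```

Each output pixel effectively averages 256 independent low-quality samples, which is the same trade Mark is making: many cheap rays instead of a few expensive ones.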

I think ClimateStudio does the same simulation 2000 times for each point, but with -ad 1 (or 2), and then picks the average/highest/sum out of the 2000 results.

Could be interesting to experiment with.


Also take a look here!


Hi @Mathiassn! Are you suggesting breaking the sensor grid down into a denser grid and running the study with a lower ambient division count (since you now have more sensors), and then remapping the results back? Have you tested it yourself on a sample grid to see if it improves the speed?

Haven't fully thought this through yet, but yes, something like that. Or just duplicating the grid 1000 times, running an -ad 1 simulation on each, and then combining the results. Again, I'm unsure whether median/average/sum makes more sense here.

In astrophotography people do something similar: to keep shutter time low, you use a very high ISO, and to remove the resulting ISO noise you stack 3 (or more) images and take the median value of each pixel.
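The median-stacking trick is easy to mimic in numpy. A small sketch with synthetic data (a flat "true" image plus Gaussian noise standing in for ISO grain; the frame count and noise level are made up):

```python
import numpy as np

rng = np.random.default_rng(1)
signal = np.full((64, 64), 0.5)   # pretend ground-truth image

# Three noisy "exposures" of the same scene.
stack = np.stack([signal + rng.normal(0.0, 0.2, signal.shape)
                  for _ in range(3)])

# Per-pixel median across the stack suppresses the noise
# (and, unlike the mean, also rejects outliers like hot pixels).
denoised = np.median(stack, axis=0)

single_err = np.abs(stack[0] - signal).mean()
stacked_err = np.abs(denoised - signal).mean()
print(stacked_err < single_err)   # True: the stack is closer to the signal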

Whenever I get the time I will try to do 100 coarse grids and combine them somehow. rcalc/rcollate/rmtxop are super powerful, so it should be doable; alternatively with pandas/numpy arrays.
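For the numpy route, combining 100 coarse runs could look something like this. Everything here is synthetic (made-up illuminance values and noise level standing in for the Monte Carlo error of an -ad 1 run), just to compare mean vs. median as combiners:

```python
import numpy as np

n_runs, n_sensors = 100, 500
rng = np.random.default_rng(2)

# Pretend ground-truth illuminance per sensor (lux), plus heavy
# per-run noise standing in for the variance of a coarse -ad 1 run.
true_ill = rng.uniform(100.0, 2000.0, n_sensors)
runs = true_ill + rng.normal(0.0, 300.0, (n_runs, n_sensors))

# Combine the 100 runs per sensor.
combined_mean = runs.mean(axis=0)        # unbiased, noise shrinks ~1/sqrt(N)
combined_median = np.median(runs, axis=0)  # more robust to outlier runs

err_single = np.abs(runs[0] - true_ill).mean()
err_mean = np.abs(combined_mean - true_ill).mean()
print(err_mean < err_single)  # True: combining beats any single coarse run
```

Note that the sum would only make sense if each run carries 1/N of the total energy; for equally weighted repeat runs, mean (or median) is the natural combiner.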


Interesting! Basically, trying to remove the noise by adding more data points for the same sensor. I have to recheck the Radiance documentation, but how is this different from a lower -ad number? Are you going to bump up the number of bounces to make up for it? I would be interested to see a comparison of the results.

Sounds good. Have you seen honeybee-radiance-postprocess? It uses numpy to handle matrix calculations.


Haven’t looked at the post-processing yet! Looks nice though :blush::blush:

I think this method differs in that you want a lot of rays from the point (i.e. the first bounce), but you don’t want each ray to split up into multiple new rays, which grows exponentially. I think. The -lw parameter can also limit unused rays, AFAIK.
But I’m not deep enough into the Radiance core to confirm! All I know is that on the rendering side it produces nice renders in a shorter time for complex geometries. And not to forget, Mark has disabled the ambient cache with -aa 0.


Isn’t ClimateStudio fast because it uses Accelerad and the GPU?

https://web.mit.edu/sustainabledesignlab/software.html

Nope. ClimateStudio uses CUDAfy (and/or ILGPU, but I’m not an expert) on the GPU for the matrix calculations. The raytracing is CPU.

An example in LBT is shown here
