# Spherical Sampling

If I want to create a Honeybee routine that samples a space using this spherical sensor, how would I go about doing that?

If you're interested, here is my situation and how I'm currently gathering results.

Currently I'm modifying the pts vectors in the grid analysis to always look up (+Z vector) rather than normal to the test surface, in order to get a better result in section (otherwise they'd be shooting into a wall). It is still only sampling a hemisphere. The hemisphere makes total sense when evaluating the performance of a work surface or overall room performance, but when evaluating an atrium in section, light comes from all angles, especially if the building opens up to the side rather than just from above.
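The vector override described above can be sketched roughly as follows. This is a minimal illustration of the idea, not the actual Honeybee component code; the point and vector tuples are hypothetical stand-ins for whatever the grid component outputs.

```python
# Hypothetical sensor grid: (point, vector) pairs where the vector
# would normally be the test-surface normal.
points = [(0.0, 0.0, 1.5), (1.0, 0.0, 1.5), (2.0, 0.0, 1.5)]

# Force every sensor to look straight up instead of along the normal.
up = (0.0, 0.0, 1.0)
sensors = [(pt, up) for pt in points]  # same XYZ, vector replaced by +Z
```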

In this image you can see the atrium has a bright spot partway down. This is from an opening in the building, and with hemispherical sampling, you're not getting a complete picture of light behavior.

Here is the example file of a project I'm working on that I'd like to get spherical sampling working for.

Hi Will, Thank you for posting this as a new discussion. I can get back to you with an example at some point next week, but if you want to give it a try yourself, the trick is to generate a series of vectors in all different directions (instead of only one looking +Z), and use them to generate a new set of test points (which will have the same XYZ for the point and different XYZs for the vector). Then run the analysis and average the values for the same test points. This should give you what you are looking for.
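The steps above can be sketched as follows. This is a rough illustration under assumptions: `sphere_directions` is a helper written here for demonstration (not a Honeybee function), the points are hypothetical, and the post-run averaging uses placeholder result values.

```python
import math

def sphere_directions(n_polar=4, n_azimuth=8):
    """Unit vectors spread over the full sphere via a simple
    latitude/longitude subdivision (illustrative helper)."""
    dirs = []
    for i in range(n_polar):
        theta = math.pi * (i + 0.5) / n_polar       # polar angle from +Z
        for j in range(n_azimuth):
            phi = 2.0 * math.pi * j / n_azimuth     # azimuth angle
            dirs.append((math.sin(theta) * math.cos(phi),
                         math.sin(theta) * math.sin(phi),
                         math.cos(theta)))
    return dirs

points = [(0.0, 0.0, 1.5), (3.0, 0.0, 4.0)]  # hypothetical test points
dirs = sphere_directions()

# One duplicated sensor per direction: same XYZ, different vector.
sensors = [(pt, d) for pt in points for d in dirs]

# After the run, average the per-direction results back down to one
# value per original point (placeholder values stand in for real output).
results = [100.0] * len(sensors)
n = len(dirs)
averaged = [sum(results[k * n:(k + 1) * n]) / n for k in range(len(points))]
```

With more polar/azimuth subdivisions you get finer angular coverage at the cost of proportionally more sensors per point.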

-Mostapha

This answers the question of whether I'm modifying the Radiance settings or manipulating data post-run to achieve a spherical sensor. It sounds like I'm running two hemispherical simulations and figuring out how to combine the data.

That makes sense except for one thing: why would you average them? When I run the simulation in the two different directions, I get a high value (looking up) and a very low value (looking down). If I average them, that would suggest the high value was not accurate to begin with.

I would assume that I should add the values together, that is, if the sensors are really "searching for light" and just additively gathering light from their projected rays. Is there some sophisticated method of combining the "found" light from a sensor's rays that would affect how I should combine light coming from these two directions?

I think ultimately this is where I got stuck. I didn't want to present anything as predicted light levels for the atrium in the analysis, given this ambiguous understanding of the methodology.

I hope this makes sense. Thanks for your wonderful help and dedication to this plugin; it is very exciting for us architects.

Any luck looking into this?

Hi Will, Here is what I would do to cover the full sphere. As I mentioned before, even with just two Z vectors, up and down, you are covering the sphere, but the more vectors you generate, the higher the resolution you will have.
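One common way to generate an arbitrary number of roughly evenly spaced directions over the full sphere is the golden-angle (Fibonacci) spiral. The sketch below is a general sampling trick, not anything taken from the attached file; `fibonacci_sphere` is a name chosen here for illustration.

```python
import math

def fibonacci_sphere(n):
    """Return n unit vectors spread roughly evenly over the full
    sphere using the golden-angle spiral (illustrative helper)."""
    golden = math.pi * (3.0 - math.sqrt(5.0))  # golden angle in radians
    vecs = []
    for i in range(n):
        z = 1.0 - 2.0 * (i + 0.5) / n          # height from +1 down to -1
        r = math.sqrt(1.0 - z * z)             # circle radius at that height
        phi = golden * i
        vecs.append((r * math.cos(phi), r * math.sin(phi), z))
    return vecs

two = fibonacci_sphere(2)    # coarsest case: one upper, one lower vector
more = fibonacci_sphere(64)  # higher resolution, same full-sphere coverage
```

Increasing `n` trades simulation time for angular resolution while keeping the directions balanced between the upper and lower hemispheres.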

spherical_sensors.gh (447 KB)