Direct sunlight hours not working with meshes


There seems to be an issue with the DirectSunlightHours component returning different results depending on whether the input geometry is a brep or a mesh. (see also here)

For a brep, it works as expected…

However, for a mesh, I get this…

All of the normals of the mesh are correct

The reason I want to create my own mesh rather than let Ladybug do it is that Ladybug appears to weld vertices when joining meshes. This means that if I have two surfaces that touch, they get joined into a single mesh, and Disjoin Mesh doesn’t work downstream, so I can’t isolate results.

I’ve tried with individual meshes and also with one joined mesh, but the discrepancy in results is the same.

It looks like the “_offset_dist” input doesn’t work on meshes, and therefore the analysis points might be self-occluded by the mesh… Is this correct?
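To clarify what I mean by self-occlusion: the offset is conceptually just moving each sensor point off the surface along its face normal before the sun vectors are tested. A minimal sketch in plain Python (hypothetical point/normal tuples, not Ladybug’s actual implementation):

```python
# Offset each analysis point along its face normal so rays don't
# immediately intersect the surface they start on (self-occlusion).
# Points and normals are plain (x, y, z) tuples here -- a hypothetical
# stand-in for the mesh face centers and normals Ladybug would use.

def offset_points(points, normals, offset_dist):
    return [
        tuple(p + n * offset_dist for p, n in zip(pt, nv))
        for pt, nv in zip(points, normals)
    ]

pts = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
nrm = [(0.0, 0.0, 1.0), (0.0, 0.0, 1.0)]
print(offset_points(pts, nrm, 0.1))
# -> [(0.0, 0.0, 0.1), (1.0, 0.0, 0.1)]
```

If that offset step is skipped for meshes, every ray would start exactly on the face and could register a false intersection with it.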

Any idea what is happening and how to fix it?

@chris here is some more information about what is going on. Feeding in a mesh gives very different results than a brep.

Updating to version 1.2 seems to have fixed the issue.


Yep. This was a bug in version 1.1.0. The _offset_dist input is now fully functional for meshes.


@chris why is the ‘_grid_size’ a required input? If I feed in a mesh, shouldn’t it just use the mesh faces? This is how the legacy component worked, I believe.

That’s a valid point since you are correct that the _grid_size isn’t used when your input geometry is all meshes. However, using meshes for the _geo is traditionally the exception and not the rule. And new users really need to be aware of the _grid_size because it has such a dramatic impact on the quality of results and the speed of the calculation.

So I hope you don’t mind the fact that you have to plug in a dummy value for the grid size when you are using meshes like an advanced user. There are just times where we need to give priority to a more streamlined workflow that educates new users about the component.

That’s OK. But on a related note, let me explain why I was using meshes to begin with…

When running a direct sunlight hours analysis, Ladybug takes the brep, meshes it based on the grid size, and then joins the meshes into one large mesh (if I understood correctly, or is this just because the inputs are flattened by default?). To get results on a per-brep basis, you then need to partition the results. Typically this can be achieved with ‘disjoin mesh’.

However, when using this in conjunction with Revit, I am extracting room geometry and using it as my analysis surface. Sometimes rooms are separated by a room separation line, not a wall, which means that adjacent surfaces are touching. Feeding these breps into Ladybug converts them into a mesh that can’t be disjoined accurately.

Without doing very heavy collision tests to isolate the results, the only way I’ve found is to begin with meshes as the analysis geometry, so that I know how many faces there are in each and can partition the results downstream. Is this the best way? Or am I better off grafting the input breps instead?
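To illustrate the kind of downstream partition I mean, assuming I know each input mesh’s face count up front (plain Python, made-up numbers):

```python
# Split a flat list of per-face results back into per-mesh chunks,
# using the known face count of each input mesh (hypothetical data).

def partition_results(results, face_counts):
    chunks, start = [], 0
    for count in face_counts:
        chunks.append(results[start:start + count])
        start += count
    return chunks

face_counts = [3, 2]          # mesh A has 3 faces, mesh B has 2
results = [4, 5, 6, 7, 8]     # flat sunlight-hours list from the analysis
print(partition_results(results, face_counts))
# -> [[4, 5, 6], [7, 8]]
```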

This really sounds like something that data trees should be able to do for you. Does grafting the input Breps for _geometry not give you the desired output? Granted, I know grafting will give you a separate legend for each Brep but you can standardize the legend range with LegendParameters so that the colors are all consistent. More importantly, it should preserve the data structure in the output data trees.

@chris the issue seems to be with the LB Deconstruct Matrix component. If I graft the analysis geometry, I get multiple sun intersection matrices: one per brep, with the number of items corresponding to the mesh faces created. This seems correct.

But when I deconstruct the matrix, I get strange results. For example, in my case with a mesh of 285 faces and 25 vectors, I should get nested branches where each list is 25 items long.

Is this correct or am I missing something?

Yes, I see the issue. I wrote the Deconstruct Matrix component to only ever output a 2D matrix, but you want to process a 3D matrix. Let me see if I can make a change that will preserve the extra dimension so that you don’t have to partition the list.

Ok, I just pushed some changes to the development version of the plugin that will allow you to keep the input data structure when you plug in a list of intersection matrices like so:

So this should provide you with what you need to process the 3D matrix of:

(sun_vector x sensor_point x brep_count)

… and should solve your issue.
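As a plain-Python sketch of what processing that 3D matrix looks like (nested lists standing in for the intersection matrices, with hypothetical numbers, not the actual Ladybug objects):

```python
# int_matrix[brep][sensor][sun] is 1 if the sun vector reaches the
# sensor point, else 0. Summing over the sun-vector axis gives direct
# sunlight hours per sensor point, kept separate per brep.

def sunlight_hours(int_matrix, timestep=1):
    return [
        [sum(sensor) / timestep for sensor in brep]
        for brep in int_matrix
    ]

# Tiny hypothetical example: 2 breps, 2 sensors each, 3 sun vectors.
matrix = [
    [[1, 1, 0], [0, 0, 0]],
    [[1, 0, 1], [1, 1, 1]],
]
print(sunlight_hours(matrix))
# -> [[2.0, 0.0], [2.0, 3.0]]
```

Because the outer level is kept per brep, the per-brep data structure survives all the way to the results.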


Thanks @chris

I downloaded the latest version, but is this correct? Why is the Deconstruct Matrix component returning more branches than the inputs I feed into it?

Don’t simplify the input to the “Deconstruct Matrix” component. This input needs to be flattened for it to work in the way you intend it to.