That’s odd, no? For sure there must be something wrong with my original geometry, but I’m surprised there are no warnings or errors thrown anywhere along the way?
It seems to think one surface is not planar, but GH thinks it is?
Is there perhaps some way to force an error to show up when there is an issue with geometry like that? I think it would be nice to get that kind of warning so I know I need to go back and fix the input geometry.
Yea, you have some invalid Rhino geometry there that was created at a different tolerance than the current Rhino model. Or, more precisely, Rhino doesn’t label the geometry as invalid or non-planar when you check its properties, but it is effectively non-planar: trying to recreate a planar Brep from its vertices using RhinoCommon methods will fail unless you ignore the current model tolerance.
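To illustrate, this is roughly the kind of check that trips it up — just a rough sketch of the idea, not the exact code in the plugin, and the 0.01 tolerance is only an example value:

```python
# Rough sketch: test whether a Brep's vertices all lie on one plane
# within a given tolerance, by fitting a plane and measuring deviation.
import Rhino.Geometry as rg

def is_planar_at_tolerance(brep, tolerance=0.01):
    """Return True if all Brep vertices are within 'tolerance' of a fit plane."""
    pts = [v.Location for v in brep.Vertices]
    result, plane = rg.Plane.FitPlaneToPoints(pts)
    if result != rg.PlaneFitResult.Success:
        return False
    # geometry that passed at a looser tolerance can fail here at a tighter one
    return all(abs(plane.DistanceTo(pt)) <= tolerance for pt in pts)
```

Geometry made at a loose tolerance can pass Rhino’s own property checks but fail a test like this once the document tolerance is tighter.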
Running an extra check for planarity after creating the Aperture is always doable since there is a core library method for it. But I’m a little worried about always running it and increasing the runtime, particularly for Radiance models where potentially thousands of geometries pass through the component. I imagine it’s for similar performance reasons that Rhino doesn’t run through all of the geometry in the model and re-check whether it’s still planar or valid after you change the model tolerance.
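For reference, that extra check would look something like this — a minimal sketch assuming ladybug_geometry’s Face3D.check_planar method, with a made-up tolerance and sample vertices:

```python
# Minimal sketch: validate the planarity of a Face3D at a given tolerance.
# The 0.01 tolerance and the vertices below are just example values.
from ladybug_geometry.geometry3d.pointvector import Point3D
from ladybug_geometry.geometry3d.face import Face3D

tolerance = 0.01
pts = [Point3D(0, 0, 0), Point3D(5, 0, 0), Point3D(5, 5, 0.05), Point3D(0, 5, 0)]
face = Face3D(pts)

try:
    face.check_planar(tolerance, raise_exception=True)
except ValueError as e:
    print('Non-planar at this tolerance: {}'.format(e))
```

It’s cheap for one face but it adds up when thousands of Radiance geometries stream through a single component.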
Granted, if you want to know whether all geometry in your model is planar, you can always run it through the HB Validate Model component. That will check all geometry across the entire model for non-planarity at the model tolerance. Could this work for you, or do you think we really need to take a performance hit in order to always perform this planarity check as the Aperture is created?
Understood that adding extra checks slows things down, and that HB Validate solves a lot of these issues. For sure, I think integrating that Validate component into all our definitions moving forward makes a lot of sense. Personally, our models are already big and slow, so a few extra cycles to check and validate these types of things wouldn’t make me sad. It took a lot longer for me to diagnose why we were suddenly missing a bunch of windows.
I guess I do kind of think that if planarity is an important requirement for an LBT face, it should be checked during creation, though? Actually, now that I think of it, using Planarize Brep before sending geometry to the Aperture components would probably be the best move then, right? We can add that in since we don’t mind the extra time, and others can leave it off if they don’t mind non-planar geometry, or if they are already certain it will never be non-planar.
I definitely do think, though, that the ‘Planarize Brep’ component should raise an error if the geometry can’t be planarized, rather than just outputting None. Shouldn’t it?
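Something along these lines is what I’m picturing — just a GhPython-style sketch, where ‘planar_brep’ is a stand-in name for whatever the planarize step returns:

```python
# Sketch: surface a visible Grasshopper error instead of silently
# outputting None when the planarize step fails.
import Grasshopper

planar_brep = None  # stand-in for a planarize result that failed
if planar_brep is None:
    ghenv.Component.AddRuntimeMessage(
        Grasshopper.Kernel.GH_RuntimeMessageLevel.Error,
        'Brep could not be planarized at the current model tolerance.')
```

That way the failure shows up red on the canvas instead of just vanishing downstream.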
I wish this were simpler, @edpmay, but this is really more of an issue with decisions that McNeel made with Rhino.
You are getting a null there because the Rhino geometry was created at a different Rhino model tolerance and is no longer valid in your current Rhino document. If you were to open a new Rhino instance and change the model tolerance back to the one at which the geometry was created, then everything would work correctly like it does on my machine here, including the Planarize Brep:
I could write a component that reports all of these invalid Rhino geometries at a given tolerance, but it’s really not practical to have every component check for this case every time, because it should not happen if you stick to the same model tolerance throughout the life of the geometry. You’d just be weighing down the main use cases in order to address the edge case of users deciding to change the model tolerance.
If you really wanted it, I guess we could add a check just in the places where we already handle failures. For example, we could put it in a component like HB Add Subface so that, if non-planar geometry fails to be added to a parent, we check whether planarity at the current tolerance is the problem. Would that address your concern here?
Ah! That makes sense then - thanks for taking such a close look.
And yeah, totally: I understand about not larding things up to check for some edge-edge-edge case. No need to go adding a ton of checks and whatnot for something like this.
I think I understand the failure point here at least - we’ll see if we can come up with a good internal QA/QC process for these kinds of things in the future.