I ran into a problem when trying out Honeybee surface flux simulation. Can you please have a look at my definition and explain why this is happening:
It might not be infinite, but I haven't gotten beyond the initialization stage when I input breps that are slightly more complicated than a cube.
flux_test.gh (182 KB)
Your file seems to have run pretty fast on my machine. I have the E+ simulation finishing in 38.3 seconds:
What type of CPU is in your machine (# of cores and GHz)?
Also, I can recommend the “Color Surfaces by EP Result” component for coloring your cube’s surfaces in a way that also produces a legend.
I can reproduce the same result with the cube. However, if I crank up the geometry complexity (on the far left side of the definition there is a slider I set to 1; try setting it to 2, which gives a sphere approximation with more faces at larger numbers), the simulation never gets beyond the init stage.
Hi Kristoffer, A sphere will generate a lot of mesh faces, and E+ gets pretty slow when you increase the number of faces, but it should finish at some point.
Mostapha’s response is absolutely correct and, if you want more info about how Honeybee is meshing the sphere in order to be able to run it through E+, you should check out this video that gives a full explanation:
The simulation will run much faster if you can find a good way to planarize your geometry first, since the default meshing algorithms can create a lot of surfaces, which really slows the calculation down.
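If you want a quick way to spot which faces are the culprits, you can measure how far each quad's fourth vertex sits from the plane of the other three (a generic plain-Python sketch, not a Honeybee component; the tolerance you pick is up to you):

```python
def quad_planarity_deviation(p0, p1, p2, p3):
    """Distance of p3 from the plane through p0, p1, p2.
    Zero means the quad is planar (E+-friendly)."""
    def sub(a, b):
        return tuple(x - y for x, y in zip(a, b))
    def cross(a, b):
        return (a[1] * b[2] - a[2] * b[1],
                a[2] * b[0] - a[0] * b[2],
                a[0] * b[1] - a[1] * b[0])
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    n = cross(sub(p1, p0), sub(p2, p0))  # plane normal from first three points
    length = dot(n, n) ** 0.5
    if length == 0.0:  # degenerate face, nothing to measure
        return 0.0
    return abs(dot(sub(p3, p0), n)) / length

# A unit quad with one corner lifted 0.5 units out of plane
dev = quad_planarity_deviation((0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0.5))
```

Faces with a deviation above your model tolerance are the ones the mesher will split into many triangles.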
Hey Chris and Mostapha,
Thanks for your replies. I now know that E+ has difficulties with non-planar meshes. I updated the file to approximate the sphere with planar surfaces instead of smaller non-planar ones (Mesh Sphere Ex is not the best choice, I see). The initialization stage is now ~1 sec. I think (I’m not the expert here, so correct me if I’m wrong) the problem has something to do with E+’s method of solving adjacent surfaces when parent and child surfaces are non-planar.
What exactly is happening behind the scenes when E+ is initializing?
flux_test_planar-parent-child.gh (181 KB)
Your file runs relatively fast on my computer, but it takes a while to process the results after the simulation ends. It took about 3-4 minutes, but after that I get the calculations you are performing (yearly max flux, beam component).
The IDF file has no issues with parent and child surfaces as you suspected.
E+ has no problem simulating anything you give it. The adjacency is something that needs to be solved for the sake of “good” simulation results.
So, I’m not sure from your question if there is still an issue on your side or not …
Which of the two files, 1) flux_test or 2) flux_test_planar-parent-child, are you referring to?
It might be my computer that is slow, but file 1) (with the slider on the far left set to two or more) does not give me any results within the first 60 minutes of simulation. File 2), on the other hand, initializes in 1 second, simulates for 3 minutes or so, and gives perfect results.
The reason I suspect non-planarity of slowing down the simulations is simply based on some assumptions of my own.
I used the last file you uploaded, which is flux_test_planarparentchild.gh, as is. Now I understand what you are asking, and I see you also get the same results as I do.
My guess and experience say the same as what Mostapha and Chris already said. But I’ll add that it is not just the large number of surfaces but also the number of windows you get; they cause an increase in simulation time.
Now I’m just curious what you intend to do with this?
OK, I think I am on the right path then.
Regarding your curiosity I hope the following makes sense:
To start with, I just wanted to understand what happens in E+ when calculating energy fluxes. All this sphere-approximation business was only to get a sense of the flux from every direction over the entire year.
However, my final goal in this case is to approximate annual flux rates through any type of transparent surface. Ideally I want real-time feedback on the indoor thermal environment, which I know is quite difficult, and of course many concerns other than solar gain are very important. For now I only want to estimate the impact of changing solar intensity.
I made a definition that approximates maximum direct irradiance (only the beam component for now); it depends heavily on Ladybug, but no EnergyPlus/Honeybee or other simulation programs are required. The idea is to make a critical sky component from weather data that simply computes the max watts from certain points in the sky. All surfaces with windows first go through a simple shadow analysis (isovist; everything behind the normals is culled), and then the contribution from each critical sky component is computed. This of course needs to be adjusted by the angle of incidence, the g-value (SHGC), window area and other factors. To do this I need to validate the concept, and I thought HB/E+’s flux would make that much easier. Now that I have found the issue with the non-planar child objects, it is certainly super simple to validate the heat and beam fluxes.
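To make the incidence adjustment concrete, here is a minimal sketch of the per-patch calculation I have in mind (the vectors, the 0.6 g-value and the areas are made-up illustration values, not anything from my definition):

```python
import math

def beam_gain(i_beam, sun_dir, win_normal, shgc, area):
    """Transmitted beam gain [W] through a window for one sky patch.

    i_beam:     beam irradiance normal to the sun [W/m2]
    sun_dir:    unit vector from the window toward the sun
    win_normal: unit outward normal of the window
    shgc:       solar heat gain coefficient (g-value)
    area:       glazed area [m2]
    """
    # Cosine of the angle of incidence via the dot product
    cos_theta = sum(s * n for s, n in zip(sun_dir, win_normal))
    if cos_theta <= 0.0:  # patch behind the facade: culled
        return 0.0
    return i_beam * cos_theta * shgc * area

# Example: sun 45 deg off the window normal, 800 W/m2 beam,
# g-value 0.6, 2 m2 of glass
c = math.cos(math.radians(45))
s = math.sin(math.radians(45))
q = beam_gain(800.0, (c, 0.0, s), (1.0, 0.0, 0.0), 0.6, 2.0)
```

Summing this over the filtered sky patches would give the critical gain for a facade; a real version would also account for angle-dependent transmittance of the glazing, which a constant SHGC ignores.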
I’ll post an update on my progress as soon as I get results and promise to share my files when they make more sense.
I will admit that I do not understand exactly what you are trying to achieve, but it sounds like you may be trying to reinvent Radiance with native Grasshopper components. Are you aware that you can do basic radiation analysis with Ladybug and the validated Radiance functions? Check out the 02_Radiation.gh and the 04_EnvironmentalAnalysis.gh example files here:
and these two corresponding videos:
As long as you are not worried about conductive flux through a window and you just want to look at solar flux, there’s no need to use E+.
I agree with Chris,
He got ahead of me by minutes with his answer. With LB or HB you have all the options to get the amount of radiation (direct and indirect) hitting any surface, and/or you can get the radiation at the origin (the sun) … or maybe something is misunderstood?
Don’t get me wrong here. I’m not trying to invent something radically new, and this thread has taken a completely new direction. I am, as you know, using the great Honeybee and Ladybug tools; they provide all the means for what I am trying to achieve.
What I need is an evaluation of passive heat/solar gain from a certain facade that is as fast as possible. I know my building can cool to a certain degree (let’s say 80 W/m2; for now, let’s forget other internal gains), and I want to be sure my facade is not letting excessive amounts of heat into the room/building. Normally I would run a full-blown simulation to count my overheating hours and thereby evaluate the facade. To speed up the process, the idea is to evaluate overheating hours in a faster way. What I am thinking is that excessive gains may be estimated by counting the high-intensity irradiation patches in a critical sky component (or whatever such a thing would be called) that surpass my sensible cooling load. My hope is that, for any facade visible to those sky patches, this count would come out very similar to the number of overheating hours, if properly calibrated against a simulated model. However, I have no idea right now if this can be done.
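The counting itself would be trivial once the hourly gain estimate exists; something like this sketch (the hourly values and the 80 W/m2 limit below are illustrative numbers, not results):

```python
def overheating_hours(hourly_gain_w_per_m2, cooling_limit=80.0):
    """Count hours where the estimated passive gain exceeds the
    sensible cooling capacity [both in W/m2 of floor area]."""
    return sum(1 for g in hourly_gain_w_per_m2 if g > cooling_limit)

# Toy example: four hours of estimated gains, two exceed 80 W/m2
hours = overheating_hours([35.0, 95.0, 120.0, 60.0])
```

The hard part, of course, is whether the sky-patch estimate tracks the simulated gains closely enough for this count to correlate with simulated overheating hours.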
Why do this? Speed, convenience, whole building thermal analyses.
@Chris and @Abraham The critical sky component is made with LB’s Radiance-based radiation component, filtering out the beam components with the highest effect from a yearly EPW file.
@Chris Conductive heat gains are also important, especially if the facade is badly insulated, so the next step is to filter the outdoor temperature in parallel with that critical sky component, then do a static heat transfer analysis and combine it with the effect of direct sun. Again, no idea if it works.
Hope it makes sense. I am a little embarrassed that I drew you into this little experiment; it was not at all the point of the discussion. But now that we are into it, I would like to know what you think. If it works, it’s kinda neat, at least I think it is.