LBT - LB_IncidentRadiation vs HB_AnnualRadiation recipe differences

Hi,
I’m trying to compare these two radiation analyses. The range of values is similar (not equal), which is good.
But looking at the radiation patterns on the surfaces, I can say there is definitely something weird with HB_AnnualRadiation. See the images below.
I’m not touching the radiance_par input (just using the defaults), but surely the defaults can’t affect the patterns this significantly.
I can’t guess why this is happening, so the only thing I can do, for now, is report it.

Thanks,
-A.



Radiation_Bug.gh (83.6 KB)


Hi @AbrahamYezioro. I revised your script a bit and the results look similar and promising now. I just set the Quadonly option to False (to keep the number of test points the same in both cases) and adjusted one Radiance setting (-ab to 1).
Radiation_Rev.gh (82.3 KB)

Thanks @Asisnath,
The -ab 1 really makes both analyses similar (direct + diffuse, no reflections). This was my mistake.
But … changing -ab to a different value can’t produce such radiation patterns. Start with the roofs of the tall buildings: I expect them to receive the same values as with -ab 1. On the east and west faces you get diagonal areas whose values are the opposite of those on the facing context surfaces, and that doesn’t make sense.
I changed the materials to low-reflectance ones (plastic, 0.35 reflectance), thinking that maybe the defaults were making a difference. That didn’t help either; same results.
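For readers following the -ab discussion: here is a minimal sketch in plain Python (the radiance_par helper and the -ad/-lw defaults are my own illustration, not the recipe’s actual defaults) of the kind of string that can be passed to the recipe’s radiance_par input:

```python
def radiance_par(ab=1, ad=5000, lw=2e-05):
    """Build an rtrace-style Radiance parameter string.

    ab -- ambient bounces: 0 disables the ambient calculation, 1 adds the
          diffuse sky with no inter-reflections, 2+ adds reflected light.
    ad -- ambient divisions (sampling rays per ambient value).
    lw -- limit weight below which rays are terminated.
    """
    return '-ab {} -ad {} -lw {}'.format(ab, ad, lw)

# With -ab 1 the annual recipe should approximate LB_IncidentRadiation,
# which models direct sun + diffuse sky but no reflections.
print(radiance_par(ab=1))  # -ab 1 -ad 5000 -lw 2e-05
```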

I’d be glad to understand why this is happening.

-A.


@AbrahamYezioro I agree with you. I think it’s about the test points we are considering. Once I set Quadonly to False, the results were identical.

Hi @Asisnath,
I didn’t get what you mean about the effect of Quadonly. To me it is just cosmetic and doesn’t change the essence of the result.
In the meantime I added the HB_Legacy radiation analysis to the file. It is in clear agreement with LB_IncidentRadiation, which makes me wonder even more about what is happening with/in HB_Radiance.
@mostapha, @chris, any insights?
-A.



Radiation_Bug_01.gh (594.9 KB)

Hello @AbrahamYezioro and @Asisnath, I am also really interested to know what is going on here. I was comparing RAD recipes as well, specifically the HB+ and LBT 1.2.0 recipes, and I got some weird results when picking the same HOY and comparing the total radiation and the visual pattern on the south elevation of a test building in a shaded context. From what I understand, there seems to be a 30-minute shift in how the two versions interpret the same HOY, but I would like to know why (is it a bug or intentional?) and how it affects comparisons between results obtained in the past with HB+ and those from the latest release.

Here’s the link to my post, with more information and the .gh file if someone wants to have a look: Different results for same HOY when comparing LBT 1.2.0 RAD recipe to HB+ RAD recipe

Maybe my problem is related to yours, or maybe you or someone else can explain why this is happening. I’m looking forward to a solution/explanation for both of these issues,
bye for now and thanks,
Matteo

Hi @Matteo,
Not sure the two issues are related. I just checked your post and answered there with what I think happened. There, at least, the results make some sense.
In this post the resulting radiation pattern from LBT does not, in my view.


I did not re-run with quadonly turned off, but it seems to be what creates the disparity between the results, as @Asisnath pointed out. The locations of the test points are different. Also, with the quad-only option the test-point vectors are not all outward-facing. This is something we should look into.
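To make that normal check concrete, here is a minimal sketch against the ladybug_geometry API (the flipped_faces helper and the convex-volume assumption are mine; this is not how the grid component works internally) that flags faces of a joined study mesh whose normals point back toward the volume’s center:

```python
from ladybug_geometry.geometry3d.pointvector import Point3D

def flipped_faces(mesh, volume_center):
    """Return indices of Mesh3D faces whose normals face the volume center.

    Assumes a roughly convex volume (like the boxes in this test), so an
    outward normal should point away from the center.
    """
    flipped = []
    for i, (centroid, normal) in enumerate(
            zip(mesh.face_centroids, mesh.face_normals)):
        outward = centroid - volume_center  # Vector3D from center to face
        if normal.dot(outward) < 0:  # normal points back into the volume
            flipped.append(i)
    return flipped

# Hypothetical usage with a grid mesh generated from a 10m box:
# print(flipped_faces(grid_mesh, Point3D(5, 5, 5)))
```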

An update on this issue.
Testing just a single surface seems to give consistent results; more than one surface starts to create the weird radiation patterns.
I still don’t get why, but at least I’m starting to understand when it happens.
-A.

Thanks @devang,
Indeed the normals are not consistent with quads on. This happens when you use the box as a whole.
I tried exploding it and analysing the faces separately. Then the normals are fine, but the results are still weird.
This is a pickle.
-A.

That’s new information. Thanks @AbrahamYezioro

This almost certainly seems to be an issue with the quad_only grid option and the normals it produces. I’ll look into it now. Everything looks fine no matter what Radiance parameters are used (as long as quad_only is False or left at the default).

Yep. It’s a bug in the ladybug_geometry method that joins the individual mesh faces of the box together. I will push a fix soon, but another way to avoid this bug (if you want to use the quad_only option) is simply to explode the geometry before passing it to the “LB Generate Point Grid” component.
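For reference, a minimal sketch of that workaround as it might look inside a GhPython component (the exploded_faces helper is my own illustration; BrepFace.DuplicateFace is standard RhinoCommon):

```python
import Rhino.Geometry as rg  # available inside Grasshopper's GhPython

def exploded_faces(brep):
    """Yield every face of a Brep as its own single-face Brep.

    Feeding these to "LB Generate Point Grid" instead of the joined box
    sidesteps the mesh-joining code where the bug lives.
    """
    assert isinstance(brep, rg.Brep), 'expects a Rhino Brep'
    for face in brep.Faces:
        yield face.DuplicateFace(False)  # detached copy of just this face

# e.g. with a Brep input named _geo on the component:
# point_grid_input = list(exploded_faces(_geo))
```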

The fix has been merged:

It will take a little more than an hour for it to work its way through the CI and become available through the “LB Versioner”.

Thanks for reporting and debugging @AbrahamYezioro and @devang

Hi @chris,
Thanks. The fix indeed makes the normals point in the right direction. Unfortunately, the results are still wrong for the Quads On option (Quads Off is fine now).
Exploding the breps with Quads On seems to work (though it’s not an ideal option).
Right now I’m even more clueless as to why this is happening.
Quads Off:

Quads On:

Quads On+Explodes Breps:


-A.

It looks like it’s taking a while for the continuous integration to update. I suggest checking again tomorrow.

I have good news and less good news:
The good: it is working now!! Thanks for the solution, @chris.
The less good: at this point in the day, even though the Versioner completed the update process, it is not updating to the latest versions, at least not to the fix mentioned above. I manually edited the file (mesh.py), commenting out the lines described in the link above. Of course I’m not happy doing that, but somehow the Versioner is not getting the update through…
-A.

Yes, sorry Abraham. I don’t know what has gone wrong with our Continuous Integration system. There must be a server outage somewhere, or maybe something went wrong during maintenance of the services we use. If it’s not fixed by tonight ET, I’ll do the update manually.

It looks like a couple of the cloud-based services that we use to test the software are down at the moment (at least on the east coast of the US, for open source projects). As I once heard someone say, “this is the closest thing to a snow day in software development.” If it really comes to it, I can always go back to the stone age and run all tests and send all PRs from my desktop. But it’s more likely that we just wait a day or two and ride it out. I’ll let you know if the fix gets merged into what you can get with the LB Versioner.

No worries @chris,
Nothing critical in those updates, so better to use your time on something more useful.
The snow day will pass and the sun will shine … :slight_smile:
Thanks,
-A.