After weeks of waiting, I am so excited to use the new versions of LBT, HB, and DF for my microclimate project at an urban scale (500 meters). Thanks so much for your team's efforts!
The issue that remains unsolved for me is probing the surface temperature at 40K points over the ground in the city (grid size = 2 meters). Since the legacy version can only do the job with very limited computation capacity (it took 24 hours to run 10K points), I am looking forward to the new version, which might make a difference in simulation speed. However, I could not convert the legacy HB components to any of the new ones, such as OutdoorComfRecipe. Here is my GH file, which includes both the new and the legacy versions for testing the surface temperature of the ground geometry in the example UWG file. LBT_HB_UrbanMicroClimatMap_MRT_UTCI.gh (1.4 MB)
I would greatly appreciate it if anyone can work out how to calculate the annual outdoor surface temperature, MRT, and UTCI at ground level for a large-scale urban area.
You should look at the comfort_mapping.gh sample file that is included in the release if you want to see how to use the new comfort maps.
Also, there is a good reason why I said that the thermal maps in LBT 1.2 are a “Draft.” While you certainly can rebuild your Grasshopper scripts with the new components now, I think you’ll likely want to wait for the stable release of LBT 1.3 before trying to simulate a full 40K outdoor points in the new plugin. The reason is two-fold:
The new maps do not yet use the EnergyPlus surface temperature for the longwave MRT of outdoor points. As the release notes say, the shortwave calculation of the new maps is much better than Legacy (thanks to Radiance) but I'm still working on using all that EnergyPlus gives us for the longwave outdoor MRT calculation. This should be there in the next release.
There are a couple of changes that I wanted to make to how the last step of the comfort recipe is run in order to better parallelize the comfort model calculation. Right now, the comfort calculation of each radiance sensor grid runs on a separate CPU so you are not going to get that good performance if you lump all of your outdoor sensors into a single sensor grid. So, for the time being, it’s up to you to split up sensor grids based on how you want to parallelize them before you assign them to the model. In the next release, you’ll be able to use the sensor_count_ input to the comfort recipes to split the grids for not just a more parallelized radiance calculation but also a parallelized comfort model calculation.
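To illustrate the grid-splitting Chris describes, here is a minimal plain-Python sketch (not a Ladybug Tools API call) that chunks a flat list of sensor points so that each chunk can become its own sensor grid and run on its own CPU:

```python
def split_points(points, cpu_count):
    """Split a list of sensor points into roughly equal chunks, one per CPU."""
    chunk_size = -(-len(points) // cpu_count)  # ceiling division
    return [points[i:i + chunk_size] for i in range(0, len(points), chunk_size)]

# 40K ground sensors on a 200 x 200 grid (coordinates are placeholders)
points = [(x, y, 0) for x in range(200) for y in range(200)]
grids = split_points(points, 8)
print(len(grids), len(grids[0]))  # 8 grids of 5000 points each
```

Each of the resulting lists would then be turned into a separate sensor grid before being assigned to the model, so the comfort calculation can run 8 grids in parallel instead of one giant grid on a single CPU.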
Thanks, @chris. When will LBT 1.3.0 be released?
In the end, I have decided to give up on calculating 40K probes of surface temperature for MRT and UTCI, as my machine is not a supercomputer.
To minimise the computation cost, there is another legacy component, LB Outdoor Solar Temperature Adjustor, that maps MRT and UTCI without considering longwave radiation (if I am not wrong). It is much faster than any MRT component in HB, though I have not found an alternative for it in the new LBT 1.2.0 when trying to rebuild everything.
The components currently available in the new LBT are called OutdoorSolarMRT and HumanToSky. I reckon that they don't consider sun blocking by buildings, which was integrated in the legacy component called contextShading.
For example, the two images below demonstrate the UTCI mappings in the morning (from 1 to 11 AM, 18 to 25 June), around the summer solstice, using the legacy 0.0.65 and the new version 1.2.0. Obviously, the new version (on the left) doesn't consider any morning-sun impact, since its UTCI patches are completely symmetrical east-to-west.
The Ladybug-only method will work if you only care about direct and diffuse sky shortwave (so no shortwave reflections and no longwave MRT beyond the computation of sky temperature). If it's "much faster than HB", then that is just because it is not as accurate and there are fewer things being considered. Also, the Ladybug method does not scale as nicely to multiple CPUs, but maybe running it for just a week or a month for 40K points would be manageable.
I think your question about alignment comes from the fact that Legacy started counting hours of the year from 1 while the LBT plugin starts counting from 0 (the LBT plugin is better-aligned with real time, which goes from 0:00 to 23:00 and not from 1:00 to 24:00). So you’ll have to add 1 to the hours you plug into the Analysis Period component of Legacy if you want them to be aligned.
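That one-hour offset can be captured in a trivial, purely illustrative converter (the function names are made up for this example, not LBT functions):

```python
# Legacy counts hours of the year 1-8760 (clock 1:00-24:00);
# LBT counts 0-8759 (clock 0:00-23:00).
def lbt_to_legacy_hoy(hoy):
    """Convert an LBT hour-of-year to the Legacy convention."""
    return hoy + 1

def legacy_to_lbt_hoy(hoy):
    """Convert a Legacy hour-of-year to the LBT convention."""
    return hoy - 1

# Midnight on Jan 1 is hour 0 in LBT but hour 1 in Legacy.
print(lbt_to_legacy_hoy(0))  # 1
```

So when comparing results, the hours plugged into the Legacy Analysis Period component should be one greater than those used in LBT.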
Hi Chris, I am currently looking into the same comparison of the UTCI calculation between the legacy and the new LB. I would like to understand the significant difference in the MRT results between the two versions. Both simulations use matching time-steps. SolAdjustMRT_Legacy_vs_New1.3_V2.gh (848.3 KB)
I changed the default assumptions input to the SolarCal model in the new LBT plugin and this might account for some of the differences. In Legacy, the human subject was always facing South by default but, in LBT, we have the human subject rotate such that their back is always partly to the sun (at a 45 degree angle to it). This new default assumption seemed more realistic since people usually don’t like to look directly into the sun.
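The rotated-subject default can be pictured with a small, purely illustrative geometry helper (this is not part of the Ladybug Tools API): given the sun's azimuth, it returns the azimuth the person faces so that their back is at a 45-degree angle to the sun.

```python
def person_azimuth(sun_azimuth):
    """Azimuth (degrees) a person faces so their back is 45 deg off the sun.

    Facing directly away from the sun would be sun_azimuth + 180;
    offsetting that by 45 degrees puts the sun partly behind one shoulder.
    """
    return (sun_azimuth + 180 - 45) % 360

# Sun due south (azimuth 180): the person faces 315 (northwest).
print(person_azimuth(180))  # 315
```

Under the Legacy default, the same person would always face 180 (due south) regardless of the sun position, which explains part of the MRT difference between the two versions.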
Looking at your file, the MRT values out of the Legacy and LBT components are pretty close to each other:
The big difference in the UTCI values from the file that you uploaded comes from the fact that you connected wind speeds to the LBT UTCI component but you did not do the same for the Legacy UTCI component.
Hi @chris,
I’m currently also looking into analysing a large urban model with LBT/HB. I would like to generate a UTCI map. Is there a tutorial file that I could use as a basis? Currently, I’m running into the problem that the calculations just take forever.
Regarding the calculation time, would you recommend Legacy or LBT, and can you recommend a maximum number of analysis points?
Also, regarding trees, would you suggest using low-poly trees or a single surface, or is there a certain material (vegetation) for that?
The calculation time is not just dependent on the number of sensors but also on how complex your geometry is, what Radiance parameters you are using, and how good your machine is. So, no, I can’t recommend a maximum number of sensors without knowing everything else but it’s always good to start small and then you can scale it up later after you understand the workflow.
It depends on how important it is for you to match the exact geometry but I tend to use a single surface for the tree canopy and I just assign a shade transmittance schedule to it that mimics the overall transparency of the tree.
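As a rough picture of that approach, here is a plain-Python sketch of such a transmittance schedule. The leaf-on dates and transmittance values are illustrative assumptions for a deciduous tree in the northern hemisphere, not Ladybug Tools defaults:

```python
# Hypothetical annual transmittance schedule for a deciduous tree canopy:
# more light passes through in winter (bare branches) than in summer (full leaf).
LEAF_ON_START, LEAF_ON_END = 120, 300  # approx. May-Oct (days of year), assumed
WINTER_TRANS, SUMMER_TRANS = 0.7, 0.2  # fraction of light transmitted, assumed

schedule = []
for hour in range(8760):
    doy = hour // 24 + 1  # day of year, 1-365
    trans = SUMMER_TRANS if LEAF_ON_START <= doy <= LEAF_ON_END else WINTER_TRANS
    schedule.append(trans)

print(len(schedule), schedule[0], schedule[4000])  # 8760 values; winter vs summer
```

A single shade surface with such a schedule is far cheaper to simulate than detailed tree geometry, while still capturing the seasonal change in shading.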