Discussion of New Comfort Mannequin Features in Ladybug[+]

Wonderful community,

I am starting this issue to gather some thoughts on how we should implement the thermal comfort mannequin in Ladybug[+] since I will be adding it soon, and it would be good to know what features should be designed in and what use cases people would like to use it for. Normally, I would post a topic like this in a GitHub issue, but there are a number of questions here that could benefit from people’s knowledge of the comfort literature, and it would be good to have wider community input. In particular, there are 4 things that I could use people’s thoughts on:

1. What should the source of the geometry be and how many geometry options should there be?

I was originally thinking of starting from the same comfort mannequins that we used in Ladybug legacy, but I know that @AndreaZani905 has assembled a much better set of human geometries for his recent paper at the Building Simulation conference:
Annual Radiation Discomfort: A New Climate-Based Framework For Modeling Short-Wave Solar Radiation In Indoor Spaces
Andrea Zani, Henry David Richardson, Alberto Tono, Stefano Schiavon, Edward Arens
BS2019_224_1_210338_Zani_2019-06-27_06-16_a.pdf (1.0 MB)
I know that you also did a good job, @AndreaZani905, in making sure that the surface areas of the geometries match one another and match the assumptions of the PMV model, which makes them well suited as a set. If you are able to upload and share the geometries here, @AndreaZani905, I think we can use your work as the basis of the new comfort mannequins in Ladybug[+]. Also, if anyone has any other thoughts on different types of human geometries that we should support, please feel free to share them.

2. How should the mannequins be subdivided for multi-node comfort models?

Even though it may be some time before we get complete multi-node thermal comfort models in Ladybug (like the Fiala model that was used to develop UTCI, or the Berkeley Advanced Human Thermal Comfort Model), I know that we want to set up the comfort mannequins of Ladybug[+] to be able to inform these models. For example, you should be able to get the mean radiant temperature of the hands separately from the torso. I honestly don’t know much about the Fiala model, so I had been planning to subdivide the mannequins according to the description of the Berkeley model in this paper. I am not entirely sure if the Fiala model follows the same standard of human subdivision but, if anyone has any insight on this topic, we would really appreciate your thoughts.

3. What file format should the geometry data be stored in?

The two options that really came to mind here are JSON and CSV. A hybrid approach between these two file types is also an option. Personally, I had been leaning towards a pure JSON representation since it is quick to parse in nearly all programming languages and it handles the hierarchy/subdivision of the different body components well (mentioned in the note above). That said, I know that JSON isn’t always as human-readable or editable as CSV, but it’s also true that a list of vertices in a CSV isn’t going to be readable no matter what you do. In any case, I put this question out there in case anyone has a preference here or has ideas about how they might use the raw geometry data that would influence this decision.

4. What methods for view factor calculation should be used?

For a while, I had been considering whether it was worth embedding certain types of view factor data with the mannequins (like unobstructed sky view factors for each of the geometry mesh faces). However, I think most of the cases where people want to apply the mannequin involve some type of shading, so information like this would just add to the file size without adding much value or calculation speed. I had also considered whether it is worth using EnergyPlus’s View3D utility to calculate view factors between the mannequin faces and interior surfaces. However, I have been leaning towards using Radiance for all view factor and shortwave solar calculations involving the mannequins after reading the insights of @sarith’s recent paper at Building Simulation:
A Critical Evaluation of Radiance as a Tool for Calculating Radiation View Factors.
Sarith Subramaniam, Sabine Hoffmann
BS2019_238_5_210617_Subramaniam_2019-06-24_17-27_a.pdf (811.0 KB)
@sarith, if you have any further recommendations here, they would be great to know, and if anyone else has any thoughts in this area, please post them.

Thank you all!

@chris

This is all looking really good; it’s great to see how much more advanced the new thermal comfort features in Ladybug[+] will be. I generally agree with your reasoning here, but I have a couple of additional thoughts on your last two questions.

/3. I second using JSON for storing the mannequin geometry. It seamlessly transitions between languages, and it has the critical feature of being built for the web. I know that there are a handful of us in the LBT development community who are already using the Python modules with CPython (for numpy/pandas integration), and JSON is the best way to store, access, and port data in these cases. Additionally, it’s nice to have the comfort geometry features consistent with honeybee-json.

I also agree that embedding view factors with the mannequin file would serve just to unnecessarily increase the file size. I’d add that, from the perspective of overall programming robustness, I think it’s generally better to rely on functions to compute additional object data rather than assigning it as properties whenever possible. This reflects a more ‘functional programming’ approach rather than an OOP style, which tends to be more robust since it prevents mutations/side-effects caused by forgetting to update object properties.

/4. Based on my understanding of Sarith’s paper, View3D calculates view factors between all surfaces in the scene and doesn’t use a faster probabilistic ray-tracing method? Definitely seems like Radiance would be better.

In addition to being faster, I think it’d be generally useful for Honeybee to expose Radiance-based view factors as a separate function, as there are a couple of non-daylighting-specific uses for it: thermal comfort in this case, but it can also be useful when generating 2D images from 3D scene geometries. Personally, I’ve always been intrigued by the idea of using view factors to automate the “decomposition and recomposition” of building geometry zones for daylighting and energy simulation at scale as implemented by my old energy lab director at P+W: BIM-Centric-Daylight-Profiler-for-Simulation-BDP4SIM-A-methodology-for-automated-product-model-decomposition-and-recomposition-for-climate-based-daylighting-simulation.pdf. That link shows the paper for daylighting, but I recall there’s a paper floating around for energy model recomposition/decomposition as well.

I’ll leave it to the radiance/comfort experts to provide more insight into your other questions.

S

Thank you for the response, @SaeranVasanthakumar, and sorry that I’ve been late in adding my own thoughts. Here they are:

  1. Glad to hear you agree that a pure JSON format is a good idea. And, since you bring up keeping the JSON format consistent with what we are using elsewhere in Ladybug[+], I think we could just use the ladybug_geometry Mesh3D schema to store the geometry data. The only extra “structure” we would need to add to this schema is something to distinguish which body component (i.e. arm, leg, torso) a given geometry belongs to (see the sketch at the end of this post).

  2. I know @sarith said he would weigh in here to clarify, but I think the issue is a bit less about absolute computational speed and a bit more about choosing the right calculation method for what we want to use it for. My understanding is that the geometric calculations used by View3D are actually very fast as long as your geometry obeys certain rules (i.e. view factors between simple polygons with few surfaces obstructing other surfaces). But I think many of the cases we want to test don’t really obey these rules, making ray tracing more attractive.

To your second point about exposing Radiance-based view factor calculations in Honeybee, I know that we will definitely add some recipes for view factor calculations to honeybee_radiance at some point in the coming months. Thanks for posting the paper.
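
Going back to the first point, here is roughly what such a JSON could look like, written out as a Python dict (just a sketch of my current thinking; the `body_parts` key and `ComfortMannequin` type name are hypothetical, while the vertices/faces follow the general pattern of the Mesh3D schema):

```python
# A rough sketch of a mannequin JSON as a Python dict. The "body_parts" key
# and "ComfortMannequin" type are hypothetical names for illustration;
# "vertices" and "faces" follow the general index-based pattern of the
# ladybug_geometry Mesh3D schema.
mannequin = {
    "type": "ComfortMannequin",  # hypothetical type name
    "body_parts": [
        {
            "name": "left_hand",
            "mesh": {
                "type": "Mesh3D",
                "vertices": [[0.0, 0.0, 1.1], [0.02, 0.0, 1.1], [0.02, 0.02, 1.12]],
                "faces": [[0, 1, 2]]
            }
        }
        # ... one entry per body part (torso, head, upper_arm_l, etc.)
    ]
}
```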

I’ll share a few observations that come from my work on the Arup Advanced Comfort Tool. I had a lot of contact with the Center for the Built Environment in making this tool, and I think there is quite a lot to critique about current work on thermal comfort.

1. All of the existing thermal comfort models that I’m aware of deal with a very limited range of body types (men weighing 72-74 kg). You can see the range of variation in Compilation of basal metabolic and blood perfusion rates in various multi-compartment, whole-body thermoregulation models. The lack of variation isn’t too surprising. All of the models are derived in some way from the Stolwijk model, which was developed for studying the thermal comfort of astronauts, who no doubt all had similar body types in order to fit into space capsules in the 1960s. However, the range of body types involved in subjective comfort studies has been much larger, and it’s not clear that the results of subjective studies correlate well with predictions from Stolwijk’s, Fiala’s, or Tanabe’s geometries.

2. The 16-part body model (head, chest, back, pelvis, upper/lower arms/legs, hands, and feet) is the most common. Zhang further subdivided the head (face, scalp, neck, and breathing zone) for the 19-part model used by the Berkeley Advanced Human Thermal Comfort Model. There are some problems with these models, however:

  • The locations of the divisions between the body parts are arbitrary. Human subject tests apply sensors to points on the body and then assume that the rest of the part behaves the same as it does at the sensor location.

  • In reality, the temperature of a body part is not uniform. For example, temperatures on the back of the neck are related more closely to the back than to the front of the neck or head, yet the neck is generally included as part of the head. It would probably make more sense to divide the body in terms of the locations of muscle groups or fat deposits, but instead we tend to divide it according to the locations of articles of clothing.

  • Often, we integrate the results from studies that used slightly different body part divisions or sensor locations. That can mean assigning conditions from one body part to another when one of the models doesn’t give enough granularity. In this respect, MRT calculations that divide the body into a larger number of parts are probably more useful.

4. You can calculate view factors with ray tracing a la Radiance, but this can be quite slow for large numbers of rays. OpenGL methods are much faster for this sort of thing.

Hi @chris

Over the past couple of weeks, I had several discussions with someone from a prominent consulting firm who has been using View3D for calculating view factors since 2010. Radiance clearly does much better with complex geometries in terms of speed. The point-to-surface view factor calculation that is possible through Radiance does not appear to have an equivalent implementation in View3D. There is also an issue pertaining to where the point is “looking” (which in Radiance is handled through the ray direction vector in the pts file).
In the case of a multi-segment manikin with around 200-5000 mesh faces, the ray-tracing calculation is unlikely to take more than a few seconds.

@sarith and @Nathaniel,

Thank you both for the very helpful responses. This has definitely helped inform what I plan to implement, so just to summarize the key things that this feedback means for the implementation:

1. Your critique that nearly all thermal comfort models use a limited range of body types, which are biased towards men, is duly noted, @Nathaniel. I guess that the ideal solution would be to offer a range of mannequins for different body types and encourage everyone to use all of them. In the long term, I think we could support that but, knowing that the vast majority of users stick with defaults, I think we might have the best chance at encouraging more inclusive design by making this default human geometry more “inclusive” or “averaged” across many body types rather than trying to support all possibilities. So I think we can try two things to attempt to accomplish this:

  • Take a critical eye to the more detailed mannequin geometries that we add and see if we can at least make them as androgynous as we can.
  • Include some basic scaling operations that allow people to approximate the geometry of children and the differences in average body size that exist across countries.

Now that I think about it, I feel that these features should probably take precedence over perfectly matching the assumptions of models like PMV.

2. Those critiques of multi-node thermal comfort models are also useful to know, @Nathaniel. In this case, I think that I’ll divide the more detailed mannequins up into 16 parts as the Berkeley model does, but I will try to lay the infrastructure for people to group these parts for models with fewer nodes or to further subdivide the mannequin if more sophisticated models that address some of your critiques come along (see the sketch below).
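
To illustrate the kind of grouping infrastructure I have in mind, here is a rough sketch (the segment names and groupings here are just illustrative, not a standard):

```python
# Hypothetical grouping scheme: map coarse nodes onto the 16 detailed
# segments so per-segment results can be aggregated for simpler models.
SEGMENTS_16 = [
    "head", "chest", "back", "pelvis",
    "left_upper_arm", "right_upper_arm", "left_lower_arm", "right_lower_arm",
    "left_hand", "right_hand",
    "left_upper_leg", "right_upper_leg", "left_lower_leg", "right_lower_leg",
    "left_foot", "right_foot",
]

# e.g. collapse to a 4-node model by area-weighted averaging of segment MRTs
FOUR_NODE_GROUPS = {
    "head": ["head"],
    "trunk": ["chest", "back", "pelvis"],
    "arms": ["left_upper_arm", "right_upper_arm", "left_lower_arm",
             "right_lower_arm", "left_hand", "right_hand"],
    "legs": ["left_upper_leg", "right_upper_leg", "left_lower_leg",
             "right_lower_leg", "left_foot", "right_foot"],
}

def group_mrt(segment_mrt, segment_area, groups):
    """Area-weighted aggregation of per-segment MRT values into coarser nodes."""
    result = {}
    for node, segs in groups.items():
        total_area = sum(segment_area[s] for s in segs)
        result[node] = sum(segment_mrt[s] * segment_area[s] for s in segs) / total_area
    return result
```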

4. Thank you for the suggestion of the OpenGL methods, @Nathaniel, but I think it might be a little too far outside the wheelhouse of the engines we use, at least for the first version (we will definitely keep it in mind for down the line). On the question of Radiance vs. View3D, Radiance seems like the better tool for this particular job of modeling view factors over human geometries, and thanks for confirming this, @sarith. Also, for the point-based methodologies used by the microclimate maps, Radiance seems better suited since View3D doesn’t have point-to-surface capabilities, as you say, @sarith. The direction vector that gets passed to Radiance for these point-based studies is actually very helpful for point-based shortwave modeling with the SolarCal model, since this model has separate terms for the irradiance falling on the person from above and for that reflected off the ground from below. So running two separate directions (looking up and down) is very helpful here (see the sketch below). However, for longwave MRT calculations, we typically want to take out this directional bias and just compute view factors assuming a sphere of equally-weighted rays emanating from a point. @sarith or @Nathaniel, do you have any recommended workflows to accomplish this in point-based view factor studies in Radiance?
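
For reference, the two-direction setup I described above could be written out as a Radiance sensor-point file like this (a minimal sketch; the grid points and file name are made up):

```python
# Minimal sketch: write a Radiance .pts file (x y z dx dy dz per line) with
# two sensors per map node, one looking up and one looking down, for the two
# SolarCal irradiance terms. Points and path are illustrative only.
points = [(2.0, 3.0, 0.6), (4.0, 3.0, 0.6)]  # x, y, z of each map node

with open("solarcal_sensors.pts", "w") as f:
    for x, y, z in points:
        f.write(f"{x} {y} {z} 0 0 1\n")   # looking up: sky + direct sun term
        f.write(f"{x} {y} {z} 0 0 -1\n")  # looking down: ground-reflected term
```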

@chris,

A lot of my responses depend on what you want to do with this thermal manikin, and in particular whether the manikin interacts with its thermal environment (i.e. does it store and give off heat?) or is just a passive probe used to measure the environment.

1. The manikin is really a resistor/capacitor network. Each body part has thermal conductance, thermal resistance, and thermal capacitance. I’ve done some experiments with varying the surface area, mass, and fat content of body parts in order to simulate different body types. I suppose the geometry of the manikin could be parameterized to match the surface area. I believe Tanabe did some sensitivity analysis of manikin geometric complexity in CFD, where the two manikins differed somewhat in surface area.
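
To sketch the resistor/capacitor idea in code (all names and values here are placeholders for illustration, not physiological data):

```python
# Toy lumped-parameter node: each body part stores heat in a thermal
# capacitance and exchanges heat with its neighbors (other parts or the
# environment) through thermal resistances. Placeholder model, not a
# validated thermoregulation implementation.
def step_node_temp(t_node, t_neighbors, resistances, capacitance, q_metabolic, dt):
    """Advance one body-part node temperature by an explicit Euler step.

    t_node      : current node temperature (degC)
    t_neighbors : temperatures of connected nodes/environment (degC)
    resistances : thermal resistance to each neighbor (K/W)
    capacitance : thermal capacitance of the node (J/K)
    q_metabolic : metabolic heat generated in the node (W)
    dt          : time step (s)
    """
    q_exchange = sum((t_n - t_node) / r for t_n, r in zip(t_neighbors, resistances))
    return t_node + dt * (q_exchange + q_metabolic) / capacitance
```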

4. I’m not sure I understand what view factors you want to calculate. Are you treating the manikin as transparent, so that you would sample a full sphere? In reality, each skin surface is only exposed to a hemisphere oriented normally to the skin. Using area-to-area view factors or Shirley-Chiu sampling from selected points on the skin (i.e. good old Radiance -I+ sampling) makes a lot of sense here. Also, do you plan to account for the body’s view factor to itself? (The inside of the thigh is warmer than the outside, which is still not accounted for in most human thermoregulatory models.) I think this might be what @sarith was getting at with the 200-5000 mesh faces. I’m also not sure I understand why shortwave and longwave would use different sampling schemes. Is there a different granularity involved in each?

@Nathaniel,

Thank you for sharing your thoughts and I apologize for the late response. To briefly share my thoughts:

  1. That’s a good suggestion. When we eventually get to the point of integrating the mannequin radiation results with multi-node models, I think we should revisit this idea of synchronizing the surface area of the body parts (and the volumes they enclose) with the thickness/resistance of the different tissues. For now, I think we may just focus on the use of the mannequin to inform radiant temperature studies and aim to support an “averaged” type of human geometry. Then, we can gradually work our way up to higher complexity.

  4. I realize that there are two separate issues here and only one of them is related to the title of the post, so feel free to discount the second for now; we can discuss it later in another issue:

  • The issue that is related to this post is generating view factors from each mannequin face to each surface surrounding the body (for longwave MRT calculations where we have a set of temperatures associated with each surface around the mannequin). For this, using Radiance as you and @sarith suggest seems like a good idea. You also bring up a good point here, which is how to handle the case where the body “sees” itself. For these cases, I was imagining discounting the view factor of the body to itself in the final MRT calculation by normalizing the view factors that don’t “see” the body by their sum (a small sketch of this follows the next bullet). Admittedly, I don’t know if this is best practice, though, or if there’s a better way to account for it by assuming the skin is at a given temperature by default. I guess the issue here is that it’s hard to estimate skin temperature without first having radiant temperatures to input to a multi-node comfort model, creating a “chicken and egg” situation. Also, as you point out, these models aren’t really calculating the temperature of the specific body parts that see other parts, so I’m not sure that approach would really be much more accurate. Let me know if the first solution I mentioned, discounting the view factor of the body to itself, seems reasonable.

  • In cases where we want to build up high-resolution maps of the MRT that an occupant would experience across a building, it’s not really practical to place an entire mannequin at each node of the map. For this situation, we can use simplified models like SolarCal for the shortwave MRT calculation. The longwave calculation in this case can be done with a single sphere of rays cast from the center point of the body to determine view factors to each of the surrounding surfaces (view factor = intersected rays / total rays). I realize that, because each point of a grid-based study in Radiance usually requires a direction, it isn’t clear to me whether I can do this type of spherical view factor calculation without resorting to spherical image-based studies. If you have any suggestions, please let me know.
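
Here is a small sketch of the renormalization idea from the first bullet above, together with the standard MRT-from-view-factors calculation (just my current thinking, not established practice):

```python
import numpy as np

# Sketch: drop the fraction of rays that the body "sees" of itself, rescale
# the remaining surface view factors so they sum to one, then compute
# longwave MRT from the surrounding surface temperatures via the standard
# relation T_mrt^4 = sum(F_i * T_i^4).
def longwave_mrt(view_factors, surface_temps_c):
    """view_factors: face-to-surface view factors with the body's self-view
    already excluded; surface_temps_c: matching surface temperatures (degC)."""
    vf = np.asarray(view_factors, dtype=float)
    vf = vf / vf.sum()  # renormalize so the non-self view factors sum to 1
    t_k = np.asarray(surface_temps_c, dtype=float) + 273.15
    return float((vf * t_k ** 4).sum() ** 0.25) - 273.15
```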

@chris,

Just some comments on the 4th point:

4a. I would suggest that the body should “see” itself for the MRT calculation where there are concavities. Solving the multi-node network to get skin temperatures is generally an iterative process anyway, so iterating the MRT calculation as well doesn’t seem like a big deal. Fortunately, you don’t have to recalculate the view factors (unless the body is in motion).

4b. This would work for single node models, but I’m not so sure about multi-node models. Consider for example a map of MRT in a space situated between an infinite hot plane and an infinite cold plane (or if you prefer, an astronaut on a space walk). The MRT calculated at each point in the space would be halfway in between the temperatures of the two planes, but a body placed at any of those points would experience one hot side and one cold side. The thermal sensations calculated by the two methods are quite different. Perhaps there is some way to store view factors with directional components, rather like the daylight coefficient matrix in the two-phase method.

You can certainly do a spherical view factor calculation with Radiance by specifying ray directions directly rather than using the -I+ option to create a directional sensor. I believe that @navidhatefnia has a few examples of this. I would not use an image-based calculation, as most projections oversample certain directions, though a cube map projection would not be too bad.
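
For example, you could generate a roughly uniform set of sphere directions with a Fibonacci spiral and write them out as rtrace sensor rays, so that the view factor to a surface is simply (rays hitting it) / (total rays). A sketch only; the sensor point and file name are illustrative:

```python
import math

# Generate n roughly uniform directions on the unit sphere via a Fibonacci
# spiral and write them as rtrace rays (x y z dx dy dz) from a single point.
def sphere_directions(n):
    golden = math.pi * (3.0 - math.sqrt(5.0))  # golden angle
    dirs = []
    for i in range(n):
        z = 1.0 - 2.0 * (i + 0.5) / n     # evenly spaced in z => uniform on sphere
        r = math.sqrt(1.0 - z * z)
        theta = golden * i
        dirs.append((r * math.cos(theta), r * math.sin(theta), z))
    return dirs

x, y, z = 1.5, 2.0, 1.1  # illustrative sensor point at the body's center
with open("sphere_sensors.pts", "w") as f:
    for dx, dy, dz in sphere_directions(1024):
        f.write(f"{x} {y} {z} {dx} {dy} {dz}\n")
```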

Hi @chris,

Sorry for the super delayed answer; it has been a super hectic couple of months. I have attached the Rhino file with the three manikins that we used in my papers with the CBE. As you mentioned, the three manikins have different resolutions but the same body area. For a single-node model, I think that Sarith’s method for calculating view factors to assess longwave MRT makes a lot of sense.

Let me know if you have any questions.

Cheers,

Andrea

Radiation_comfort_Manikins.3dm (1.1 MB)