Result of PMV calculator is different from that of CBE Thermal Comfort Tool

It seems the result of the PMV calculator is different from the one obtained from the CBE Thermal Comfort Tool.

I recall that the PMV calculator component uses the same code as the CBE tool for its calculation. Why, then, is there this discrepancy?

Appreciate your advice!

PMV.gh (442 KB)

According to the comfort_models.py source code on:

https://github.com/CenterForTheBuiltEnvironment/comfort_tool/blob/m…

… the output for:

print comfPMV(27.5, 27.37, 0.18, 61.14, 1, 0.3, 0)

… is:

[-0.059644171197520995, 5.073652266813937]

… and the output of

print comfPMVElevatedAirspeed(27.5, 27.37, 0.18, 61.14, 1, 0.3, 0)

… is:

[-0.14102300772003976, 5.412044629473883, 24.866353217014876, 26.82930735799878, 0.6706926420012209]
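For cross-checking the low-air-speed pathway that comfPMV implements, Fanger's PMV/PPD procedure can be reproduced with a stand-alone translation of the ISO 7730 equations. This is a sketch for verification only, not the CBE source itself:

```python
import math

def fanger_pmv(ta, tr, vel, rh, met, clo, wme=0.0):
    """Fanger PMV/PPD per the ISO 7730 procedure.

    ta, tr in deg C; vel in m/s; rh in %; met, clo, wme dimensionless.
    Returns (pmv, ppd).
    """
    pa = rh * 10.0 * math.exp(16.6536 - 4030.183 / (ta + 235.0))  # vapour pressure, Pa
    icl = 0.155 * clo          # clothing insulation, m2.K/W
    m = met * 58.15            # metabolic rate, W/m2
    w = wme * 58.15            # external work, W/m2
    mw = m - w
    fcl = 1.0 + 1.29 * icl if icl <= 0.078 else 1.05 + 0.645 * icl  # clothing area factor
    hcf = 12.1 * math.sqrt(vel)  # forced-convection coefficient
    taa, tra = ta + 273.0, tr + 273.0
    # iterate for the clothing surface temperature
    tcla = taa + (35.5 - ta) / (3.5 * icl + 0.1)
    p1 = icl * fcl
    p2 = p1 * 3.96
    p3 = p1 * 100.0
    p4 = p1 * taa
    p5 = 308.7 - 0.028 * mw + p2 * (tra / 100.0) ** 4
    xn, xf = tcla / 100.0, tcla / 50.0
    hc = hcf
    n = 0
    while abs(xn - xf) > 0.00015:
        xf = (xf + xn) / 2.0
        hcn = 2.38 * abs(100.0 * xf - taa) ** 0.25  # natural convection
        hc = max(hcf, hcn)
        xn = (p5 + p4 * hc - p2 * xf ** 4) / (100.0 + p3 * hc)
        n += 1
        if n > 150:
            raise RuntimeError("clothing temperature iteration did not converge")
    tcl = 100.0 * xn - 273.0
    # heat loss terms
    hl1 = 3.05e-3 * (5733.0 - 6.99 * mw - pa)         # skin diffusion
    hl2 = 0.42 * (mw - 58.15) if mw > 58.15 else 0.0  # sweating
    hl3 = 1.7e-5 * m * (5867.0 - pa)                  # latent respiration
    hl4 = 0.0014 * m * (34.0 - ta)                    # dry respiration
    hl5 = 3.96 * fcl * (xn ** 4 - (tra / 100.0) ** 4) # radiation
    hl6 = fcl * hc * (tcl - ta)                       # convection
    ts = 0.303 * math.exp(-0.036 * m) + 0.028
    pmv = ts * (mw - hl1 - hl2 - hl3 - hl4 - hl5 - hl6)
    ppd = 100.0 - 95.0 * math.exp(-0.03353 * pmv ** 4 - 0.2179 * pmv ** 2)
    return pmv, ppd
```

Run with the inputs quoted above, `fanger_pmv(27.5, 27.37, 0.18, 61.14, 1, 0.3, 0)` should land very close to the comfPMV output (about -0.06 PMV, about 5.1% PPD).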

So now we have a group of different PMV values, calculated by different methods, for the same input variables.

I’m quite confused here, and not sure what’s causing the difference and which one is the correct one…

Appreciate your advice!

Thanks!

Maybe I’m missing something in your explanation, but the results tables look essentially the same to me.

Are you worried because -0.0988 is different from 0.03? They are practically the same (the difference is not even 1% of the scale). The SET values are also the same. The differences can be caused by rounding or other similar reasons.

From what you sent, I find very good agreement between the tools!

-A.

Thanks, Abraham!

The main difference I see is the sign of the values: a negative value indicates a cool sensation and a positive value indicates a warm sensation, which are quite different conceptually in thermal comfort studies…

The value “-0.141” is about 42% “smaller” than “-0.0988”, so the difference between them is not small…

If we have different results, such as -0.55 and 0.49, for the same inputs, how do we determine whether the condition is within the neutral zone? And if such instances accumulate in a large-scale survey, the conclusions will be problematic.

Bottom line: if the same algorithm underlies the code of the methods mentioned above, why are the results not “similar” in terms of both absolute value and sign?

I think you are wrong in saying that the percentage difference between the values is what makes the result more or less reliable. Take the results as they are: the gap between -0.141 and -0.0988 is not even 1% of the significance of the result. When you are looking at PMV (or any other metric, for that matter), you have to judge differences relative to the possible values on the measuring scale, not relative to the values obtained; at least not in the range of values you are getting. A difference of 1-2% can be acceptable, for the reasons I mentioned before.
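To put numbers on that point (my own back-of-envelope check, using the 7-point PMV sensation scale that spans -3 to +3):

```python
pmv_cbe, pmv_ladybug = -0.141, -0.0988    # the two disputed values
scale_span = 6.0                          # PMV scale: -3 (cold) to +3 (hot)

diff = abs(pmv_cbe - pmv_ladybug)         # absolute gap in PMV units
pct_of_scale = 100.0 * diff / scale_span  # gap relative to the full scale

print(f"gap = {diff:.4f} PMV units = {pct_of_scale:.2f}% of the scale")
# prints: gap = 0.0422 PMV units = 0.70% of the scale
```

So although -0.141 is 42% larger than -0.0988 in relative terms, the two values sit only about 0.7% of the sensation scale apart, and both are well inside the neutral band.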

-A.

Grasshope,


While I am glad that you have posted this discussion, it is not a critical error and thank you, Abraham, for making this clear. The ever-so-slight difference in PMV was resulting from Ladybug’s use of the CBE code from a year ago, which used a ‘still air threshold’ of 0.15 m/s. The CBE group seems to have recently changed this variable to be 0.1 m/s (https://github.com/CenterForTheBuiltEnvironment/comfort_tool/blob/m…).
The ‘still air threshold’ is the air speed that is used to determine whether to use the original Fanger PMV model (which was developed with climate chamber surveys at low air speeds) or attempt to compute a ‘cooling effect’ from increased wind speed by running iterations of this PMV model until the results align with the SET calculation. As I understand it, the still air threshold is not a part of any building code and (like many of the coefficients and terms in the PMV model) it’s a fairly subjective value that is there to make the comfort models align with each other or align with climate chamber survey data. I don’t know what prompted the CBE group to change this value but this example points to a larger issue with PMV, which is something that I WOULD deem ‘critical’ in a broader scientific research sense.
Namely, we should be much more concerned about whether PMV is a good predictor of real-world occupant comfort votes than we should be concerned about getting all of our coefficients of our theoretical comfort models to be the same. For example, the real-world occupant surveys that were used to build the Adaptive comfort model found PMV to be more poorly correlated to occupant comfort votes than the simple operative temperature (or globe temperature). You can see the R2 values of these metrics to comfort votes on page 106 of this very comprehensive book.
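Coming back to the mechanics for a moment, the ‘still air threshold’ dispatch described above can be sketched as follows (the constant and the function names here are illustrative, not the actual Ladybug/CBE source):

```python
STILL_AIR_THRESHOLD = 0.1  # m/s in the current CBE code; older versions used 0.15

def choose_pmv_pathway(vel, threshold=STILL_AIR_THRESHOLD):
    """Pick the PMV pathway based on air speed.

    At or below the threshold, the original Fanger climate-chamber model
    applies; above it, PMV is iterated against SET to estimate a wind
    'cooling effect'.
    """
    return "fanger" if vel <= threshold else "elevated_airspeed"

# An air speed between the old and new thresholds switches pathways:
# choose_pmv_pathway(0.12, threshold=0.15) -> "fanger"
# choose_pmv_pathway(0.12, threshold=0.1)  -> "elevated_airspeed"
```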

So, if we are going to debate the accuracy of PMV calculations, I would much rather have us discuss this deeper, systemic issue of the PMV methodology as it relates to real people in real buildings. Or, if we are going to talk about differences of decimal places and subjective coefficients, let us couch the issue within the broader question of PMV’s accuracy.

All of this said, I have amended the ladybug code to align with the most recent CBE code so that you now get matching values between the two. I hope this helps and thank you both for starting the discussion.

-Chris


Thanks Chris for your thorough response.

Another question is the fitness of ANY comfort system/approach/scale for different local conditions (climatic, cultural, …). Many of those systems were developed for the locations where the research was/is done (the US, in this case). So what happens when those results are adopted in places whose cultures, behaviours, etc. are much different from the original ones?

This is a different range of question, but just wanted to make notice of it.

-A.

Thank you very much, Chris and Abraham, for the detailed clarification!

I understand that there are discussions among researchers about the utility of the PMV model as compared to the adaptive model, especially for non-air-conditioned, naturally ventilated spaces.

Our own thermal comfort survey in a tropical context also indicates that:

  1. people feeling thermally neutral (thermal sensation vote = 0) tend to be predicted as feeling slightly warm according to the calculated PMV values, and

  2. in non-AC, hybrid, or naturally ventilated spaces, the adaptive model (ASHRAE 2013) seems to predict the percentage of people feeling thermally neutral more closely, when compared against the percentage based on people’s actual thermal sensation votes, than the calculated PMV values do.
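As a point of reference for item 2, the ASHRAE 55 adaptive model reduces to a simple linear band around a neutral operative temperature. A sketch (check the current edition of the standard for the exact applicability conditions):

```python
def adaptive_comfort_ashrae55(t_out):
    """ASHRAE 55 adaptive comfort bands (deg C operative temperature).

    t_out is the prevailing mean outdoor air temperature; the model is
    intended for occupant-controlled, naturally conditioned spaces and is
    valid roughly for 10 <= t_out <= 33.5 deg C.
    """
    if not 10.0 <= t_out <= 33.5:
        raise ValueError("outside the adaptive model's applicability range")
    t_neutral = 0.31 * t_out + 17.8
    return {
        "neutral": t_neutral,
        "80pct": (t_neutral - 3.5, t_neutral + 3.5),  # 80% acceptability band
        "90pct": (t_neutral - 2.5, t_neutral + 2.5),  # 90% acceptability band
    }
```

For a tropical month with a prevailing mean outdoor temperature of 28 °C, this gives a neutral temperature of about 26.5 °C with an 80% acceptability band of roughly 23-30 °C.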

Nevertheless, I think only through continuous empirical study can we further validate which model is more appropriate under a given climatic context.

Thank you, too, for revising the PMV component!

Cheers!

Hello everyone,

This is a very interesting discussion, even though it started with a mathematical discrepancy.

To add to what you were saying I can convey my experiences from Malaysia, the tropics.

Most indoor spaces in Malaysia are indeed pretty close to what we call a chamber. By that I obviously don’t mean that they are literally climate chambers, but rather mechanically conditioned buildings with no operable openings (this, unfortunately, is the norm in the tropics).

There are of course limited areas where a mix of spaces exists, consisting of NV corridors and atria intermingled with air-conditioned shop lots (as in big malls, for example). To add to this complexity, as Abraham mentioned, there are distinct cultural differences in people’s norms, routines, and behaviors, which have concrete, material consequences for thermal comfort (e.g. clothing, where one part of the population is heavily clothed while another is, on the contrary, lightly clothed). To make things even worse, I have personally witnessed on a daily basis a very interesting result of the thermal comfort study Chris linked: that people in the tropics prefer their environments to be below neutral (i.e. slightly uncomfortable, that is, cold). Arriving at integrated thermal comfort strategies is then very difficult, if not impossible, under these conditions.

I work a lot with building certification, and in that area most of the discussion here is irrelevant. Simulations and studies that we are so easily able to perform with HB/LB (and we thank you for this) are indeed very rarely used. The typical way thermal comfort is assessed in the various green building tools is a simple range of temperature and humidity that is deemed comfortable (probably extracted from the same experiments that gave us PMV).

Interestingly enough, it seems, at least from the table Chris posted, that the approach of using temperature (and/or humidity) might be more accurate than using ranges of PMV. But what exactly are the ranges of comfort? And, more importantly, how can we influence those ranges? How can we finally avoid the standard approach of all practical design methodologies of “designing for the worst case”, which results in suboptimal solutions?

I have been thinking about these things for a while. I think it is a critical aspect of what I am doing, even though the industry really doesn’t give a damn. I have a kind of intuition that thermal comfort could be assessed and improved, at least here in the tropics, with design interventions that are neither inside (indoor climate) nor outside (outdoor conditions), but at the interface between them. I feel the impact of high-gradient temperature changes on people’s comfort ranges should be properly assessed. A single strategy (i.e. PMV) will probably always be suboptimal.

As a final note, perhaps another way to deal with the complexity inherent in these studies is to introduce more complexity! By that, I mean introducing adaptive comfort strategies in the buildings themselves. That way you get local adaptation instead of a single (suboptimal) one. But then, again, in the real world it is a challenge to convince clients and designers that the spaces will indeed be comfortable. That is why I think workflows like the ones Chris has provided us through his research are extremely important in the ‘effort to convince’.

When I get the time I will try to contribute what little I can by making a few CFD studies analysing the impact of these ‘thermal transition spaces’ in buildings. But short of real-life experiments that could calibrate various assessment models (ABM, anyone?), I can’t think of any other way we could practically assess this. Perhaps someone has already done so? Hell, maybe it’s already in the Adaptive Comfort book, which I’ve only skimmed through.

Anyways, long winded. Very nice discussion indeed. I appreciate all your experiences and expertise, they really help stimulate discussion and ideas.

Kind regards,

Theodore.

Hi @TheodorosGalanos,

Picking up on this interesting old topic: have you come across any news on tropical comfort standards? I’m having a hard time analyzing warm and humid climates and setting a meaningful adaptive comfort band.

best,
Olivier