Blinds/shades operation in sDA calculation

Found something in LM-83-12 about modeling exterior windows in the sDA calculation. By default, all exterior windows MUST include blinds, and the blinds close during any hour when more than 2% of the analysis area receives direct sunlight.

LEED v4 inherited this requirement. I think we should make it clear that HB[+]'s sDA calculation is sDA without dynamic shades, which is a little bit different from the standard LM-83 sDA.
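For context, sDA300/50% is the fraction of the analysis area that receives at least 300 lux of daylight for at least 50% of the occupied hours, with blinds operated per the rule above. A minimal sketch of that final aggregation step (plain NumPy, not the HB[+] API; the illuminance matrix is assumed to already be restricted to occupied hours):

```python
import numpy as np

def sda(illuminance, target_lux=300.0, time_fraction=0.5):
    """sDA300/50%: fraction of points that meet the 300 lux target
    for at least 50% of the occupied hours.

    illuminance: (n_points, n_occupied_hours) array of illuminance values.
    """
    hours_met = (illuminance >= target_lux).mean(axis=1)  # per-point fraction of hours
    return float((hours_met >= time_fraction).mean())     # fraction of passing points
```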


Yes! Also, please see the answers in this thread:


Thank you all! I will be working on this during the weekend. I'm planning to make some major changes to how Honeybee accesses the results for annual analysis and creates analysis points first, and then we can discuss the blinds.

@MingboPeng and @Mathiassn, in HB[+] you can set the blindStates, which will include the dynamic blinds in the calculation. Also, if there is any fixed shading, it will already be part of the analysis. I'm not sure I understand the limitation here.

I wish the people who came up with this very challenging metric had provided a number of example files with results that everyone could use as the base case. Based on my understanding, almost every tool currently does it slightly differently. [bunch of complaints here which I deleted for now!]

Here is a well-explained definition and calculation process for sDA: http://lightstanza.com/references.html


@Mathiassn and @MingboPeng,

I read both links and reviewed the IES LM-83-12 document. Thank you both for sharing your thoughts and resources.

We can automate the process of generating blind states with what we currently have implemented in Honeybee[+]; however, there are a couple of design decisions to be made. There is also a concern about Grasshopper performance once the number of analysis points grows beyond a couple of thousand and the number of window groups beyond a couple of dozen.

Honeybee[+] calculates the direct contribution of the sun separately from the diffuse sky, which means there is no limitation in calculating the hours when more than 2% of the analysis points in each grid receive more than 1000 lux from direct sun. The question is how to calculate the blind combinations effectively.
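As a sketch of what that trigger check could look like on the direct-sun results (plain NumPy on an assumed points-by-hours matrix, not the HB[+] API):

```python
import numpy as np

def blinds_closed_hours(direct_ill, threshold_lux=1000.0, area_fraction=0.02):
    """Return a boolean mask of the hours when blinds should be closed.

    direct_ill: (n_points, n_hours) array of direct-sun illuminance for one
    grid. Per LM-83-12, blinds close for any hour in which more than 2% of
    the analysis points receive at least 1000 lux of direct sunlight.
    """
    over = direct_ill >= threshold_lux   # (n_points, n_hours) booleans
    fraction = over.mean(axis=0)         # fraction of points over threshold, per hour
    return fraction > area_fraction      # True -> blinds closed that hour

# Example with random data: 200 points x 8760 hours
rng = np.random.default_rng(0)
direct = rng.uniform(0, 3000, size=(200, 8760))
closed = blinds_closed_hours(direct)
print(f"Blinds closed for {closed.sum()} of 8760 hours")
```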

Currently you can input/design the blind states when creating the case using window groups and blind states, but I'm thinking of adding another scenario that lets you test desired dynamic blinds and study what happens if you block a certain % of the direct and sky/diffuse contribution. That seems to be more in line with what IES LM-83-12 section 2.2.7 describes:

“If BSDF data cannot be used, and the windows have fabric shades or curtains, model the shades using
a combination of specular and diffuse transmittance. The specular transmittance should be equal
to the openness factor of the fabric, while the diffuse transmittance should be the total visible light
transmission (VLT) minus the openness factor. If the VLT is known and the openness factor is not known,
model the VLT as diffuse transmittance only. If the shade VLT is unknown, model the shade using 5%
diffuse VLT with no specular transmittance.”
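The quoted fallback rules reduce to a short decision sequence. A minimal sketch (the function name and structure are mine, not from LM-83 or HB[+]):

```python
def fabric_shade_transmittance(vlt=None, openness=None):
    """Return (specular, diffuse) transmittance for a fabric shade
    following the quoted LM-83-12 fallbacks."""
    if vlt is not None and openness is not None:
        # specular = openness factor, diffuse = VLT minus openness
        return openness, vlt - openness
    if vlt is not None:
        # openness unknown: model all of the VLT as diffuse
        return 0.0, vlt
    # VLT unknown: 5% diffuse with no specular transmittance
    return 0.0, 0.05
```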

“If BSDF data cannot be used, and the windows have white louver blinds (>80% reflectance); use a 20%
VLT diffuse distribution for both sunlight and skylight. The VLT of darker blind colors shall be depreciated
proportionally, to a lower limit of 10% diffuse VLT for black blinds (reflectance of 0%) for both sunlight and
skylight. Thus, blinds with 40% reflectance should be modeled at 15% VLT, while 60% reflectance should be set at 17.5% VLT, etc.”
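The louver rule is a linear interpolation between 10% diffuse VLT at 0% reflectance and 20% at 80% reflectance, which reproduces the quoted examples:

```python
def louver_blind_vlt(reflectance):
    """Diffuse VLT for louver blinds per the quoted LM-83-12 rule:
    linear from 10% VLT at 0% reflectance (black) up to 20% VLT at
    >= 80% reflectance (white)."""
    r = min(max(reflectance, 0.0), 0.8)
    return 0.10 + r * (0.10 / 0.80)

for r in (0.0, 0.4, 0.6, 0.8):
    print(f"{r:.0%} reflectance -> {louver_blind_vlt(r):.1%} VLT")
# 0% -> 10.0%, 40% -> 15.0%, 60% -> 17.5%, 80% -> 20.0%
```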

Finally, to address the potential performance issues with loading data, and for several other reasons, we will be pushing the analysis results to a database, which will make these calculations efficient.

Let me know your thoughts and if this satisfies what you’re trying to achieve.

Hi Mostapha,
Sorry for the late reply!

I might be a bit unclear here as I'm reading your post on my phone on the train.

However, I think you are well in line with what the LM states; the question is perhaps how we can maintain performance. Will it be possible to automatically split up a simulation if there are too many window groups and/or too many simulation points? Can we graft input data and have branches of rooms matched to the branches of their nearest windows?

Regarding shading materials, I think that trans materials and BSDF are really good choices.

Do you have an example file where the blind schedules are automatically generated and applied in the sDA?

I did something similar with HB legacy, where I created a CSV occupancy file for each room and then shuffled through results with and without blinds; it was a computationally heavy task.

Again,
Thanks and good job!
Might see you in London soon :slight_smile:

Hi @Mathiassn,

Yes. This is where we are heading with all the new changes. There are a number of opportunities, and we are trying to address all of them at the same time, which is one of the reasons it takes a long time to implement! I'll try to break them down here:

  1. Each window group can be simulated separately and in parallel. In a 3-/5-Phase simulation this is doubled, since the view matrix and daylight matrix can also be executed in parallel (see the sketch after this list).

  2. [In the case of a 2-Phase simulation] Each state of each window group can be simulated separately and in parallel.
    See this presentation for 1 and 2:

  3. Not all the points/sensors are related to all the window groups. Mapping points to window groups can potentially make the whole process more efficient. The side effect is that we will end up with many more matrices for each analysis.
  4. There are many hours for which we already know the results will be 0. Calculating those hours doesn't take long, but they take a lot of space and make the files larger, which affects the rest of the process for accessing the results. We can add a step to remove them from the process.
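To illustrate 1 and 2, here is a minimal sketch of farming out one run per window-group state with Python's standard library. `run_state` is a hypothetical placeholder, not a Honeybee[+] function:

```python
from concurrent.futures import ProcessPoolExecutor, as_completed

def run_state(window_group, state):
    # Hypothetical placeholder: run one simulation (e.g. one 2-Phase run)
    # for a single state of a single window group and return its results.
    return f'{window_group}:{state} done'

# Assumed example inputs: two window groups with one or two states each.
window_groups = {'south': ('clear', 'blinds_down'), 'north': ('clear',)}

if __name__ == '__main__':
    with ProcessPoolExecutor() as pool:
        futures = {pool.submit(run_state, wg, s): (wg, s)
                   for wg, states in window_groups.items()
                   for s in states}
        for future in as_completed(futures):
            print(futures[future], '->', future.result())
```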

Even when all of the above is implemented, we cannot rely on Grasshopper/Dynamo for loading any of these results. For each test point, for each window group, for each state, we will have 3 * 8760 values (see the back-of-envelope sketch after this list). That's why, for these methodologies to work at the scale of full buildings, we need to:

  1. Be able to break down the whole analysis into smaller pieces (the fancy term is micro-services).
  2. Minimize the overhead of accessing the data, which is where the database comes into play.
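For a sense of scale, a back-of-envelope count (the building numbers below are illustrative assumptions; only the 3 * 8760 per point/group/state comes from the post):

```python
points = 5000        # assumed analysis points for a full building
window_groups = 24   # assumed number of window groups
states = 2           # assumed states per window group
hours = 8760
values_per_hour = 3  # the "3 * 8760 values" from the post

total = points * window_groups * states * values_per_hour * hours
print(f'{total:,} values to store and query')  # 6,307,200,000 values
```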

https://github.com/ladybug-tools/honeybee-server-daylight/issues/4

I have been reluctant to create one, since with the current development the solution is not scalable, for the reasons discussed above. It will work for hundreds of points but will be slow for thousands and probably won't work for hundreds of thousands. I'm waiting to implement the database; once it is implemented, I will create a sample file. That's not to say there are no workarounds and simplifications for the whole process, but that's not our development philosophy. We need to get it right first, and then we will provide options to run a faster version if it fits users' needs.

That would be great! Then I can show you some of the API goodies and the possibilities for building your own logic. It will be much easier to do in person. :slight_smile:


Hello, may I ask if the blinds issue with the sDA analysis was solved in the latest version, or if there are any sample files? Thanks in advance.
