Ibby
March 2, 2018, 8:08am
1
Hi folks,
I am running an annual daylight study using Honeybee[+]. It is taking too long, and my computer says dctimestep is running. Is it possible to run dctimestep in parallel? I have access to a Linux workstation; could dctimestep be run on that computer? If yes, can anyone share a guide on how to save the files and open/run them on a Linux system?
Thank you
sarith
March 2, 2018, 9:47pm
2
Someone had asked this question on the Radiance mailing list a while ago. You can find my reply here:
https://www.radiance-online.org:447/pipermail/radiance-general/2017-September/012260.html
Are you running an image-based simulation?
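In the meantime, for context: one common way to speed this step up is to launch a separate dctimestep process for each independent daylight coefficient matrix (for example, one per window group) and let the operating system spread them across cores. Below is a minimal Python sketch of that idea; the file names are hypothetical placeholders, and this is not taken from the linked post or from Honeybee[+]'s own code.

```python
# Hypothetical sketch: run one dctimestep process per window-group matrix in parallel.
# File names are placeholders, not Honeybee[+]'s actual output names.
import subprocess
from concurrent.futures import ThreadPoolExecutor

jobs = [
    ("wg_1.dmx", "sky.smx", "wg_1.ill"),
    ("wg_2.dmx", "sky.smx", "wg_2.ill"),
    ("wg_3.dmx", "sky.smx", "wg_3.ill"),
]

def run_dctimestep(dc_matrix, sky_matrix, output):
    # dctimestep <daylight coefficient matrix> <sky matrix> > <output>
    with open(output, "w") as out:
        subprocess.check_call(["dctimestep", dc_matrix, sky_matrix], stdout=out)

# Each dctimestep process is single-threaded, so running several at once
# lets the OS schedule them on separate cores.
with ThreadPoolExecutor(max_workers=len(jobs)) as pool:
    list(pool.map(lambda job: run_dctimestep(*job), jobs))
```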
Ibby
March 2, 2018, 11:28pm
3
Hi,
Thank you for helping me on this.
No, I am only running a grid-based daylight simulation in Honeybee[+] with the HoneybeePlus_Run Radiance Analysis component.
Unfortunately that was the only component which, although iffy, would read data from about 900 points. I tried the three-phase and five-phase components; they would write the ILL files but would throw an error when reading them.
The HoneybeePlus_Run Radiance Analysis component would also throw a read error if I tried to push slightly more points. It was very finicky. Maybe I need to split the floor plan into parts to run the annual daylight analysis.
Thank you,
This is a known issue
Ibby:
Unfortunately that was the only component which, although iffy, would read data from about 900 points. I tried the three-phase and five-phase components; they would write the ILL files but would throw an error when reading them.
This is a known issue, and it's why we're adding a database to Honeybee[+]. You can read more under the upcoming release section here: Honeybee[+] 0.0.04 for Grasshopper, Ladybug 0.2.0 and Honeybee 0.1.7 for Dynamo Release
and here:
GitHub issue (opened 08 Jan 2018; labels: enhancement, critical, radiance, honeybee, backlog):
## why? aka what is wrong with the current workflow?
There are currently two mechanisms to import the results of a daylight analysis:
1. In the initial design, which is currently used for most of the recipes other than `AnnualDaylight`, the results are imported for each `AnalysisPoint` from every window group, for each time step, and separately for total and direct contribution. In other words, if there are 3 window groups in a scene for a daylight coefficient study, each point will have 3 (window groups) * 8760 (hours) * 2 (direct vs. total) = 52,560 integer values associated with it. Add a couple of dynamic blinds and a study with 2,000 test points, and that is well over 100 million values.
Inside Python itself that's not a problem. It brings a great level of flexibility for post-processing and applications like blind controls, lighting controls, etc., but when you load all of this data inside Grasshopper and/or Dynamo [it slows down the whole interface](https://github.com/ladybug-tools/honeybee-grasshopper/issues/10) for reasons that are out of my control.
2. The reality is that most Honeybee users don't really need all of that data for post-processing; what they really care about is a quick calculation of annual metrics such as `Daylight Autonomy`. For this reason Honeybee has an alternative solution which reads the results from the results file, calculates the annual metric, and keeps none of the initial data. This workflow addresses the slowdown but limits the flexibility of post-processing. I currently limited this method to a single window group so that I didn't have to access several files to calculate the annual metrics.
Opening and closing multiple files with iterators is easy enough to do, but it is not the most efficient way to access the data, especially when you're dealing with gigabytes of values.
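To make the second mechanism concrete, here is a minimal sketch of the "read, reduce, discard" idea; it is an illustration only, not Honeybee's actual parser, and it assumes a simplified whitespace-separated results file with one row per analysis point and one column per hour.

```python
# Minimal sketch of the "read, reduce, discard" approach (mechanism 2).
# Assumes a whitespace-separated results file with one row per analysis point
# and one column per hour; this is a simplification, not Honeybee's actual format.
def daylight_autonomy(ill_file, threshold=300.0):
    """Return Daylight Autonomy (%) per point without keeping the raw values."""
    da = []
    with open(ill_file) as results:
        for line in results:  # one analysis point per line
            values = [float(v) for v in line.split()]
            if not values:
                continue
            above = sum(1 for v in values if v >= threshold)
            da.append(100.0 * above / len(values))
    return da
```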
## What if we keep the values in the file but point each analysis point to the correct byte to get the results?
This is something I tested before implementing the current design, and I realized that it can easily fail once you start moving between operating systems. Also, any change a user makes to the results files breaks the whole idea of being efficient, since we would have to redo the mapping of bytes to results; now picture a model with several analysis grids and dynamic blinds to get a sense of how much remapping a simple edit can trigger.
## New proposal
The new proposal is to dump the results from the analysis into an SQLite database and rewrite all the post-processing functionality as SQL queries. `sqlite3` ships with the Python installation, which helps us avoid an extra dependency and installation issues.
I tested the idea over the weekend but didn't finish it. It will require several edits to the `AnalysisPoint` and `AnalysisGrid` classes, and we need to change the recipes to reverse the current results structure.
This workflow adds some extra time for creating the database of `total` and `direct-sun` values, but it then saves us a lot of time when post-processing the results.
There will be a separate table for each result file, where rows are hours of the year and columns are results for each point. For studies with a single window group the analysis is pretty straightforward, while for studies with several window groups and blind states one can `JOIN` the tables in a single query (see the sketch below the table).
Table: `windowgroup..blindstate..total`

hour | pt_1 | pt_2 | pt_3 | ...
--- | --- | --- | --- | ---
hour1 | 0 | 0 | 0 | ...
hour2 | 0 | 0 | 0 | ...
hour3 | 0 | 0 | 0 | ...
One main critique of this structure will be that the number of columns changes with the number of test points, but for what we need that will be fine, especially if it is well documented. We will also include methods to regenerate the original Radiance files from the database in case one needs them afterwards for any reason.
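As an illustration only (the real schema and table names may differ), here is a minimal sketch of the proposed layout using Python's built-in `sqlite3`: one table per result file with an hour column plus one column per point, and Daylight Autonomy computed as a single SQL query.

```python
# Illustrative sketch of the proposed layout with Python's built-in sqlite3.
# Table and column names are hypothetical; the final implementation may differ.
import sqlite3

conn = sqlite3.connect("results.db")
n_points = 3

# One table per result file: an hour column plus one column per analysis point.
columns = ", ".join("pt_%d REAL" % i for i in range(1, n_points + 1))
conn.execute("CREATE TABLE IF NOT EXISTS wg1_state0_total (hour INTEGER, %s)" % columns)

# Insert one row per hour of the year (dummy illuminance values here).
rows = [(h, 120.0, 310.0, 560.0) for h in range(1, 8761)]
conn.executemany("INSERT INTO wg1_state0_total VALUES (?, ?, ?, ?)", rows)
conn.commit()

# Daylight Autonomy for pt_2 at a 300 lux threshold as a single query.
da = conn.execute(
    "SELECT 100.0 * SUM(pt_2 >= 300) / COUNT(*) FROM wg1_state0_total"
).fetchone()[0]
print("DA for pt_2: %.1f%%" % da)

# For several window groups or blind states, JOIN their tables on the hour
# column and sum the contributions before applying the threshold.
conn.close()
```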
Hi, I was wondering if this solution has been implemented in the latest Honeybee and Ladybug packages for Dynamo?
I am currently running a solar analysis, and dctimestep is taking quite a while. I have seen that it might be possible to run dctimestep in parallel; is there an example of how to update the Dynamo nodes to do this?