How to load the results from a file into Grasshopper using Honeybee[+] API?

Cool! I’ll ask him about it on Monday then.

Hi @MohammadHamza, @mostapha,

Thanks for sharing the reload component, I can see how useful it would be for me too.

Unfortunately I get the following errors when I try to use it to reload a DC analysis:

Any idea why this is happening?

I have plugged in the same list of hoys as in the original run.
Is this component meant to work only for sub-hourly sunlight-hours analyses? The moy value seems to be involved in the issue here…

Many thanks for your help.

Jocelyn

Hi Jocelyn,

Could you please share the *.ill file and *.pts file that you are trying to load? Also, please attach the hoys list that you are using (maybe as a normal text file or Grasshopper file, whatever works for you). That would allow me to take a closer look.

Best,
Hamza

Hi Hamza,

See the link below.
What I would like to be able to do is load much heavier files (after processing them externally to recompose blind combinations outside of GH), but let’s start with this lighter one (still 1700 points).

Many thanks,

Jocelyn

hi @JocelynUrvoy,

A quick but hacky alternative would be to open the same Grasshopper definition that wrote the file you processed externally and run the simulation in GH (also make sure the reuse mtx input is set to true). As the black window appears, let it run until the oconv or rcontrib calls appear (about 10 seconds), then press CTRL+C to close it. This will write empty result files in the result folder inside your ladybug folder.

Collect the scene.ill or any other relevant result you processed externally and overwrite the files there. With the toggles in GH left set to true, you just need to trigger a small change upstream in the definition for it to refresh (what I was doing was plugging or unplugging a panel into the rad parameters, adding “-ab 5” for example); this will not make any change to the simulation.

Wait for the results to reload and they should show up.

Good luck

Thanks for sharing your files @JocelynUrvoy .

I am also getting the same error, and honestly, I am puzzled myself. Out of curiosity, where did you generate the list for the hoys? I am assuming this is the same list you supplied to the SkyMatrix component earlier to run the simulation as well?

In the meantime, you can follow the excellent suggestion from @OlivierDambron. Just make sure you set the ‘reuse Dmtx’ input to true. However, I believe you don’t need to manually close any terminal windows that pop up. Just run the simulation again and let it do its thing, and hopefully it should present you the results without any hassle.

Sorry I couldn’t help you out on this one. I will let you know if I figure a way out…

The problem is that the _hoys you provided is a list of string values, which causes set_values_from_file() to fail to parse the hours correctly. The fix is quite simple: input them as numerical values.

@MohammadHamza maybe you should write a check + warning message for this in your component.
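To illustrate that check in plain Python (a sketch of the idea, not the component’s actual code):

```python
# The result parser expects hoys as numbers. If they arrive as strings
# (e.g. read out of a panel), convert them before passing them on.
def to_numeric_hoys(hoys):
    """Convert a list of hour-of-year values to floats, raising a
    clear error instead of failing deep inside the result parser."""
    numeric = []
    for h in hoys:
        try:
            numeric.append(float(h))
        except (TypeError, ValueError):
            raise ValueError('hoy %r is not a number.' % (h,))
    return numeric

print(to_numeric_hoys(['8.5', '9.5', 10]))  # [8.5, 9.5, 10.0]
```

A warning along these lines would turn the cryptic parsing failure into an actionable message.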



Hi @vhoang,

Thanks for the solution; indeed that was the issue, and the files now load nicely and quickly!
I should have thought about the data format!

Thanks @OlivierDambron for the tip as well. I had tried reloading the results using ReuseDmtx before, but the first step of calculating the daylight matrices was still wasting time. Your trick should help avoid that, I guess.

For the record, I am now able to load a 1.7 GB .ill file (about 27k points for 2800 hours) in 4.4 minutes, which is acceptable.
My next step is to figure out how to lighten the .ill files (write an integer lux value rather than the full scientific float?), or to do the addition of the several window-group contributions externally.
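On that second idea, summing window-group contributions externally and writing integer lux could be sketched in plain Python like this (a whitespace-separated, one-row-per-sensor .ill layout is assumed; this is an illustration, not the Honeybee implementation):

```python
def sum_ill_rows(file_rows):
    """file_rows: one list of text rows per window-group .ill file
    (whitespace-separated values, one row per sensor point).
    Returns rows of the summed contributions as integer lux strings,
    which are much more compact than full scientific floats."""
    out = []
    for rows in zip(*file_rows):
        # transpose: one tuple of per-group values per hour column
        cols = zip(*([float(v) for v in row.split()] for row in rows))
        out.append(' '.join(str(int(round(sum(c)))) for c in cols))
    return out

# Toy example: two window groups, two sensor points, three hours each.
g1 = ['100.2 200.6 0.0', '50.0 60.0 70.0']
g2 = ['10.0 20.0 30.0', '1.0 2.0 3.0']
print(sum_ill_rows([g1, g2]))  # ['110 221 30', '51 62 73']
```

For real 1 GB+ files you would stream the files line by line rather than hold the rows in memory, but the arithmetic is the same.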

Cheers

@JocelynUrvoy

Using @MohammadHamza’s component to reload a large file with 11k points for 8760 hours, all my RAM got taken and it takes a long time. Did you find an effective way to split the results in post-processing?

Perhaps a sub-annual simulation leaving out the night hours, where zeros take up considerable space in the .ill files.


@OlivierDambron

Yes, I always remove the night hours by testing whether the global horizontal illuminance is > 100 lux.
Combined with an 8am-6pm occupancy schedule, that leaves me with only 2800 hours, which is slightly better.
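For reference, that filtering can be sketched in plain Python (the 100 lux threshold and the 8am-6pm occupancy are the values from this post; the illuminance data itself would come from your weather file):

```python
def daytime_occupied_hoys(ghi_by_hoy, lux_threshold=100,
                          occ_start=8, occ_end=18):
    """Keep only the hours of the year that are both daylit (global
    horizontal illuminance above the threshold) and occupied.
    ghi_by_hoy maps hour-of-year (0-8759) to illuminance in lux."""
    keep = []
    for hoy, ghi in sorted(ghi_by_hoy.items()):
        hour_of_day = hoy % 24
        if ghi > lux_threshold and occ_start <= hour_of_day < occ_end:
            keep.append(hoy)
    return keep

# Toy example: hoy 8 is daylit and occupied; hoy 3 is dark; hoy 20 is after hours.
print(daytime_occupied_hoys({3: 0, 8: 5000, 20: 200}))  # [8]
```

The resulting hoys list is what you would feed to the simulation instead of all 8760 hours.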

Then I am lucky enough to have a decent amount of RAM on my work machine (64 GB), but 2800 hours x 27k points = 1.7 GB still takes 4 minutes to load and makes the Rhino session use 20 GB of RAM.

Bit of script attached.
ALJ_daytime_discourse.gh (486.9 KB)

It is my understanding that DA should only account for hours that are both occupied AND daylit, i.e. the dark hour between 8am and 9am in winter doesn’t burden your results.
But I’d be glad to hear I am not the only one interpreting it that way :smiley:


@vhoang Thanks for the fix.

@JocelynUrvoy Glad to know you got it to work, and thanks for sharing your progress with us. Yes, I second your interpretation of Daylight Autonomy as ‘Daytime’ Daylight Autonomy. I especially like your idea of trimming down the .ill files of night hours.

@OlivierDambron Yes, this component is not very effective for big files. I have already pushed it to the limit and it just completely fails above 40-50k points, even on a 64 GB RAM system. In fact, Grasshopper crashes after using about 50 GB of RAM, with another 14 GB still to spare. This is a very temporary solution; Mostapha hopes to release the more elegant SQL database implementation for result management soon.

@JocelynUrvoy and @MohammadHamza

I wonder what would happen if we trim the night hours and later set occupancy hours for annual metrics that extend beyond daylight at certain times of the year (e.g. a fixed 8-20 occupancy in winter, when the sun sets at 17:00).

Would it be possible to calculate DA[300] for daytime hours only?

@OlivierDambron

For that I use an “AND” gate between the occupied hours and the daytime hours in the script I attached, circled in red below.
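For what it’s worth, that AND-gate idea feeding a DA[300] calculation can be sketched in plain Python like this (a rough sketch of the concept, not the Honeybee implementation):

```python
def daylight_autonomy(illum_by_hoy, occupied_hoys, daylit_hoys, threshold=300):
    """DA = percentage of the hours that are both occupied AND daylit
    in which the point receives at least `threshold` lux."""
    hours = set(occupied_hoys) & set(daylit_hoys)  # the AND gate
    if not hours:
        return 0.0
    met = sum(1 for h in hours if illum_by_hoy.get(h, 0) >= threshold)
    return 100.0 * met / len(hours)

# Toy example: hoy 18 is occupied but not daylit, so it is excluded
# from the denominator instead of dragging the score down.
illum = {8: 500, 9: 200, 10: 350, 17: 400}
print(daylight_autonomy(illum, occupied_hoys=[8, 9, 10, 17, 18],
                        daylit_hoys=[8, 9, 10, 17]))  # 75.0
```

This is exactly the “daytime Daylight Autonomy” interpretation discussed above: dark-but-occupied hours never enter the denominator.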

Hello everyone,

The link that @JocelynUrvoy shared has now expired! Can anyone share with me the results of a simulation with a couple of thousand points? I’m testing the performance of the new implementation and I want to see how well (or badly) it works against larger datasets.

I will share a code sample here soon (in a couple of days) so you can also test it on your side.

Thanks!

Hi @mostapha

here’s a new link with the same content (this was from before the fix above: How to load the results from a file into Grasshopper using Honeybee[+] API?)


It’s 1700 points.

I’ve got some heavier files if you prefer (25k points), just let me know.

Looking forward to testing the new version!
Thanks Mostapha

Hi @JocelynUrvoy,

Thanks! I don’t actually need the Grasshopper file. I will only need the list of hours if you didn’t run it for the whole year, but that’s fine; I can just use some random hours. If possible, I will need the 3 .ill files in the results folder. In the new workflow the final results are generated inside the database from the 3 other calculations.

Let’s go for the 25k points. This one loaded into the database successfully and I could process it with no issues, but it was a bit of a cheat as I was only loading the final results. :wink:
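While waiting for the official sample, here is a toy sketch of the database idea using Python’s built-in sqlite3 (the schema is my own guess for illustration, not the actual Honeybee[+] implementation):

```python
import sqlite3

def load_results(db_path, grid_name, rows, hoys):
    """Store one value per (grid, sensor, hoy) so any subset of hours
    or sensors can be queried later without reloading the whole .ill
    file into memory. `rows` holds one list of values per sensor."""
    conn = sqlite3.connect(db_path)
    conn.execute('CREATE TABLE IF NOT EXISTS results '
                 '(grid TEXT, sensor INTEGER, hoy REAL, value REAL)')
    for sensor, row in enumerate(rows):
        conn.executemany(
            'INSERT INTO results VALUES (?, ?, ?, ?)',
            [(grid_name, sensor, hoy, val) for hoy, val in zip(hoys, row)])
    conn.commit()
    return conn

# Two sensors, two hours; query one hour without touching the rest.
conn = load_results(':memory:', 'grid_1',
                    [[100.0, 200.0], [300.0, 400.0]], hoys=[8.5, 9.5])
print(conn.execute('SELECT value FROM results WHERE hoy = 9.5').fetchall())
# [(200.0,), (400.0,)]
```

The appeal over the in-memory approach is exactly this selective querying: RAM use stays flat no matter how large the annual file is.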

Hi Mohammad,
does this still work with HB[+] version 0.0.05?

In my case, the component will output the following error:

  1. Solution exception:Multiple targets could match: int(type, IList[Byte]), int(type, object), int(type, Extensible[float])

I was not getting the error with 0.0.04. Any ideas as to why?

This should fix the issue:

You can also avoid the issue by passing the start line number:

Change

analysisGrid = AnalysisGrid.from_file(_pts_file)

to

analysisGrid = AnalysisGrid.from_file(_pts_file, 0)
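For context, a .pts file is plain text with one sensor per line (position x y z followed by a direction vector), and the second argument tells from_file which line to start reading from. A minimal stand-alone parser sketch in plain Python (not the Honeybee implementation) to illustrate the format:

```python
def parse_pts(lines, start_line=0):
    """Parse sensor points from .pts lines, starting at `start_line`.
    Each line holds x y z vx vy vz, whitespace separated."""
    points = []
    for line in lines[start_line:]:
        vals = [float(v) for v in line.split()]
        points.append({'location': tuple(vals[:3]),
                       'direction': tuple(vals[3:6])})
    return points

# Two sensors at desk height, both facing straight up.
pts = parse_pts(['0 0 0.8 0 0 1', '1 0 0.8 0 0 1'])
print(pts[1]['location'])  # (1.0, 0.0, 0.8)
```

Passing an explicit integer 0 sidesteps the IronPython overload-resolution error quoted above, since the ambiguous int() conversion is never triggered.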

Yes, this fixed it, thank you!

hi @mostapha

Thanks a lot!
The .ill and .pts files are obviously flattened. I didn’t manage to re-split/re-organize the results with the same tree structure with multiple analysisgrids, is there a way to retrieve that from the recipe? I wonder where the gridnames are stored.

  • I didn’t manage to use AnalysisGrid.name
  • I was not able to use start_line and end_line correctly to split the results into consecutive domains, given that I know the length of the results I am trying to obtain.
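In case it helps, if the per-grid point counts are known, the flattened results can be split back with plain slicing; a small sketch (plain Python, independent of the API):

```python
def split_by_grid(flat_results, grid_sizes):
    """Split a flat list of per-point results back into one list per
    analysis grid, given each grid's point count in order. The sizes
    must add up to the total number of flattened results."""
    if len(flat_results) != sum(grid_sizes):
        raise ValueError('grid sizes do not match the result count.')
    grids, start = [], 0
    for size in grid_sizes:
        grids.append(flat_results[start:start + size])
        start += size
    return grids

# Two grids of 2 and 4 points, recovered from a flat list of 6 values.
print(split_by_grid([1, 2, 3, 4, 5, 6], [2, 4]))  # [[1, 2], [3, 4, 5, 6]]
```

The same start/size pairs are what you would pass as consecutive start_line/end_line domains when reading the file directly.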