I am working with a very heavy file and I'm trying to write some of the intermediate results to disk so I can split the script. To do so I have tried both the DumpHBObjects and Object-to-string components of the new Ladybug Tools. Ideally, I need to write some Discontinuous Data Collections (well, a few thousand of them) to a file and then read them back with the same format/structure so I can pass them down the workflow.
The options mentioned above don't seem to work with Data Collections. Is there a specific way to do this, or is it a bug?
The Dump component does write the JSON files, but they are not read back by the Load one.
Data Collections can be serialized to/from a dictionary or JSON, but the Honeybee serialization components are only meant for serializing Honeybee objects. You can do the serialization to JSON with the following code:
import json

# x is the list of data collections; a is the path to write the JSON file
a = 'C:/ladybug/data_collection.json'
obj_dict = [data.to_dict() for data in x]
with open(a, 'w') as fp:
    json.dump(obj_dict, fp)
And here's how you can deserialize it back:
from ladybug.datacollection import HourlyDiscontinuousCollection
import json

# x is the path to the JSON file written above
with open(x) as json_file:
    all_data = json.load(json_file)
a = [HourlyDiscontinuousCollection.from_dict(data) for data in all_data]
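Outside of Grasshopper, the same round trip can be checked with plain dictionaries standing in for the collections. This is just a minimal sketch: the stand-in dictionaries and the temp-file path are assumptions, and in real use the `to_dict()`/`from_dict()` calls from ladybug would slot in where the comments indicate.

```python
import json
import os
import tempfile

# Stand-ins for data.to_dict() output; real use would build this list
# with [data.to_dict() for data in x] on ladybug data collections
obj_dict = [{'type': 'HourlyDiscontinuous', 'values': [1, 2, 3]},
            {'type': 'HourlyDiscontinuous', 'values': [4, 5, 6]}]

# Write the list of dictionaries to a JSON file
path = os.path.join(tempfile.gettempdir(), 'data_collection.json')
with open(path, 'w') as fp:
    json.dump(obj_dict, fp)

# Read it back; real use would then map
# HourlyDiscontinuousCollection.from_dict over the result
with open(path) as json_file:
    all_data = json.load(json_file)
```

Because JSON only stores lists, dictionaries, and primitives, whatever comes back from `json.load` compares equal to what went in, which is why the dict round trip is a safe intermediate format for the collections.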
ghenv.Component.Name = 'LB Dump Data'
ghenv.Component.NickName = 'DumpData'

import json

if run_ > 0:
    obj_dict = [data.to_dict() for data in _data]
    a = _filepath
    with open(a, 'w') as fp:
        json.dump(obj_dict, fp, ensure_ascii=False, indent=4,
                  sort_keys=True, separators=(',', ': '))
    # indent: use 4 spaces to indent each line
    # sort_keys: sort the keys of the dictionaries
    # separators: prevent Python from adding trailing whitespace
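For symmetry, a matching load component could look something like this. It is only a sketch: the "LB Load Data" name and the idea of mirroring the `run_`/`_filepath` inputs are my assumptions, not an existing component, and the ladybug rebuild step is shown as a comment so the function stays self-contained.

```python
import json

def load_data(filepath):
    """Read back the list of dictionaries written by the dump component."""
    with open(filepath) as json_file:
        all_data = json.load(json_file)
    # Inside Grasshopper, rebuild the collections with:
    #     from ladybug.datacollection import HourlyDiscontinuousCollection
    #     a = [HourlyDiscontinuousCollection.from_dict(d) for d in all_data]
    return all_data
```

Wiring `load_data(_filepath)` into a GHPython component guarded by the same `run_ > 0` check would complete the round trip started by the dump component above.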