Pollination for Exploring Multi-Dimensional Data Release



I’m very pleased to announce the first release of Pollination as part of Honeybee development. Pollination is the first in a series of web-based developments for Ladybug + Honeybee that explore possibilities for better data visualization. Pollination<Vis> in particular is designed for exploring multi-dimensional data.

The idea of Pollination was born at the AEC-TECH Hackathon last year. I want to thank all the team members for the great original work. Many thanks also to Kai Chang for developing the Parallel Coordinates library for D3.js; creating Pollination would have been very hard without his work.

Here is a very short video that shows how you can interact with the data and load your own files. As you can see in the video, I added two components to Honeybee that help you generate a structured .csv file from Grasshopper. You can also generate the files manually.
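For reference, a manually built file is just a plain CSV with one column per dimension and one row per design option. The column names and values below are made up for illustration, not the exact schema the Honeybee components produce; a minimal sketch in Python:

```python
import csv

# Hypothetical results: each row is one design option,
# each column one dimension to plot. All names/values are made up.
rows = [
    {"option": 1, "daylight_factor": 2.4, "cooling_load": 18.2, "cost": 1050},
    {"option": 2, "daylight_factor": 3.1, "cooling_load": 21.7, "cost": 980},
    {"option": 3, "daylight_factor": 1.8, "cooling_load": 15.9, "cost": 1230},
]

with open("results.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=list(rows[0]))
    writer.writeheader()    # first line: one name per dimension
    writer.writerows(rows)  # one line per design option
```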

The data doesn’t need to be the results of Honeybee or Ladybug. Feel free to use Pollination for any other studies, including your multi-dimensional optimization cases!

OK! That’s all I have for now. Check the webpage here and the source code on GitHub.



PS: If you are interested in data visualization and multi-dimensional data, then I strongly recommend watching this presentation by Kai.


Great work!!


Great work Mostapha and team! I am currently working on something similar, but in Grasshopper, which I hope to release this summer as a GH plugin for multi-objective optimization problems using Octopus. The potential of visualizing data with a web-based app is greater, and I hope to combine what you have done here with my own work - it would be pretty cool! Attached is what I have been able to achieve so far:


Great work guys, can’t wait to try it.

Keep up the good work.


Great work Mostapha!

Can’t wait for this to be fully integrated with the Honeybee EnergyPlus components.


Very nice!!

Thanks Mostapha and Pollination team.



Great - as usual!


Hm, I missed this post somehow - this is great!

It also made me think about looking for tools to turn CSV data into JSON with some structure, like nested columns, sort of like this page: if you push a little plus mark, the column unfolds and expands.

I am currently volunteering for this project when I have time: https://github.com/CityOfNewYork/CROL-PDF

There is a discussion there about standardizing information extracted from PDFs into CSV, and eventually into a JSON schema, so that a large amount of information can be shared easily online and also visualized, allowing filtering and operations on a data set. The closest thing I know is http://openrefine.org/, but it is not an online app and not really a collaborative platform.

Anyhow, columns with nested columns showing images with tables that correspond to a set of parameters might be a good supplement to parallel coordinates. Honeybee and Ladybug now have quite a large collection of analyses, so a pipeline from Grasshopper to something like GitHub Pages, as you have it here, with a well-designed JSON schema and fold/expand columns, might be a great way to keep track. Just an idea; I don’t know how this could be achieved yet.
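A rough sketch of that CSV-to-nested-JSON idea: group flat rows under one column so each group can become a fold/expand column. The “analysis” grouping column and the row values here are hypothetical:

```python
import json
from collections import defaultdict

# Made-up flat rows, as they might come out of a CSV reader.
# The "analysis" column is the one we fold the others under.
flat = [
    {"analysis": "daylight", "option": "1", "value": "2.4"},
    {"analysis": "daylight", "option": "2", "value": "3.1"},
    {"analysis": "energy",   "option": "1", "value": "18.2"},
]

nested = defaultdict(list)
for row in flat:
    group = row.pop("analysis")  # remove the grouping key from the row
    nested[group].append(row)    # collect the remaining fields under it

print(json.dumps(nested, indent=2))  # one top-level key per analysis type
```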


Hi Yassin,

I read your research paper ‘Optimizing Creatively in Multi-objective Optimization’ and I find it really interesting. Being able to compare the forms as shown in the image from your research (attached) is really useful. Would you please be able to explain in more detail how you managed to get the different forms colored based on their performance?

Thanks so much for your help.



Hi Dan,

Sorry I didn’t get back to you earlier.

Basically, I extracted the maximum and minimum of whatever domain (e.g. daylight factor) I was interested in, then remapped the values of that domain to the default range (0 to 1). Following that, you simply input the new range into the color gradient, which translates 0 to green, as in the case above, and 1 to red. I have attached the script that I used for my thesis, which includes the other tools that were set up as part of it. Please let me know if you need any more clarification.
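Outside of Grasshopper, the same remap-then-color idea can be sketched in plain Python. The sample values are made up, and the linear green-to-red interpolation below is a simple stand-in for the Grasshopper color gradient component:

```python
def remap(values):
    """Remap a list of numbers to 0-1 (minimum -> 0, maximum -> 1)."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

def to_color(t):
    """Map a 0-1 value to an RGB tuple: 0 -> green, 1 -> red."""
    return (int(255 * t), int(255 * (1 - t)), 0)

daylight_factors = [1.8, 2.4, 3.1]    # made-up sample values
normalized = remap(daylight_factors)  # best/worst pinned to 0 and 1
colors = [to_color(t) for t in normalized]
```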



Tools documentation.gh (114 KB)


Hi Yassin, thanks for this - will have a look at your script!

Yes, I took the same approach - remapping values from 0 to 1 for all objectives. However, I haven’t worked out yet how to copy all the meshes into Rhino, so that is my next step. I am guessing that once they are all in the same range (i.e. 0 to 1) you can start adding/subtracting them to get your best-performing solution, or you could even weight the objectives based on your own criteria (e.g. daylight more important than cooling load). Hope this makes sense! :slight_smile:
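A rough sketch of that weighted-sum idea, once every objective is remapped to 0-1; the weights and normalized values below are made up, and the objectives are assumed to already be oriented so that higher is better:

```python
# Made-up normalized objectives (0-1, higher is better for both).
normalized = {
    "daylight": [0.0, 0.46, 1.0],
    "cooling":  [0.3, 1.0, 0.0],  # e.g. flipped so low load scores high
}
# Hypothetical weights: daylight twice as important as cooling.
weights = {"daylight": 2.0, "cooling": 1.0}

# One combined score per design option.
scores = [
    sum(weights[obj] * normalized[obj][i] for obj in weights)
    for i in range(3)
]
best = scores.index(max(scores))  # index of the best-performing option
```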





I didn’t copy anything into Rhino (or bake). I internalized all the meshes that were produced into a ‘mesh’ bucket. Using the indices - which are the same as the ID numbers - I can relate the quantitative data to the 3D mesh, i.e. mesh ID 1 extracts index 1, or row 1, of the respective values. The downside of that workflow is that you end up storing anywhere from 1000 to 3000 solutions, which makes your GH file significantly large.
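That index-based lookup can be sketched in a few lines of Python; the mesh names and result values are placeholders, not the actual thesis data:

```python
# Two parallel lists: the mesh at index i corresponds to row i of the
# results table, so the mesh ID doubles as the row index.
meshes = ["mesh_0", "mesh_1", "mesh_2"]  # stand-ins for internalized meshes
results = [
    {"daylight_factor": 2.4, "cooling_load": 18.2},
    {"daylight_factor": 3.1, "cooling_load": 21.7},
    {"daylight_factor": 1.8, "cooling_load": 15.9},
]

def data_for_mesh(mesh_id):
    """Return the result row whose index matches the mesh ID."""
    return results[mesh_id]
```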

Concerning weights: I didn’t dive into that part, but through ‘filtering’ you do somewhat subjectively add weight to solutions, e.g. solutions that perform in the 95th percentile for daylight, etc.