Suga’
2020-2021
An immersive experience that features live dance performance as volumetric video in a social virtual reality space.
A collaboration with Valencia James, Thomas Wester, Simon Boas, Marin Vesely, Sandrine Malary, Carlos Johns-Davila, and Terri Ayanna Wright
Suga’ has been included in the Gray Area Festival, SIGGRAPH's Art Gallery, and Sundance's New Frontier.
About Suga’
Suga’ developed out of the Volumetric Performance Toolbox in the context of Eyebeam's Rapid Response program. The Volumetric Performance Toolbox set out to provide accessible hardware and tools that let performers project themselves into shared virtual spaces in Mozilla Hubs.
My involvement in Suga’ included creating meshes and point clouds for the piece and contributing code to the Volumetric Performance Toolbox.
The majority of my time on Suga’ went into processing a massive 20 GB LIDAR scan of the Annaberg Plantation by CyArk into a mesh or point cloud small enough to be viewed on the web across a range of devices. The biggest challenge was finding a toolset that could handle the data without running out of memory. For point cloud reduction I would usually reach for a tool like MeshLab, but here that process was prone to crashes. Ultimately, working with PDAL (Point Data Abstraction Library) and Python, I was able to reduce the point cloud in a way that didn't overwhelm my hardware.
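As a rough sketch of that kind of reduction, the snippet below builds a PDAL pipeline in Python that reads a LAZ file, thins it with a decimation filter, and writes a much smaller PLY. The file names and the decimation step are placeholders, not the values used for the actual Annaberg scan.

```python
import json
import pdal

# Placeholder paths and parameters; the real scan and settings differed.
pipeline_def = {
    "pipeline": [
        "annaberg_scan.laz",                        # reader inferred from the extension
        {"type": "filters.decimation", "step": 20},  # keep every 20th point
        {"type": "writers.ply", "filename": "annaberg_reduced.ply"},
    ]
}

# The PDAL Python bindings take the pipeline definition as a JSON string;
# execute() returns the number of points processed.
pipeline = pdal.Pipeline(json.dumps(pipeline_def))
count = pipeline.execute()
print(f"points written: {count}")
```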
From there I was able to make aesthetic decisions about abstraction and meshing in Blender and MeshLab. Since I was still working with a large amount of data in Blender, I also built solid baking workflows to transfer detail from the high-density meshes onto simplified, textured models.
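The sketch below shows the general shape of such a bake in Blender's Python API: a dense scan mesh is baked down onto a simplified, UV-unwrapped copy as a normal map. The object names, image size, and cage extrusion are illustrative placeholders, and the low-poly object is assumed to already have a material.

```python
import bpy

# Hypothetical object names and settings; not the exact setup used for Suga'.
scene = bpy.context.scene
scene.render.engine = 'CYCLES'  # baking runs through the Cycles engine

high = bpy.data.objects["sugarmill_high"]  # dense, scan-derived mesh
low = bpy.data.objects["sugarmill_low"]    # simplified, UV-unwrapped mesh

# The bake target is an image texture node on the low-poly material,
# which must be the active node when the bake runs.
image = bpy.data.images.new("sugarmill_normal", width=4096, height=4096)
mat = low.active_material  # assumes the low-poly object already has a material
mat.use_nodes = True
tex_node = mat.node_tree.nodes.new("ShaderNodeTexImage")
tex_node.image = image
mat.node_tree.nodes.active = tex_node

# Select the high-poly source, make the low-poly mesh active,
# and bake selected-to-active.
for ob in bpy.context.selected_objects:
    ob.select_set(False)
high.select_set(True)
low.select_set(True)
bpy.context.view_layer.objects.active = low
bpy.ops.object.bake(type='NORMAL', use_selected_to_active=True, cage_extrusion=0.02)

# Save the baked map next to the .blend file.
image.filepath_raw = "//sugarmill_normal.png"
image.file_format = 'PNG'
image.save()
```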
Here are some of those models in the context of the piece, with scenes designed by Marin Vesely.
Valencia inside the baked-mesh version of the sugar mill from LIDAR
Point cloud version of the sugar mill from LIDAR
Sugar mill in landscape made from aerial image dataset
Forest of Commemoration point cloud from LIDAR
On the code side, I helped develop an interface that masked and encoded streams from the Azure Kinect so they could be rendered in Hubs. I worked closely with Thomas Wester to bridge the gap between Hubs and the Kinect input.
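This is not the toolbox's actual encoder; the snippet below is only a minimal NumPy/OpenCV illustration of the general idea of masking a depth frame to the performance volume and packing color and depth side by side into a single video frame that a shader can later split and reproject. The depth range and frame layout are assumptions for the sketch.

```python
import numpy as np
import cv2

# Illustrative only: the frame layout and depth range are assumptions,
# not the Volumetric Performance Toolbox's actual encoding.
def pack_color_depth(color_bgr: np.ndarray, depth_mm: np.ndarray,
                     near_mm: float = 500.0, far_mm: float = 3000.0) -> np.ndarray:
    """Mask a depth frame to the performance volume and pack masked color
    and normalized depth side by side in one 8-bit frame."""
    # Keep only pixels inside the near/far performance volume.
    mask = (depth_mm > near_mm) & (depth_mm < far_mm)

    # Normalize depth into 0-255 within that range so it survives
    # ordinary video encoding reasonably well.
    depth_f = depth_mm.astype(np.float32)
    depth_norm = np.clip((depth_f - near_mm) / (far_mm - near_mm), 0.0, 1.0)
    depth_u8 = (depth_norm * 255).astype(np.uint8)
    depth_u8[~mask] = 0

    masked_color = color_bgr.copy()
    masked_color[~mask] = 0  # black out everything outside the volume

    # Color on the left, depth (as grayscale) on the right; a shader on the
    # receiving end can split the frame and reproject depth back into 3D.
    depth_bgr = cv2.cvtColor(depth_u8, cv2.COLOR_GRAY2BGR)
    return np.hstack([masked_color, depth_bgr])
```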