Meta Reality Labs is focused on pushing the boundaries of Augmented Reality scene generation, and this year I had the opportunity to contribute my skills and grow in web development, 3D graphics tooling, and app construction.
To accelerate AR research, we needed to think bigger.
Aria sensors capture your environment, and machine learning processes what they see.
Because VR and AR are difficult fields in which to secure research funding, I was tasked with improving the website to broaden the project's reach. This included adding email integrations so users could engage with downloads, and building a fully customized blog on top of existing Facebook & Meta infrastructure.
The tooling in place was difficult to use, so with guidance from PhD researchers I went to work updating various components: scaling, translation, rotation, and point cloud colorization.
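To make that concrete, here is a minimal sketch of what those transform controls amount to, assuming a three.js-based viewer (the function name and parameters are illustrative, not the tool's actual internals):

```ts
import * as THREE from 'three';

// Hypothetical helper: apply the transforms a user sets in the UI
// to a point cloud node in the scene graph.
function applyTransforms(
  cloud: THREE.Points,
  scale: number,
  translation: THREE.Vector3,
  rotation: THREE.Euler,
): void {
  cloud.scale.setScalar(scale);       // uniform scaling
  cloud.position.copy(translation);   // translation in world units
  cloud.rotation.copy(rotation);      // rotation about the local axes
  cloud.updateMatrixWorld(true);      // propagate changes to the render matrix
}
```

Exposing scale, translation, and rotation as direct controls, rather than raw matrix edits, is the kind of small change that makes a research tool friendlier.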
Project Aria is a vast ecosystem of volunteer testers and layers of researchers, using machine learning to understand our world. The tooling, however, had a cumbersome interface.
So my first task was to understand the tool and provide guidance on adding colorization to point cloud visuals, making scenes easier to read. While I improved the bounding box experience, we ultimately decided on ML-assisted silhouettes to make event capture even more intuitive.
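As an illustration of the kind of colorization that helps, here is a sketch (hypothetical, not the production code) that maps each point's height to a hue so floors, furniture, and ceilings separate visually:

```ts
import * as THREE from 'three';

// Hypothetical sketch: colorize a point cloud by height (y) so scene
// structure is easier to read at a glance.
function colorizeByHeight(geometry: THREE.BufferGeometry): void {
  const pos = geometry.getAttribute('position') as THREE.BufferAttribute;
  geometry.computeBoundingBox();
  const { min, max } = geometry.boundingBox!;
  const colors = new Float32Array(pos.count * 3);
  const color = new THREE.Color();
  for (let i = 0; i < pos.count; i++) {
    // Normalize height into [0, 1], then map onto a blue-to-red hue ramp.
    const t = (pos.getY(i) - min.y) / (max.y - min.y || 1);
    color.setHSL(0.7 * (1 - t), 1.0, 0.5);
    colors.set([color.r, color.g, color.b], i * 3);
  }
  geometry.setAttribute('color', new THREE.BufferAttribute(colors, 3));
}
```

The cloud then just needs a material that reads per-vertex colors, e.g. `new THREE.PointsMaterial({ size: 0.01, vertexColors: true })`.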
The datasets I contributed to are all open source, but driving engagement is always a challenge. So we built a standalone app, Aria Studio, to manage and organize recordings streamed directly from Aria headsets.
This required a custom React app with secure login, local storage, API customization, hardware integration with the headsets, and UI overhauls to make the experience easier for users.
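One small piece of that, sketched under assumptions (the storage key and hook name are illustrative, not Aria Studio's actual code): persisting a session token locally so users aren't forced to log in on every visit.

```tsx
import { useState, useEffect } from 'react';

// Hypothetical hook: keep a session token in localStorage and in sync
// with React state; setting the token to null logs the user out.
function useStoredToken(key = 'aria-studio/session') {
  const [token, setToken] = useState<string | null>(
    () => window.localStorage.getItem(key),
  );

  useEffect(() => {
    // Mirror state changes into localStorage; clear on logout.
    if (token === null) window.localStorage.removeItem(key);
    else window.localStorage.setItem(key, token);
  }, [key, token]);

  return [token, setToken] as const;
}
```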
When group projects move quickly, user experience can fall through the cracks. I take a dedicated approach to making sure important information is always presentable and responsive on mobile.