I wrote a couple of apps for my Pixel phone that record data from the device's many sensors, using ARCore and other computer vision algorithms.
I wrote tools to import the datasets into Houdini and manipulate them there (this can easily be extended to other 2D and 3D visualization software).
Once I parse the data in Houdini, I start exploring the patterns I see and imagining ways to create and visualize the procedural geometry found in these datasets.
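As a rough illustration of what those import tools look like, here is a minimal Houdini Python SOP sketch that reads the recorded camera trajectory and rebuilds it as a polyline; the file name and CSV columns (t, px, py, pz) are placeholders for illustration, not the actual recording format.

```python
# Python SOP: rebuild the recorded camera trajectory as a polyline.
# The file name and the CSV columns (t, px, py, pz) are placeholders,
# not the actual recording format.
import csv
import hou

node = hou.pwd()
geo = node.geometry()

geo.addAttrib(hou.attribType.Point, "time", 0.0)

poly = geo.createPolygon()
poly.setIsClosed(False)  # open curve, not a closed loop

with open(hou.expandString("$HIP/data/trajectory.csv")) as f:
    for row in csv.DictReader(f):
        pt = geo.createPoint()
        pt.setPosition((float(row["px"]), float(row["py"]), float(row["pz"])))
        pt.setAttribValue("time", float(row["t"]))
        poly.addVertex(pt)
```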
The mobile device's 3D trajectory and point cloud as computed by ARCore. Points carry unique IDs and confidence values.
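The point cloud can be brought in the same way, with each ARCore point becoming a Houdini point carrying its id and confidence as attributes; again, the file name and column names below are placeholders.

```python
# Python SOP: import the ARCore point cloud with per-point id and confidence.
# File name and columns (id, confidence, px, py, pz) are assumptions.
import csv
import hou

node = hou.pwd()
geo = node.geometry()

geo.addAttrib(hou.attribType.Point, "id", 0)
geo.addAttrib(hou.attribType.Point, "confidence", 0.0)

with open(hou.expandString("$HIP/data/pointcloud.csv")) as f:
    for row in csv.DictReader(f):
        pt = geo.createPoint()
        pt.setPosition((float(row["px"]), float(row["py"]), float(row["pz"])))
        pt.setAttribValue("id", int(row["id"]))
        pt.setAttribValue("confidence", float(row["confidence"]))
```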
Detected horizontal and vertical planes and their boundary polygons are converted to primitives.
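A plane boundary maps naturally onto a closed polygon primitive, with the plane type stored as a primitive attribute. A minimal sketch, assuming the planes were exported as a JSON list, each with a type and a list of boundary vertices:

```python
# Python SOP: convert detected plane boundaries into closed polygon primitives,
# tagging each with its plane type. The JSON layout (a list of planes, each
# with a "type" and a "boundary" list of [x, y, z] vertices) is assumed.
import json
import hou

node = hou.pwd()
geo = node.geometry()

geo.addAttrib(hou.attribType.Prim, "plane_type", "")

with open(hou.expandString("$HIP/data/planes.json")) as f:
    planes = json.load(f)

for plane in planes:
    poly = geo.createPolygon()  # closed polygon by default
    poly.setAttribValue("plane_type", plane["type"])
    for x, y, z in plane["boundary"]:
        pt = geo.createPoint()
        pt.setPosition((x, y, z))
        poly.addVertex(pt)
```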
Oriented points can be computed as I move around with the Pixel device. They represent small oriented surface patches estimated from the point cloud.
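In Houdini these become points with a normal (N) attribute, onto which small discs can be instanced with a Copy to Points SOP. A sketch, with an assumed CSV layout:

```python
# Python SOP: import oriented points as points with a normal (N) attribute,
# ready to instance small discs on with a Copy to Points SOP.
# Columns (px, py, pz, nx, ny, nz) are an assumed layout.
import csv
import hou

node = hou.pwd()
geo = node.geometry()

geo.addAttrib(hou.attribType.Point, "N", (0.0, 0.0, 1.0))

with open(hou.expandString("$HIP/data/oriented_points.csv")) as f:
    for row in csv.DictReader(f):
        pt = geo.createPoint()
        pt.setPosition((float(row["px"]), float(row["py"]), float(row["pz"])))
        pt.setAttribValue("N", (float(row["nx"]), float(row["ny"]), float(row["nz"])))
```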
Light estimation and pixel intensity values are also recorded along the trajectory.
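These per-frame scalars can simply be attached as point attributes on the trajectory curve by timestamp. A sketch, assuming the trajectory points carry the time attribute from the import sketch above and the light samples were exported as (t, intensity) rows:

```python
# Python SOP wired after the trajectory import: attach the light-estimation
# sample at or just after each point's timestamp as a "light" point attribute.
# File name and columns (t, intensity) are assumptions; samples are assumed
# to be sorted by timestamp.
import bisect
import csv
import hou

node = hou.pwd()
geo = node.geometry()

samples = []
with open(hou.expandString("$HIP/data/light.csv")) as f:
    for row in csv.DictReader(f):
        samples.append((float(row["t"]), float(row["intensity"])))
times = [t for t, _ in samples]

geo.addAttrib(hou.attribType.Point, "light", 0.0)

for pt in geo.points():
    i = min(bisect.bisect_left(times, pt.attribValue("time")), len(samples) - 1)
    pt.setAttribValue("light", samples[i][1])
```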
Audio levels as captured by the device microphone can be plotted along the trajectory.
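A simple way to plot them is to offset a copy of the trajectory points vertically by the audio level at each point's timestamp, which traces a waveform along the path. A sketch under the same assumptions about file names and columns:

```python
# Python SOP wired after the trajectory import: offset each trajectory point
# upward by the audio level at its timestamp, tracing a simple waveform along
# the path. File name, columns (t, level) and the visual scale are assumptions.
import bisect
import csv
import hou

node = hou.pwd()
geo = node.geometry()

samples = []
with open(hou.expandString("$HIP/data/audio_levels.csv")) as f:
    for row in csv.DictReader(f):
        samples.append((float(row["t"]), float(row["level"])))
times = [t for t, _ in samples]

SCALE = 0.5  # arbitrary visual scale for the plot

for pt in geo.points():
    i = min(bisect.bisect_left(times, pt.attribValue("time")), len(samples) - 1)
    pt.setPosition(pt.position() + hou.Vector3(0.0, samples[i][1] * SCALE, 0.0))
```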
Words spoken while recording are transcribed with Google's Speech-to-Text cloud service. The start time and duration of each word make it easy to place the words along the trajectory.
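Each word can then become a point on the trajectory, positioned by interpolating the camera path at the word's start time, with the word itself stored as a string attribute. A sketch, again with assumed file names and columns:

```python
# Python SOP: one point per transcribed word, placed on the camera path at the
# word's start time, with the word and its duration stored as attributes.
# File names and columns (t, px, py, pz / word, start, duration) are assumed.
import bisect
import csv
import hou

node = hou.pwd()
geo = node.geometry()

traj = []  # (t, hou.Vector3), assumed sorted by t
with open(hou.expandString("$HIP/data/trajectory.csv")) as f:
    for row in csv.DictReader(f):
        traj.append((float(row["t"]),
                     hou.Vector3(float(row["px"]), float(row["py"]), float(row["pz"]))))
times = [t for t, _ in traj]

geo.addAttrib(hou.attribType.Point, "word", "")
geo.addAttrib(hou.attribType.Point, "duration", 0.0)

with open(hou.expandString("$HIP/data/transcript.csv")) as f:
    for row in csv.DictReader(f):
        t = float(row["start"])
        i = min(max(bisect.bisect_left(times, t), 1), len(traj) - 1)
        (t0, p0), (t1, p1) = traj[i - 1], traj[i]
        a = 0.0 if t1 == t0 else (t - t0) / (t1 - t0)
        pt = geo.createPoint()
        pt.setPosition(p0 + (p1 - p0) * a)
        pt.setAttribValue("word", row["word"])
        pt.setAttribValue("duration", float(row["duration"]))
```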
I run object recognition on the images captured as I move around, extracting object labels, OCR'd words, brand logos, faces, emotions, and pose skeletons, and placing them in 3D space along the trajectory.
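One simple way to place a 2D detection in 3D is to push it out along the camera's forward direction at the frame where it was detected. A standalone sketch of that idea, assuming each detection row carries a timestamp and a label, the trajectory rows carry position and orientation as a quaternion, and using an arbitrary fixed depth:

```python
# Standalone sketch: place each image detection (a label at timestamp t) in 3D
# by pushing it out along the camera's forward direction at that frame.
# File names, CSV columns, and the fixed depth are assumptions.
import bisect
import csv

DEPTH = 1.0  # assumed distance in front of the camera, in meters

def rotate_forward(qx, qy, qz, qw):
    # Rotate the camera-space forward vector (0, 0, -1) by the unit quaternion.
    x = -2.0 * (qx * qz + qw * qy)
    y = -2.0 * (qy * qz - qw * qx)
    z = -(1.0 - 2.0 * (qx * qx + qy * qy))
    return (x, y, z)

poses = []  # (t, position, forward), assumed sorted by t
with open("trajectory.csv") as f:
    for row in csv.DictReader(f):
        pos = (float(row["px"]), float(row["py"]), float(row["pz"]))
        fwd = rotate_forward(float(row["qx"]), float(row["qy"]),
                             float(row["qz"]), float(row["qw"]))
        poses.append((float(row["t"]), pos, fwd))
times = [p[0] for p in poses]

with open("detections.csv") as f:
    for row in csv.DictReader(f):
        i = min(bisect.bisect_left(times, float(row["t"])), len(poses) - 1)
        _, pos, fwd = poses[i]
        world = tuple(pos[k] + DEPTH * fwd[k] for k in range(3))
        print(row["label"], world)
```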