Google's Project Tango is the latest example of increasingly sophisticated sensors making their way to portable computing platforms such as phones, tablets and virtual reality goggles.
This mobile device includes several sensors:
- An orientation sensor, likely providing yaw/pitch/roll, angular velocity and linear acceleration. These have been fairly standard in modern mobile devices.
- Cameras for both the visible and IR spectrum. These are color cameras that, aside from the usual RGB image, also have pixels that sense near-IR wavelengths.
- A depth sensor, providing real-time mapping of the distance (i.e. depth) of various points in space – such as walls, people, hands – from the sensor. As an aside, there are several ways to sense depth: structured light (such as the Kinect), which projects a seemingly random pattern of dots into space and analyzes their reflections; time of flight (such as SoftKinetic), which measures the round-trip time it takes for light to return to the sensor; and single- or multi-camera solutions that use image processing to estimate depth.
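To make the time-of-flight idea concrete, here is a minimal sketch of the underlying arithmetic. This is purely illustrative, not Tango's or SoftKinetic's actual pipeline: the sensor times how long emitted light takes to bounce back, and distance falls out of the speed of light.

```python
# Illustrative sketch of time-of-flight depth sensing (not any vendor's
# actual implementation): convert a measured round-trip light time into
# a distance from the sensor.

SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def tof_depth(round_trip_seconds: float) -> float:
    """Depth in meters from a round-trip time measurement."""
    # The light travels to the surface and back, so halve the path length.
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# A 10-nanosecond round trip corresponds to roughly 1.5 meters of depth.
depth_m = tof_depth(10e-9)
print(f"{depth_m:.3f} m")
```

The tiny times involved (nanoseconds per meter) are part of why this kind of sensing needs dedicated hardware rather than general-purpose timing.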
Technically, what is unique about the new platform is that it has dedicated power-efficient vision processors that allow it to continuously analyze, fuse and decode the information from the various sensors. This is news because previous processors drained the mobile battery too fast for continuous use. But the real reason to be excited about Project Tango is that it provides both extra motivation and a hardware platform for developers, so that they can build new 3D applications and raise awareness of the power of sensors.
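As a flavor of what "fusing" sensor information means in practice, here is a toy complementary filter, a classic lightweight fusion technique (my example, not anything Tango is documented to use): the gyroscope is trusted for short-term orientation changes while the accelerometer's gravity reading slowly corrects long-term drift.

```python
# A toy complementary filter for one orientation angle (pitch), assuming
# a gyro rate in rad/s and an accelerometer that senses gravity. This is
# a generic illustration of sensor fusion, not Tango's actual algorithm.
import math

def accel_to_pitch(ax: float, az: float) -> float:
    """Estimate pitch (radians) from the gravity direction."""
    return math.atan2(ax, az)

def fuse_step(pitch_prev: float, gyro_rate: float,
              accel_pitch: float, dt: float, alpha: float = 0.98) -> float:
    """One fusion step: gyro integration short-term, accel correction long-term."""
    return alpha * (pitch_prev + gyro_rate * dt) + (1.0 - alpha) * accel_pitch

# Simulate a stationary device with a wrong initial guess of 0.5 rad.
# The gyro reads zero; the accelerometer sees gravity straight down (z).
pitch = 0.5
for _ in range(200):
    pitch = fuse_step(pitch, gyro_rate=0.0,
                      accel_pitch=accel_to_pitch(0.0, 9.81), dt=0.01)
# The estimate decays toward the accelerometer's answer of 0 radians.
print(f"{pitch:.4f}")
```

Running this kind of loop on every sensor sample, continuously, is exactly the workload that a dedicated low-power vision/fusion processor makes practical on a battery.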
I’ve written many times about the importance of sensors in goggles as a way to turn “dumb goggles” into “smart goggles”, so I am a believer. It will be fun to see some of the new applications that come out of Project Tango, as well as how these types of sensors could be used in goggles in the future.