Back in 2014, even before Lucid was established as a company, co-founders Han Jin (CEO) and Adam Rowell (CTO) set themselves the goal of improving robots' vision and sense of their environment, treating a dual camera as the eyes. What really launched the company was the LucidCam, a 180º 3D VR consumer camera they pitched on Indiegogo in 2015. The compact camera has no depth sensor; 3D feature extraction is done purely in software by a well-trained machine-learning algorithm, which the company says delivers results on par with depth-sensor-equipped devices but without the added cost.
At Mobile World Congress Shanghai, Lucid announced plans to scale its core AI-enhanced 3D software into dual- and multi-camera mobile and smart devices, including smartphones, drones, smart speakers and robots.
Software-based 3D feature extraction isn't new, of course, and most dual-camera smartphones, drones and robots also carry a depth sensor for good measure. So what makes Lucid's solution compelling enough for OEMs to license? We asked Lucid's CEO in a phone interview.
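To make "software-based 3D feature extraction" concrete, here is a minimal sketch of the classical, non-learned baseline: block matching along the rows of a rectified stereo pair, the textbook way to recover per-pixel disparity (and hence depth) from two cameras with no depth sensor. This is not Lucid's pipeline, whose trained model replaces exactly this kind of hand-tuned matching; the image sizes, window size and disparity range below are illustrative assumptions.

```python
import numpy as np

def box_sum(a, k):
    """Sum of each k-by-k window (zero-padded, 'same' output size)."""
    p = k // 2
    ap = np.pad(a, p)
    c = np.pad(ap, ((1, 0), (1, 0))).cumsum(axis=0).cumsum(axis=1)
    return c[k:, k:] - c[:-k, k:] - c[k:, :-k] + c[:-k, :-k]

def block_match(left, right, max_disp=16, k=7):
    """Winner-take-all SAD block matching on a rectified stereo pair."""
    h, w = left.shape
    cost = np.full((max_disp, h, w), np.inf, dtype=np.float32)
    for d in range(max_disp):
        # A pixel at column x in the left view matches column x - d
        # in the right view; aggregate absolute differences per window.
        diff = np.abs(left[:, d:] - right[:, :w - d])
        cost[d, :, d:] = box_sum(diff, k)
    return cost.argmin(axis=0)  # per-pixel disparity in pixels

# Synthetic rectified pair: random texture shifted by a known parallax.
rng = np.random.default_rng(0)
true_disp = 6
left = rng.random((60, 80)).astype(np.float32)
right = np.empty_like(left)
right[:, :-true_disp] = left[:, true_disp:]
right[:, -true_disp:] = rng.random((60, true_disp)).astype(np.float32)

disp = block_match(left, right)
# Depth then follows from triangulation: Z = f * B / d, with focal
# length f and camera baseline B (device-specific, not LucidCam specs).
print(int(np.median(disp[10:-10, 20:-20])))  # → 6
```

A learned approach keeps the same geometry (matching along epipolar lines, then triangulating) but lets a network decide what constitutes a good match, which is where the claimed parity with depth-sensor devices comes from.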
Jin first gave us a brief market overview, noting that although dual cameras have been around for years, it is only over the last few years that such devices have benefited from greater GPU power and connectivity.
"Back in 2012 the 3D hype started, driven by more powerful CPUs and GPUs for advanced computer vision. But now everything is more connected; these are no longer isolated devices whose content you have to export to a microSD card. 3D cameras are connected through apps, smartphones, the internet, yet stereoscopic data has not been looked at," explains the CEO.