In May, Lucid announced that it was working with Hollywood camera maker RED to create a specific piece of camera hardware for RED’s upcoming Hydrogen One phone. With that announcement behind it, Lucid appears ready to take its technology into the broader smartphone market.
The company announced this week that it plans to scale its core 3D software technology to more mobile and smartphone devices that use a dual- or multi-camera setup—à la the larger iPhones, Samsung’s Galaxy phones, and most other smartphones released these days. By doing so, Lucid explained, users will gain access to its real-time 3D Fusion technology, which “mimics how the brain processes and learns.” That, in turn, will enable device manufacturers to leverage their camera systems to capture 3D images with richer depth information.
“We see this as a unique opportunity where our technology syncs with the acceleration of the industry as dual cameras move into many more devices,” Lucid CEO Han Jin said in a statement.
Lucid founders Adam Rowell (left) and Han Jin | Credit: Lucid
According to data cited by Lucid, 300 million smartphones with dual-cam systems were shipped in 2017. That number is expected to see 400 percent growth year over year, and we could see 50 percent market penetration by 2020.
The company began its research into replicating human vision capabilities with its LucidCam product, which launched last summer, though the technology itself has been in development for roughly three years. The LucidCam, a dual-camera system with a 180-degree field of view, was designed to give users a way to create immersive VR experiences that mimic seeing through someone else’s eyes.
Lucid explained that its pure software-based solution will give these dual-cam smartphones (as well as robots, drones, security cameras, and other dual- or multi-cam systems) human-like depth perception, which would eliminate the need for “expensive and space-consuming hardware” in those products. The 3D Fusion technology uses AI and machine learning along with historic data in the cloud to provide “added vision intelligence” and deliver more accurate 3D and depth perception.
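Lucid has not disclosed how 3D Fusion works internally, but the geometric baseline it builds on is textbook two-view stereo: a point seen by both cameras shifts by a disparity that is inversely proportional to its distance. The sketch below shows that classic relationship only; the focal length, baseline, and disparity values are illustrative assumptions, not figures from Lucid.

```python
# Classic rectified-stereo depth from disparity: Z = f * B / d.
# This is the textbook geometry that learning-based systems like
# Lucid's 3D Fusion augment with AI -- not Lucid's algorithm itself.

def depth_from_disparity(focal_px: float, baseline_m: float,
                         disparity_px: float) -> float:
    """Return depth in meters for a rectified stereo pair.

    focal_px:     focal length in pixels
    baseline_m:   distance between the two camera centers, in meters
    disparity_px: horizontal pixel shift of the same point between views
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

# Hypothetical phone-like dual camera: ~2800 px focal length,
# 10 mm baseline. A feature shifted 14 px between the two lenses
# sits about 2 m from the device.
z = depth_from_disparity(focal_px=2800, baseline_m=0.010, disparity_px=14.0)
print(f"estimated depth: {z:.2f} m")  # → estimated depth: 2.00 m
```

The formula also shows why the tiny baselines on phones limit hardware-only depth accuracy at range, and hence why Lucid pitches software inference on top of the raw stereo signal.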
“The way we as humans accurately perceive three dimensions and distances is not solely based on our two eyes but rather a combination of experience, learning and inference,” Jin said. “As chips and servers begin to approach the processing power of our brains, we can mimic this intelligence in software only, using AI and data on top of dual cameras.”
Lucid said it’s already working with “several” manufacturers across the mobile phone, camera, robotics, computer, and drone markets to integrate its software into their next-gen products.