VFX has brought the unreal world ever closer to reality. Now a novel VFX technique promises still greater accuracy and verisimilitude, generating high-fidelity 3D representations from a single picture to drive animations in real time.
Avatar may have introduced the layman to motion-capture technology, but the process demands painstaking effort and expense. A new breed of artificial intelligence, combined with machine-learning tools, is set to change that, taking us one step closer to a technological utopia. That combination has recently produced a new facial-tracking approach that will not only speed up production but also deliver even greater fidelity.
French software developer Dynamixyz and Pixelgun Studios, a Californian high-end 3D scanning agency, have joined hands to bring this technology to the fore.
The Dynamixyz/Pixelgun solution scanned textured data covering 80 expressions, using 63 cameras trained on the head for expression capture and 145 cameras trained on the subject for body capture. The data is then extracted automatically and applied to a digital model, eliminating the need for manual annotation by an animator.
“With these images generated as if they were taken with a Head-Mounted Camera and geometry information, we were able to build a tracking profile as if it had been annotated manually,” explained Vincent Barrielle, R&D engineer. “It brings higher precision as it has been generated with very high-quality scans. It also gives the opportunity to have a high volume of expressions.”
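The workflow Barrielle describes, synthetic head-mounted-camera-style views whose expression parameters are known by construction, can be sketched in a few lines. The sketch below is a hypothetical simplification (Dynamixyz has not published its method): a fixed linear projection stands in for the real renderer, and a least-squares fit stands in for the actual tracker; all names and dimensions besides the 80 expressions are invented for illustration.

```python
# Illustrative sketch only: Dynamixyz's actual pipeline is unpublished, and
# every name, dimension (besides the 80 expressions), and the linear model
# here is a hypothetical stand-in. The idea from the quote above: pair
# synthetically rendered HMC-style frames with their known expression
# parameters, then fit the tracking profile on those pairs instead of on
# hand-annotated footage.
import numpy as np

rng = np.random.default_rng(0)

N_EXPRESSIONS = 80   # expressions scanned per subject (from the article)
N_PARAMS = 32        # hypothetical expression-parameter (blendshape) count

# Expression parameters are known exactly for every scan, so the training
# set needs no manual annotation.
expr_params = rng.uniform(0.0, 1.0, size=(N_EXPRESSIONS, N_PARAMS))

# Stand-in for rendering: each synthetic frame is reduced to a feature
# vector via a fixed projection that the tracker never sees directly.
# A real renderer would produce images; this keeps the sketch runnable.
render = rng.normal(size=(N_PARAMS, N_PARAMS))
features = expr_params @ render

# "Tracking profile": a least-squares regressor from frame features back
# to expression parameters, trained purely on the synthetic pairs.
profile, *_ = np.linalg.lstsq(features, expr_params, rcond=None)

# Runtime tracking: a new frame's features map straight to expression
# parameters that could drive a digital model.
test_expr = rng.uniform(0.0, 1.0, size=(1, N_PARAMS))
recovered = (test_expr @ render) @ profile

print(np.allclose(recovered, test_expr))  # → True
```

The point of the sketch is the data flow, not the model: because every synthetic frame carries its ground-truth parameters for free, the profile can be trained on a far higher volume of expressions than manual annotation would allow.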
“We think other companies may have already developed such workflows in-house, as a tailor-made solution, not as a packaged software,” says Nicolas Stoiber, head of R&D and CTO of Dynamixyz. “We make technologies accessible and usable for the whole industry.” The companies plan to make the accuracy-driven technology available next year.