A team from Facebook Reality Labs (FRL) has developed an approach to creating lifelike avatars that are animated in real time, aimed at improving social connection in VR environments.
The researchers will demonstrate the system, dubbed Codec Avatars, when they present their paper at SIGGRAPH 2019, held July 28 to August 1 in Los Angeles.
Shih-En Wei, lead author of the paper and a research scientist at Facebook, said: "Our work demonstrates that it is possible to precisely animate photorealistic avatars from cameras closely mounted on a VR headset."
The system creates a detailed and accurate likeness of a user by precisely tracking the user's facial expressions with headset-mounted cameras.
A prototype "training" headset carries the cameras of the regular tracking headset, used for real-time animation, as well as additional cameras at more accommodating positions for ideal face tracking. An artificial-intelligence technique based on Generative Adversarial Networks (GANs) performs consistent multi-view image style translation, automatically converting the infrared images from the headset-mounted cameras (HMCs) into images that look like a rendered avatar while preserving the person's facial expression.
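The adversarial style-translation idea can be illustrated with a deliberately tiny sketch. This is not the paper's multi-view architecture: here an "image" is a 16-value vector, the two domains differ only in brightness, the generator is a per-pixel affine map, and the discriminator is a logistic regression; all names and numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
dim = 16  # a 16-"pixel" image; the real system uses full camera frames

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-np.clip(x, -30.0, 30.0)))

# Hypothetical domains: headset IR "images" are bright-biased, rendered-avatar
# "images" are zero-centred. The generator must translate IR -> render style.
def sample_ir(n):
    return rng.normal(loc=2.0, size=(n, dim))

def sample_render(n):
    return rng.normal(loc=0.0, size=(n, dim))

# Generator: per-pixel affine map. Discriminator: logistic regression.
g_w, g_b = np.ones(dim), np.zeros(dim)
d_w, d_b = 0.01 * rng.normal(size=dim), 0.0

lr = 0.02
for _ in range(1500):
    real, src = sample_render(64), sample_ir(64)
    fake = src * g_w + g_b

    # Discriminator: ascend log D(real) + log(1 - D(fake)).
    p_real = sigmoid(real @ d_w + d_b)
    p_fake = sigmoid(fake @ d_w + d_b)
    d_w += lr * (real.T @ (1 - p_real) - fake.T @ p_fake) / 64
    d_b += lr * (np.mean(1 - p_real) - np.mean(p_fake))

    # Generator: non-saturating ascent on log D(fake) -- push fakes
    # toward whatever the discriminator currently scores as "real".
    p_fake = sigmoid(fake @ d_w + d_b)
    g_w += lr * np.mean((1 - p_fake)[:, None] * src * d_w[None, :], axis=0)
    g_b += lr * np.mean((1 - p_fake)[:, None] * d_w[None, :], axis=0)

# After training, translated IR images should sit near the render domain
# (source mean is 2.0; the render-domain mean is 0.0).
translated = sample_ir(256) * g_w + g_b
print(abs(float(translated.mean())))
```

The translated brightness drifts from the IR domain toward the render domain because the only signal the generator receives is the discriminator's verdict, which is the core of the adversarial setup the paper builds on.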
Wei said: "By comparing these converted images, using every pixel rather than just sparse facial features, with the renderings of the 3D avatar, we can precisely map between the images from the tracking headset and the state of the 3D avatar through differentiable rendering. After the mapping is established, we train a neural network to predict face parameters from a minimal set of camera images in real time."
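The differentiable-rendering step described in the quote can be sketched in miniature. As an assumption for illustration only, the renderer here is a linear blendshape-style model (image = mean face + basis × parameters) rather than the paper's learned avatar decoder; recovering the face state then amounts to descending a dense per-pixel L2 loss whose gradient flows analytically through the renderer.

```python
import numpy as np

rng = np.random.default_rng(0)
n_params, n_pixels = 5, 64  # toy sizes; real avatars are far richer

# Toy differentiable "renderer": the image is linear in the face-state
# parameters (a hypothetical basis standing in for the learned decoder).
basis = rng.normal(size=(n_pixels, n_params))
mean_face = rng.normal(size=n_pixels)

def render(params):
    return mean_face + basis @ params

# A domain-translated camera image of an unknown expression.
true_params = rng.normal(size=n_params)
target = render(true_params)

# Recover the avatar state by gradient descent on the per-pixel L2 loss;
# the gradient of the loss w.r.t. the parameters is basis^T @ residual.
params = np.zeros(n_params)
for _ in range(200):
    residual = render(params) - target   # error at every pixel
    params -= 0.5 * basis.T @ residual / n_pixels

print(np.allclose(params, true_params, atol=1e-4))  # → True
```

Because every pixel contributes to the residual, the optimization is well constrained even where sparse facial landmarks would be ambiguous, which is the point Wei makes about comparing whole images rather than sparse features.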