A particularly complicated task when inserting actors recorded via three-dimensional full-body capture into new (computer-generated or real) scenes is realistically adapting the figure's lighting to the respective environment. Normally, the lighting situation of the original shot (including cast shadows) is effectively baked into the textures of the 3D model and cannot be adjusted afterwards to match the lighting of a new scene.
The new system from a team of Google researchers solves this task with a very special 3D capture rig consisting of 331 LED lights arranged spherically around the actor, plus 90 cameras with 12.4-megapixel resolution (4,112 × 3,008 pixels) capturing both RGB and IR (for depth information) at 60 frames per second. From these images, together with the reflectance and depth information, a three-dimensional model is reconstructed, including real textures, light reflections, and shadows.
Volumetric Capturing of a Person
Using this information, the moving figure can then be inserted photorealistically into new scenes in real time, with any lighting situation, including cast shadows, rendered realistically on the body and clothing. The LED lighting is used to record high-resolution reflectance models of the figure under two defined colour illumination conditions. A 10-second capture (600 frames) produces approximately 650 GB of data.
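The reported data volume can be roughly sanity-checked from the camera specs given above. This is a back-of-the-envelope sketch, assuming raw single-channel sensor readout at about 1 byte per pixel (the actual per-frame storage format is not stated in the source):

```python
# Rough sanity check of the ~650 GB figure for a 10-second capture.
# Assumption (not from the source): raw frames at ~1 byte per pixel.
NUM_CAMERAS = 90
FPS = 60
DURATION_S = 10
WIDTH, HEIGHT = 4112, 3008   # 12.4-megapixel sensors
BYTES_PER_PIXEL = 1          # assumed raw sensor readout

frames_total = NUM_CAMERAS * FPS * DURATION_S           # images across all cameras
bytes_total = frames_total * WIDTH * HEIGHT * BYTES_PER_PIXEL
gigabytes = bytes_total / 1e9

print(f"{frames_total} images, ~{gigabytes:.0f} GB")
```

Under these assumptions the estimate comes out in the high 600s of gigabytes, in the same ballpark as the reported 650 GB, which suggests the figure refers to roughly raw-resolution frame data.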
Until now, several methods existed to realistically relight faces (e.g. via deep learning: www.slashcam.de/news/single/Die-Lichtsetzung-von-Portraetphotos-nachtraeglich-aen-15164.html) or buildings (e.g. www.slashcam.de/news/single/Lichtstimmung-in-Drohnen-Videos-von-Bauwerken-nach-15165.html), but none for high-resolution volumetric captures of entire bodies.