Sunday, October 26, 2014

EVERYTHING WE KNOW ABOUT THE RGB+Z ARRI MOTION SCENE CAMERA

In 2013, Arri debuted the Arri Motion Scene Camera at IBC. The system combines a traditional Arri Alexa Studio (mirror shutter) with a time-of-flight IR depth sensor, and is the result of a European research project called SCENE. The Motion Scene camera captures traditional RGB color data along with Z-axis depth data from the same entrance pupil (lens/sensor) with the same field of view.
If you want to see the indie version (DSLR and an Xbox Kinect sensor) and what can be done with the new hybrid RGB+Z format, check out The Camera of the Future with Specular.
Add a real-time position/rotation tracking system, like the Google Tango, and you have the makings of a futuristic virtual production camera system. More on that later; let’s jump right into the tech behind the Arri Motion Scene camera.

Time of Flight & Trifocal

Arri Motion Scene Camera at IBC 2012
Arriflex has teamed up with SCENE to create an extended Arri Alexa Studio that allows depth information to be recorded through the same lens as the RGB color data. To gather the depth data there are two key technologies.
1. Time-of-flight sensors, which send infrared (IR) light into the scene and record the reflections. This is similar to the technology in the PrimeSense depth sensor used in the first Microsoft Kinect.
Arri Motion Scene Camera with early time-of-flight IR sensors
In the early prototype you will see several small, exposed white breadboards with IR emitters and sensors. In the IBC prototype, a much cleaner version is mounted on the sides.
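For the curious, the physics behind time-of-flight is simple: the sensor times how long the IR light takes to bounce back, and half of that round trip, multiplied by the speed of light, is the distance. A minimal, purely illustrative sketch (ARRI has not published their actual pipeline):

```python
# The time-of-flight principle: depth = (speed of light * round-trip time) / 2.
C = 299_792_458.0  # speed of light, in m/s

def depth_from_round_trip(delta_t_ns):
    """Depth in metres, from the IR pulse's round-trip time given in nanoseconds."""
    return C * (delta_t_ns * 1e-9) / 2.0

# A reflection arriving 20 ns after emission puts the surface roughly 3 m away.
print(round(depth_from_round_trip(20.0), 2))  # 3.0
```

Real continuous-wave sensors like the Kinect's measure a phase shift rather than timing a single pulse, but the distance math comes out the same.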
2. Stereo or trifocal camera data. By using multiple cameras with identical focal lengths at known, fixed distances, additional depth data can be extracted via photogrammetry.
Arri Motion Scene Camera with an Alexa M and 4 trifocal cameras
You will see an additional Alexa M camera, using a lens of the same focal length, as the primary stereo pair. In addition, there are approximately four trifocal cameras set up arbitrarily, depending on the scene.
Alexa M used as a stereo pair with the Arri Motion Scene camera
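The math behind stereo depth is the classic disparity formula Z = f·B/d: the farther a point is, the less it shifts between the two views. A quick sketch, with made-up numbers (not ARRI's actual rig):

```python
# Standard stereo geometry for a calibrated, rectified pair (illustrative values).
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Z = f * B / d: depth from focal length (pixels), baseline (metres), disparity (pixels)."""
    return focal_px * baseline_m / disparity_px

# A point shifted 25 px between two cameras 0.5 m apart, at a 2000 px focal length:
print(depth_from_disparity(2000.0, 0.5, 25.0))  # 40.0 metres
```

This also shows why the baseline matters: a wider camera spacing gives more disparity per metre of depth, and therefore better depth resolution at a distance.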

Calibration

Arri Scene Camera calibration charts
Getting all of these different cameras to speak the same language is one of the challenges of this approach. First, all of the cameras must use a common coordinate system, which is accomplished by photographing a large checkerboard and calculating their offsets.
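SCENE hasn't published their calibration code, but a standard way to solve this step, once each camera has located the same checkerboard corners in 3D, is a rigid (Kabsch) alignment that recovers the rotation and translation between two cameras' coordinate systems:

```python
import numpy as np

def rigid_offset(points_a, points_b):
    """Kabsch alignment: find rotation R and translation t such that
    R @ a_i + t maps camera A's 3D points onto camera B's, in the least-squares sense."""
    a, b = np.asarray(points_a, float), np.asarray(points_b, float)
    ca, cb = a.mean(axis=0), b.mean(axis=0)   # centroids
    H = (a - ca).T @ (b - cb)                 # cross-covariance of centred points
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against a reflection solution
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cb - R @ ca
    return R, t
```

Fed with the corresponding checkerboard corner positions from each camera pair, this gives the offsets that put every camera in one shared coordinate system.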
Second, each camera has a different frame rate and refresh rate, so you’ll see an LED array on the slate that is used to measure each camera’s delay. That delay is then erased in post.
Traditional timecode slate with an LED array to calculate the delay of the different camera systems
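One common way to recover that delay from the LED slate (my sketch of the general technique, not necessarily SCENE's exact method) is to cross-correlate the per-frame brightness of the LED array as seen by each camera:

```python
import numpy as np

def frame_offset(ref_brightness, cam_brightness):
    """Integer frame delay of `cam` relative to `ref`, found by cross-correlation
    of the per-frame LED brightness signals recorded by the two cameras."""
    ref = np.asarray(ref_brightness, float) - np.mean(ref_brightness)
    cam = np.asarray(cam_brightness, float) - np.mean(cam_brightness)
    corr = np.correlate(cam, ref, mode="full")
    return int(np.argmax(corr)) - (len(ref) - 1)

# A camera whose LED flash shows up 2 frames later than the reference's reports
# an offset of +2, which is then subtracted ("erased") when conforming in post.
```

The same idea extends to sub-frame alignment by interpolating the correlation peak, which matters when the cameras run at different frame rates.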
Finally, each camera system has an inherent signal-to-noise ratio that must be respected. Several of these cameras do not have an adjustable iris/shutter/exposure, so the scene has to be lit to a constant, predetermined light level.

Arri Motion Scene Cinematography

Set lit by KinoFlos and LED lights
One of the major limitations of using time-of-flight IR sensors for measuring depth is that you can’t use tungsten or HMI light sources: both emit IR and would confuse the sensor. So as a cinematographer you are left with KinoFlo (fluorescent) or LED lighting.
Arri Motion Camera with an IR-coated Zeiss lens
The primary Motion Scene / Alexa Studio camera uses a lens with a special IR coating to help enhance the quality of the depth capture. The rest of the cameras use “normal,” unmodified lenses.

Processing Depth Data

Currently, ToF/IR sensors are very low resolution and relatively grainy. Software companies that are part of the SCENE project are busy figuring out ways to get the best-quality depth data from the Arri Motion Scene camera.
At this point in time, they are able to view and work with the color-mapped depth data in real time for virtual production/monitoring.
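As an example of the kind of cleanup involved (my assumption about the general approach, not SCENE's published pipeline), a simple median filter knocks out ToF speckle noise while preserving the hard edges in a depth map:

```python
import numpy as np

def median_denoise(depth, k=3):
    """k x k median filter on a depth map: suppresses ToF speckle outliers
    while preserving depth discontinuities better than a blur would."""
    h, w = depth.shape
    pad = k // 2
    padded = np.pad(depth, pad, mode="edge")  # replicate edges so borders keep their depth
    out = np.empty((h, w), dtype=float)
    for y in range(h):
        for x in range(w):
            out[y, x] = np.median(padded[y:y + k, x:x + k])
    return out
```

More sophisticated approaches use the high-resolution RGB image as a guide (joint bilateral upsampling) so the low-resolution depth snaps to the color edges, which is presumably closer to what the SCENE software partners are building.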

Applications

There are a lot of possible applications of the RGB+Z format but the two most relevant to cinematographers are as follows.

Eliminating the need for chromakeying (blue/green screens)

Arri Motion Scene camera Z-depth data for isolating elements of the frame
The post-production team would be able to isolate people and backgrounds using the depth data, eliminating the need to light large chroma-key blue/green screens. The “mattes” are very noisy at this point in time, but with better hardware/software solutions this technique could very well replace chromakeying.
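The principle of a depth matte is simple: keep every pixel whose Z value falls within the talent's distance range. A minimal sketch (the thresholds are illustrative):

```python
import numpy as np

def depth_matte(depth, near, far):
    """Binary matte: 1.0 where the depth lies in [near, far] (the talent),
    0.0 elsewhere (the background to be replaced)."""
    return ((depth >= near) & (depth <= far)).astype(float)

# Compositing then works exactly like a chroma key, but with no green screen:
#   m = depth_matte(depth, 1.0, 3.0)
#   comp = m[..., None] * foreground_rgb + (1 - m[..., None]) * new_background_rgb
```

In practice the matte would need the denoising described above, plus edge softening, before it holds up against hair and motion blur the way a well-lit green screen does.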

Relighting in Post / Virtual Production

LIDAR scan of the set with 360 color / texture data
Using a combination of traditional long-range LIDAR scans and 360° HDR photography, it is possible to create a 3D model of the set along with its lighting. Using the depth data from the Scene Camera, it would then be possible to interactively change the lighting of the set AND the talent in post. Or, on a virtual production set, you could change the lighting of the set in CG instead of using real-world lights.
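A toy version of depth-based relighting (a sketch under simplifying assumptions, not the SCENE demo's actual method): estimate surface normals from the depth gradients, then shade each pixel with a single directional light using the Lambertian model:

```python
import numpy as np

def lambert_relight(depth, light_dir):
    """Per-pixel diffuse intensity in [0, 1]: normals are estimated from the
    depth map's gradients and shaded with one directional light (Lambert's law)."""
    dzdx = np.gradient(depth, axis=1)
    dzdy = np.gradient(depth, axis=0)
    n = np.dstack([-dzdx, -dzdy, np.ones_like(depth)])   # un-normalised normals
    n /= np.linalg.norm(n, axis=2, keepdims=True)
    l = np.asarray(light_dir, float)
    l /= np.linalg.norm(l)
    return np.clip(n @ l, 0.0, 1.0)                      # N . L, clamped

# Example usage: re-shade the RGB frame with a virtual light from camera left:
#   shading = lambert_relight(depth_map, light_dir=[-1.0, 0.0, 1.0])
#   relit_rgb = rgb * shading[..., None]
```

A production version would use the full LIDAR geometry and the captured HDR environment rather than a single light, but the chain is the same: depth gives geometry, geometry gives normals, normals let you light.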
A "Ladybug" 360 camera for capturing the lighting and texture of a scene
SCENE has shown a demo of this technology. The geometry and lighting are very rough, but the concept is clear.
A demo of relighting a scene using the depth data in post
