Motion Capture - General

Faceit includes a set of utilities for importing and retargeting captured motion to any 3D character model. Read through the following sections to learn about the Faceit animation and motion capture workflows.

Prerequisites

Before you can start using any of the mocap operators, make sure that the target objects are registered in the setup panel and the target shapes lists are populated with the correct shape keys.

Motion Capture Engines

Faceit supports the popular and accessible ARKit and Audio2Face platforms for high-quality performance capture.

Apple ARKit

Read all about ARKit and the ARKit utilities here:

  • ARKit Expression Preset
    • Create the 52 required shape keys.
  • Retargeting
    • Retarget the ARKit expressions to similar FACS-based expressions (e.g. CC3).
  • Control Rig
    • Create a slider-based bone Control Rig, specifically built for controlling the ARKit expressions.
  • Import
  • Realtime
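The expression preset's job is to make sure the registered meshes expose all 52 ARKit blendshape names. As a rough illustration (outside Blender, using only a hypothetical subset of the 52 names and made-up mesh data), a check for missing target shapes might look like:

```python
# Sketch: verify a mesh exposes the shape keys an ARKit preset expects.
# REQUIRED is a small subset of the 52 ARKit blendshape names; the
# mesh_shape_keys argument is hypothetical example data.
REQUIRED = [
    "eyeBlinkLeft", "eyeBlinkRight",
    "jawOpen", "mouthSmileLeft", "mouthSmileRight",
    "browInnerUp",
]

def missing_shapes(mesh_shape_keys, required=REQUIRED):
    """Return the required ARKit shape names not present on the mesh."""
    present = set(mesh_shape_keys)
    return [name for name in required if name not in present]

# Example: a mesh that only has the blink and jaw shapes defined.
print(missing_shapes(["eyeBlinkLeft", "eyeBlinkRight", "jawOpen"]))
# → ['mouthSmileLeft', 'mouthSmileRight', 'browInnerUp']
```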

Nvidia Audio2Face

Read all about Audio2Face and the A2F utilities here:

  • Audio2Face Expression Preset
    • Create the 46 required shape keys.
  • Retargeting
    • Retarget the Audio2Face expressions to similar FACS-based expressions (e.g. CC3).
  • Import
    • Import captured motions as .json files.
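In the Audio2Face exports I have seen, the .json file stores the pose names under a `facsNames` key and a per-frame weight matrix under `weightMat`; treat those keys as an assumption and check them against your own export. A minimal reader might look like:

```python
import json

# Sketch: read an Audio2Face blendshape export. The "facsNames" and
# "weightMat" keys reflect one observed export layout and are an
# assumption here, not a documented contract.
def load_a2f_weights(path):
    """Return (pose_names, frames), where frames is a list of weight rows."""
    with open(path) as f:
        data = json.load(f)
    return data["facsNames"], data["weightMat"]

# Example with an in-memory document instead of a file:
doc = {"facsNames": ["jawOpen", "mouthSmile"],
       "weightMat": [[0.0, 0.1], [0.2, 0.3]]}
names, frames = doc["facsNames"], doc["weightMat"]
assert len(frames[0]) == len(names)  # one weight per pose per frame
```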

Utilities

The motion capture Utilities panel holds helper functions for keyframe manipulation. These operators can be especially useful when cleaning mocap files or combining animations from multiple sources (e.g. ARKit and Audio2Face).

Quickly remove keyframes for all shapes in the eye and brows regions within the given frame range.
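Under the hood, an operator like this simply drops keyframes whose frame number falls inside the given range for the affected shapes. A plain-Python sketch of that filtering (the keyframe layout and region prefixes are hypothetical, not Faceit's internals):

```python
# Sketch: remove keyframes in a frame range for shapes in given regions.
# Keyframe data is modeled as {shape_name: [(frame, value), ...]};
# the region prefixes are hypothetical examples.
EYE_BROW_PREFIXES = ("eye", "brow")

def clear_range(keyframes, frame_start, frame_end, prefixes=EYE_BROW_PREFIXES):
    """Drop keyframes within [frame_start, frame_end] for matching shapes."""
    out = {}
    for shape, keys in keyframes.items():
        if shape.startswith(prefixes):
            keys = [(f, v) for f, v in keys if not frame_start <= f <= frame_end]
        out[shape] = keys
    return out

anim = {"eyeBlinkLeft": [(1, 0.0), (10, 1.0), (20, 0.0)],
        "jawOpen": [(1, 0.0), (10, 0.8)]}
cleaned = clear_range(anim, 5, 15)
print(cleaned["eyeBlinkLeft"])  # → [(1, 0.0), (20, 0.0)]
print(cleaned["jawOpen"])       # → [(1, 0.0), (10, 0.8)]  (not an eye/brow shape)
```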

General Settings

Action

You can quickly create and assign actions to the registered objects. If you enable head and eye transforms, the operator also creates actions for those objects with the suffixes _head, _eye_L and _eye_R.
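The naming scheme above can be sketched as a small helper. The suffixes come from the text; the function name and base-name handling are illustrative, not Faceit's actual implementation:

```python
def mocap_action_names(base, head=False, eyes=False):
    """Build action names in the documented scheme: one shape-key action,
    plus optional _head / _eye_L / _eye_R actions for transform channels."""
    names = [base]
    if head:
        names.append(base + "_head")
    if eyes:
        names += [base + "_eye_L", base + "_eye_R"]
    return names

print(mocap_action_names("mocap", head=True, eyes=True))
# → ['mocap', 'mocap_head', 'mocap_eye_L', 'mocap_eye_R']
```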

Note

You do not need to create actions in this step. All Faceit mocap operators give you the option to create actions at import time.

Choose Motion Type

While Audio2Face animates only blendshape values, ARKit also records head and eye movements. To import these transform animations alongside the blendshape weights, activate the respective motion type in this panel. Choose which type of motion you want to import/capture. If you do not want head or eye motion recorded as transforms, you can ignore this step.
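Conceptually, the motion-type toggles just decide which channels of an incoming frame are applied. A hedged sketch, with a made-up frame layout for illustration:

```python
# Sketch: keep only the enabled motion types from a captured frame.
# The frame dict layout ("shapes"/"head"/"eyes") is hypothetical.
def filter_frame(frame, use_shapes=True, use_head=False, use_eyes=False):
    """Return only the channels whose motion type is enabled."""
    enabled = []
    if use_shapes:
        enabled.append("shapes")
    if use_head:
        enabled.append("head")
    if use_eyes:
        enabled.append("eyes")
    return {k: v for k, v in frame.items() if k in enabled}

frame = {"shapes": {"jawOpen": 0.5},
         "head": {"rotation": (0.0, 0.1, 0.0), "location": (0.0, 0.0, 0.0)},
         "eyes": {"rotation_L": (0.0, 0.0, 0.0)}}
print(sorted(filter_frame(frame, use_head=True)))  # → ['head', 'shapes']
print(sorted(filter_frame(frame)))                 # → ['shapes']
```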

Experimental

The option to capture head and eye transforms is an experimental feature and still needs some optimization.

Options (Targets)

From the Targets section of the Face Cap options, choose which types of motion you want to record.

  1. Shape Keys (default)
    • Stream/import only the shape key motion for all registered objects.
  2. Eye Rotation (experimental)
    • Choose an object in your scene that receives the eye rotation as Euler angles. You only need this if your character's eyes do not have eye shape keys.
  3. Head Rotation and Location (experimental)
    • Choose an object in your scene that receives the captured head rotation and location.

If you choose head or eye movement, you have the option to automatically create empties as targets for the motions. Alternatively, you can choose your own target objects by clicking the pipette icon.

You can automatically create an Empty as the target for the head motion. The same is possible for the eyes.

Tip

The Delta Transformations dropdown below lets you modify the orientation of the target object without it being overwritten by the captured motion. You can also edit the delta transforms in the object properties tab.