Nvidia Audio2Face

Audio2Face facial motion can be exported to .json files and imported into Blender via Faceit. This section summarizes everything you need to get started using Audio2Face with Faceit.

Export JSON Files from Audio2Face

There are many tutorials available for Audio2Face, and the tool offers many possibilities. This section only outlines the quickest way to create an animation and export a JSON file.

  • Open Audio2Face
  • Top menu -> Audio2Face -> Open Demo Scene -> Default Scene
  • Content Browser -> omniverse://localhost/NVIDIA/Assets/Audio2Face/Samples/blendshape_solve/
  • Drag male_bs_46.usd to the scene.
  • Go to A2F Data Conversion tab
  • Blendshape Conversion ->
  • Input Anim Mesh: c_headWatertight_hi
  • Blendshape Mesh: male_bs_46
  • You can now load audio files in the Audio2Face tab, and both models will be animated.
  • Once an audio file is loaded, you can click Export as Json in the A2F Data Conversion tab.
  • Save the scene, so you don't have to set up the blendshape solving every time.
  • Import into Blender.

Load the default scene.

Add the default blendshape model.

Set up the Blendshape Conversion.

Export as Json (Batch export available too).
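The exported file can be inspected with a few lines of Python before importing it into Blender. The field names used below (`exportFps`, `facsNames`, `weightMat`) reflect the typical Audio2Face blendshape weight export; treat them as assumptions, since the exact schema may vary between Audio2Face versions.

```python
import json

def summarize_a2f_export(path):
    """Print basic information about an Audio2Face blendshape JSON export.

    Assumed fields (may differ per Audio2Face version):
      exportFps  - frame rate used for the export
      facsNames  - list of blendshape (pose) names
      weightMat  - per-frame list of per-pose weights
    """
    with open(path) as f:
        data = json.load(f)
    fps = data.get("exportFps", 60.0)
    names = data.get("facsNames", [])
    weights = data.get("weightMat", [])
    print(f"{len(weights)} frames at {fps} fps, {len(names)} blendshapes")
    return fps, names, weights
```

This is a quick sanity check that the export contains the frame rate and shape names you expect before running the Faceit importer.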

Target Shapes

With the addition of the Audio2Face importer, a respective Audio2Face target shape list has been added (similar to the ARKit target shapes). The list can be auto-populated with a single button press, as long as some or all of the 46 Audio2Face shape keys are found on the registered objects. If the list is not populated, the importer won't be able to load any animation. You can also manually populate the list with custom shape keys to create a retargeting scheme for Audio2Face imports. Learn more about the target shape list functionalities here.
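Auto-population essentially matches the expected Audio2Face shape names against the shape keys found on the registered objects. A simplified, hypothetical sketch of that matching (the function and the sample names are illustrative, not Faceit's actual API):

```python
def populate_target_shapes(a2f_names, object_shape_keys):
    """Map each expected Audio2Face shape name to a matching shape key,
    or None when the object has no corresponding key."""
    available = set(object_shape_keys)
    return {name: (name if name in available else None) for name in a2f_names}

mapping = populate_target_shapes(
    ["jawOpen", "eyeBlinkLeft"],   # two of the 46 A2F shapes (illustrative)
    ["jawOpen", "mouthSmile"],     # shape keys found on the registered object
)
# mapping["jawOpen"] -> "jawOpen"; mapping["eyeBlinkLeft"] -> None
```

Entries left at None are what you would fill manually when building a custom retargeting scheme.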

Switch between ARKit and Audio2Face target shapes. Both can be used in parallel.


You can either create the 46 A2F expressions via the Audio2Face app or load the A2F expression preset during the Faceit rigging process (version 2.1+ only).

Load Recorded Motion (JSON)

Importing motion from an Audio2Face JSON file is very similar to the other text-based importers (see 1 and 2). Before starting the import, you need to specify the file path to the JSON file.


Options (Json Import)

  1. Import to new Action
    • If this is enabled, Faceit will create a new Action to hold the new Keyframes.
    • Otherwise you can specify a Shape Key Action from the scene.
  2. Mix Method
    • Overwrite: overwrite the whole animation with the new keys.
    • Mix: attempt to mix in the new keyframes. Overwrite overlapping frames for the same shapes, but preserve everything else.
    • Append: append the keyframes to the end of the specified action.
  3. Start Frame
    • Specify a Start Frame for the new animation values. If the Append mix method is active, the start frame will be read as an offset from the last frame in the specified action.
  4. Frame Rate
    • Audio2Face allows exporting with a user-set frame rate. Make sure to import with the same frame rate that was used for the export (default: 60.0).
  5. Region Filter
    • The new importers allow you to specify facial regions that should be skipped during import. This makes it possible, for instance, to animate the mouth expressions via Audio2Face while animating the rest of the face via ARKit.
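The three mix methods can be summarized in a small sketch. Keyframes are represented here as a `{frame: value}` dict per shape key channel; the function name and data layout are assumptions for illustration, not Faceit internals.

```python
def apply_mix_method(existing, new, method, start_frame=0):
    """Combine existing and imported keyframes for one shape key channel.

    existing/new: dicts mapping frame number -> shape key value.
    """
    if method == "OVERWRITE":
        # Discard the old animation entirely; keep only imported keys.
        return {start_frame + f: v for f, v in new.items()}
    if method == "MIX":
        # Keep old keys, but let imported keys win on overlapping frames.
        merged = dict(existing)
        merged.update({start_frame + f: v for f, v in new.items()})
        return merged
    if method == "APPEND":
        # Start frame is read as an offset from the last existing frame.
        offset = (max(existing) if existing else 0) + start_frame
        merged = dict(existing)
        merged.update({offset + f: v for f, v in new.items()})
        return merged
    raise ValueError(f"unknown mix method: {method}")
```

For example, with Append and a start frame of 1, imported frame 0 lands one frame after the last keyframe of the existing action.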

(Optional) Specify which regions should be imported.
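Region filtering can be pictured as dropping shapes whose facial region is excluded before any keyframes are created. The region assignments and shape names below are purely illustrative, not Faceit's actual region mapping.

```python
# Hypothetical region assignment for a few shapes (illustrative only).
SHAPE_REGIONS = {
    "jawOpen": "mouth",
    "mouthSmileLeft": "mouth",
    "eyeBlinkLeft": "eyes",
    "browInnerUp": "brows",
}

def filter_shapes_by_region(shape_names, skip_regions):
    """Return only the shapes whose facial region is not in skip_regions."""
    return [n for n in shape_names
            if SHAPE_REGIONS.get(n) not in skip_regions]

kept = filter_shapes_by_region(
    ["jawOpen", "eyeBlinkLeft", "browInnerUp"], skip_regions={"eyes"})
# kept -> ["jawOpen", "browInnerUp"]
```

Skipping the eye and brow regions here would let an ARKit import drive those areas while Audio2Face drives the mouth, as described above.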