Unity Tutorial

© 2020 Loom.ai

The Voice2Animation SDK allows you to animate 3D characters inside Unity through text input or voice recordings. The characters need to conform to the Loom Facial Rig Specification, a FACS-style rig similar to the one used in Apple's ARKit. Animations can be generated either inside the editor using the Loom cloud services or at runtime using a native plugin. Please contact us if you want to integrate a runtime native version of our Voice2Animation SDK in your product.

Authentication

A Loom cloud account is required to use the SDK. To create a free account, visit the developer portal and press the "Get started" button to sign up.

Afterwards, create a new client application on the developer portal using the "Client Credentials Flow" to obtain valid credentials. Add the new Client Id and Client Secret to your ClientApplicationConfig in Assets/LoomAi/Config. Please make sure not to share this file publicly, as it allows third parties to access your Loom account.
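
For illustration only, such a config can be thought of as an asset holding the two credentials. The sketch below is a hypothetical stand-in and not the actual ClientApplicationConfig shipped with the SDK; field and menu names are assumptions.

    using UnityEngine;

    // Hypothetical sketch of a credentials asset; the real ClientApplicationConfig
    // in Assets/LoomAi/Config may look different.
    [CreateAssetMenu(menuName = "LoomAi/ClientApplicationConfig (sketch)")]
    public class ClientApplicationConfigSketch : ScriptableObject
    {
        // Values issued for the "Client Credentials Flow" on the developer portal.
        public string clientId;
        public string clientSecret;
    }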

Examples

All example scenes are located under Assets/LoomAi/Examples.

The first example scene, 01_RecordVoice, shows how the cloud service can be used to create animations. The amount of content you can create for evaluation purposes is limited by your client credentials. If you want to use our service professionally, please contact us.

  1. Open the scene and make sure you have your ClientApplicationConfig configured.

  2. Hit the "Text" button to bring up the text to speech interface. You can type in any text up to 250 characters and select a language in which it should be spoken in. If the text you entered is in a different language than the one configured, it will automatically get translated. Hit the "Send" button, and wait until the spinner in the top right corner disappears. The character should come to life and start talking.

  3. Hit the "File" button to bring up the system file open dialogue. Select any voice recording up to 15 seconds. Wait until the spinner in the top right corner disappears and the character comes to life and starts talking.

  4. Hit the "Record" button to start recording with your editors microphone. The recording is limited to 15 seconds. Wait again until the recording is processed and the character is animated before continuing.

  5. Hit the "Play" button to get an overview of the different recordings you created. When you click any of them, the character will start animating. Playing back previously recorded samples will not count to your evaluation quota.

The second example scene, 02_PlayRecording, demonstrates how the previously recorded performances can be played back at runtime without any connection to the cloud.

  1. Open the scene after you have recorded your performances in the 01_RecordVoice scene.

  2. Hit the "Play" button to get an overview of the different recordings you created. When you click any of them, the character will start animating.

  3. Hit the "Happy" and "Sad" buttons to trigger expression animations on the character.

Set up your own character inside Unity

To get an overview of how the different animation components of the Voice2Animation SDK interact, we recommend using the 02_PlayRecording scene as a reference.

To make one of your own characters play a performance, you will need to re-create the Voice GameObject inside your own application. As your models and animations might differ slightly from our own, the following steps highlight the configuration options you can use to adapt to those differences.

  1. Create an AudioSource to play back the voice.

  2. Create a PlaybackVoiceController and connect it to the AudioSource (see the sketch below).
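
A minimal setup sketch for these two steps, assuming you build the Voice GameObject from script. How the AudioSource is assigned to the SDK's PlaybackVoiceController (field name, inspector slot) is an assumption and is therefore left commented.

    using UnityEngine;

    // Sketch: build the Voice GameObject from script instead of the editor.
    public class VoiceSetupExample : MonoBehaviour
    {
        void Awake()
        {
            var voice = new GameObject("Voice");
            voice.transform.SetParent(transform, false);

            // Step 1: the AudioSource that will play back the voice clip.
            var audioSource = voice.AddComponent<AudioSource>();
            audioSource.playOnAwake = false;

            // Step 2: the SDK playback controller, connected to the AudioSource.
            // The exact field or property to assign is SDK-specific:
            // var controller = voice.AddComponent<PlaybackVoiceController>();
            // controller.audioSource = audioSource; // assumed field name
        }
    }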

On the same GameObject as the PlaybackVoiceController, add a VoiceActivityAnimator. This component allows you to trigger an animation on your character based on the VoiceActivity signal that is calculated by the SDK. It can be used to have the character change pose or behaviour while it is talking; the sketch after the following steps shows what driving such a parameter looks like.

  1. Connect it to the animator on your character that you would like to trigger.

  2. Configure a boolean parameter by providing its name in the Parameter Name property. It will be set to True or False on the Animator's RuntimeAnimatorController, depending on whether the character is talking or not.

  3. Change the delays on the individual transitions to fit the length of your character’s talking pauses. A shorter delay will cause the character to trigger the animation change more frequently.

  4. You can add multiple VoiceActivityAnimators in case you want to trigger multiple states or multiple Animators.
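
For context, this is roughly what driving a boolean Animator parameter from a voice-activity signal looks like in plain Unity code. The VoiceActivityAnimator component does this for you; the GetVoiceActivity helper below is a hypothetical stand-in for the SDK's signal.

    using UnityEngine;

    // Sketch: set an Animator bool parameter from a voice-activity signal.
    public class VoiceActivitySketch : MonoBehaviour
    {
        public Animator characterAnimator;         // the Animator you want to trigger
        public string parameterName = "isTalking"; // matches the Parameter Name property

        void Update()
        {
            // GetVoiceActivity() is a hypothetical placeholder for the SDK signal.
            bool isTalking = GetVoiceActivity();
            characterAnimator.SetBool(parameterName, isTalking);
        }

        bool GetVoiceActivity()
        {
            // Placeholder: in the example scenes this comes from the Voice2Animation SDK.
            return false;
        }
    }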

To see an example of how this can be used inside an animation controller to create a realistic animation cycle between idle and talking animations, open Unity's Animator window. Once you select either the fred_anim_ctrl asset under LoomAi/Examples/SharedAssets/Characters/fred or the fred_rig_template scene node in the PlayRecording scene, you can inspect the example in the Animator window. It features two looped animations, happyIdle and activeTalk. They are connected by transitions with their own animations, so the character switches seamlessly depending on whether the isTalking parameter is set or not.

On the same GameObject as the PlaybackVoiceController, add a VoiceFrameAnimator. This component allows you to drive the blend shapes of your character's mesh with the shape weights calculated by the SDK. This will allow your character's lips and tongue to move according to the words it is speaking. A sketch of how such shape weights are applied follows the steps below.

  1. Connect it to the SkinnedMeshRenderer on your character that you would like to drive.

  2. Configure a BlendShapePrefix string in case your mesh's blend shape names do not exactly match the specification. E.g. use the prefix hero_BSN. to drive hero_BSN.c_JD.

  3. BlendFactor configures the blend between animations driving the blend shapes and the SDK's blend shape weights. A value between 0.6 and 0.9 is recommended.

  4. Enable RecalculateCorrections to reset the correction shapes before rendering them. This should be enabled when you are using multiple animation layers additively, or animations that only drive the base shapes but not the correction shapes of a mesh.

  5. Configure the blend shape reset behaviour. When ResetToNeutral is enabled, the SDK-driven blend shapes will be smoothly transitioned to 0 over ResetDuration after no frames have been received from the SDK for the configured ResetDelay. This can be used to make the face of a character return to its resting position after a voice recording that ends abruptly.

  6. You can add multiple VoiceFrameAnimators if your character consists of multiple meshes and you want to drive them all simultaneously. Configure each VoiceFrameAnimator to drive a different SkinnedMeshRenderer.
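
As an illustration of what the VoiceFrameAnimator does conceptually, here is a hedged sketch of applying named shape weights to a SkinnedMeshRenderer with a prefix and a blend factor. The per-frame data structure and the 0-1 weight range are assumptions, not the SDK's actual types.

    using System.Collections.Generic;
    using UnityEngine;

    // Sketch: apply SDK-style shape weights to a SkinnedMeshRenderer.
    public class BlendShapeSketch : MonoBehaviour
    {
        public SkinnedMeshRenderer target;            // mesh you want to drive
        public string blendShapePrefix = "hero_BSN."; // e.g. to drive hero_BSN.c_JD
        [Range(0f, 1f)] public float blendFactor = 0.8f; // recommended 0.6 - 0.9

        // shapeWeights stands in for whatever per-frame data the SDK delivers.
        public void ApplyFrame(Dictionary<string, float> shapeWeights)
        {
            foreach (var entry in shapeWeights)
            {
                // Prepend the prefix so the specification name matches the mesh.
                int index = target.sharedMesh.GetBlendShapeIndex(blendShapePrefix + entry.Key);
                if (index < 0)
                    continue; // shape not present on this mesh

                // Blend between the weight already set by animations and the SDK weight.
                float current = target.GetBlendShapeWeight(index);
                float driven = entry.Value * 100f; // Unity blend shape weights are 0-100
                target.SetBlendShapeWeight(index, Mathf.Lerp(current, driven, blendFactor));
            }
        }
    }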

Tests

The included unit tests require the ClientApplicationConfig to contain valid credentials for the Client Credentials flow. You can run the tests by opening Window > General > Test Runner and triggering "Run All" in "PlayMode".

Realtime native animation (pro)

A valid partner license is required to use the realtime animation library inside your application. Such a license will be provided to you through a secure channel once you sign up as a partner with us. The native binaries are 64-bit only and available for Windows 10, Android 9.0, iOS 12.3 and macOS 10.14. Make sure to switch your project's build settings to produce a 64-bit build.

The 03_RealtimeDemo example scene shows how the native library can be integrated inside a Unity project to create realtime animations at runtime. The setup in terms of components is very similar to the basic version of the SDK. To get started follow these steps:

  1. Open the scene

    Assets/LoomAi/Examples/03_RealtimeDemo/RealtimeDemo.unity

  2. Add your license file to the project's Assets/LoomAi/Config directory and link it on the LoomTracker component on the Character/Voice Hierarchy node.

  3. Hit Unity's Play button.

  4. Hit the "Record" button to start recording.

  5. Hit "Stop Recording" or wait for the 30 seconds to end.

  6. The character in the scene should start to animate and talk back to you.

The 05_LiveMicDemo example scene shows how Unity's microphone can be connected to the realtime animation library. Remember to link the license file on the LoomTracker component. Make sure to configure the AudioSettings correctly when using this setup in your own project; a sketch after the following steps shows one way to apply these settings from script:

  1. Default Speaker Mode should be "Mono"

  2. DSP Buffer Size needs to be set to "Best Latency" or, in case of dropped audio samples, to "Good Latency"
