© 2020 Loom.ai
The Voice2Animation SDK allows you to animate 3D characters inside Unity through text input or voice recordings. The characters need to conform to the Loom Facial Rig Specification, a FACS-style rig similar to the one used in Apple's ARKit. The animations can be generated either inside the editor using the Loom cloud services or via a native plugin at runtime. Please contact us if you want to integrate a runtime native version of our Voice2Animation SDK in your product.
A Loom cloud account is required to use the SDK. To create a free account visit the developer portal and press the "Get started" button to sign up.
Afterwards, create a new client application on the developer portal using the "Client Credentials Flow" to obtain valid credentials. Add the new Client Id and Client Secret to your
Assets/LoomAi/Config. Please make sure not to share this file publicly, as it allows third parties to access your Loom account.
All example scenes are located under
The first example scene
01_RecordVoice shows how the cloud service can be used to create animations. The amount of content you can create for evaluation purposes is limited by your client credentials. Please contact us if you want to use our service professionally.
Open the scene and make sure you have your ClientApplicationConfig configured.
Hit the "Text" button to bring up the text-to-speech interface. You can type in any text up to 250 characters and select the language in which it should be spoken. If the text you entered is in a different language than the one configured, it will automatically be translated. Hit the "Send" button and wait until the spinner in the top right corner disappears. The character should come to life and start talking.
Hit the "File" button to bring up the system file open dialogue. Select any voice recording up to 15 seconds. Wait until the spinner in the top right corner disappears and the character comes to life and starts talking.
Hit the "Record" button to start recording with your editor's microphone. The recording is limited to 15 seconds. Wait until the recording is processed and the character is animated before continuing.
Hit the "Play" button to get an overview of the different recordings you created. When you click any of them, the character will start animating. Playing back previously recorded samples will not count toward your evaluation quota.
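If you want to reproduce the "Record" step above in your own scripts, Unity's built-in Microphone API can capture a clip of bounded length. The sketch below is illustrative only; how the resulting clip is handed to the Loom cloud service is SDK-specific, and `SubmitRecording` is a hypothetical placeholder for that hand-off.

```csharp
using UnityEngine;

// Sketch: record up to 15 seconds from the default microphone,
// mirroring what the "Record" button in 01_RecordVoice does.
public class MicRecorder : MonoBehaviour
{
    AudioClip clip;

    public void StartRecording()
    {
        // null = default microphone device; no looping, 15 s at 16 kHz.
        clip = Microphone.Start(null, false, 15, 16000);
    }

    public void StopRecording()
    {
        Microphone.End(null);
        // SubmitRecording(clip); // hypothetical hand-off to the Loom cloud service
    }
}
```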
The second example scene
02_PlayRecording demonstrates how the previously recorded performances can be played back at runtime without any connection to the cloud.
Open the scene after you have recorded your performances inside the
Hit the "Play" button to get an overview of the different recordings you created. When you click any of them, the character will start animating.
Hit the "Happy" and "Sad" buttons to trigger expression animations on the character.
To get an overview of how the different animation components of the Voice2Animation SDK interact, we recommend using the PlayRecording scene as a reference.
To make one of your own characters play a performance, you will need to re-create the Voice GameObject inside your own application. As your models and animations might differ slightly from ours, the following steps highlight configuration options that let you adapt to those differences.
Create an AudioSource to play back the voice.
Create a PlaybackVoiceController and connect it to the AudioSource.
On the same GameObject as the PlaybackVoiceController, add a VoiceActivityAnimator. This component allows you to trigger an animation on your character based on the VoiceActivity signal calculated by the SDK. It can be used to make the character change pose or behaviour while it is talking.
Connect it to the animator on your character that you would like to trigger.
Configure a boolean parameter by providing its name in the Parameter Name property. It will be set to True or False on the Animator's RuntimeAnimatorController, depending on whether the character is talking.
Change the delays on the individual transitions to fit the length of your character’s talking pauses. A shorter delay will cause the character to trigger the animation change more frequently.
You can add multiple VoiceActivityAnimators in case you want to trigger multiple states or multiple Animators.
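The steps above are usually wired up in the Inspector, but they can also be sketched in code. The component classes (PlaybackVoiceController, VoiceActivityAnimator) come from the SDK; the commented field names below are assumptions for illustration and may not match the component's actual properties.

```csharp
using UnityEngine;

// Sketch of the setup steps above, assuming the SDK components
// can be added and configured from code.
public class VoiceSetup : MonoBehaviour
{
    public Animator characterAnimator; // the Animator you want to trigger

    void Awake()
    {
        var audioSource = gameObject.AddComponent<AudioSource>();

        var playback = gameObject.AddComponent<PlaybackVoiceController>();
        // playback.audioSource = audioSource;     // hypothetical field name

        var activity = gameObject.AddComponent<VoiceActivityAnimator>();
        // activity.animator = characterAnimator;  // hypothetical field name
        // activity.parameterName = "isTalking";   // boolean parameter on the controller
    }
}
```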
To see an example of how this can be used inside an animation controller to create a realistic animation cycle between idle and talking animations, open Unity's Animator window. Once you select either the
fred_anim_ctrl asset under
LoomAi/Examples/SharedAssets/Characters/fred or the
fred_rig_template scene node in the PlayRecording scene, you can inspect the example in the Animator window. It features two looped animations, happyIdle and activeTalk. They are connected by transitions with their own animations that make the character switch seamlessly depending on whether the isTalking parameter is set.
On the same GameObject as the PlaybackVoiceController, add a VoiceFrameAnimator. This component allows you to drive the blend shapes of your character's mesh with the shape weights calculated by the SDK. This allows your character's lips and tongue to move according to the words it is speaking.
Connect it to the SkinnedMeshRenderer on your character that you would like to drive.
Configure a BlendShapePrefix string in case your mesh's blend shape names do not exactly match the specification. E.g. use the prefix
hero_BSN. to drive
BlendFactor configures the blend between animations driving the blend shapes and the SDK's blend shape weights. A value between 0.6 and 0.9 is recommended.
Enable RecalculateCorrections to reset the correction shapes before rendering them. This should be enabled when you are using multiple animation layers additively, or animations that only drive the base shapes but not the correction shapes of a mesh.
Configure the blend shape reset behaviour. When ResetToNeutral is enabled, the SDK-driven blend shapes will be smoothly transitioned to 0 over ResetDuration after not receiving frames from the SDK for the configured ResetDelay. This can be used to make the face of a character return to its resting position after a voice recording ends abruptly.
You can add multiple VoiceFrameAnimators if your character consists of multiple meshes and you want to drive them all simultaneously. Configure each VoiceFrameAnimator to drive a different SkinnedMeshRenderer.
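Driving several meshes can be sketched as adding one VoiceFrameAnimator per SkinnedMeshRenderer. As above, the commented field names (skinnedMeshRenderer, blendShapePrefix, blendFactor) are illustrative assumptions; check the component in the Inspector for the actual property names.

```csharp
using UnityEngine;

// Sketch: one VoiceFrameAnimator per mesh, as described above.
public class FaceSetup : MonoBehaviour
{
    public SkinnedMeshRenderer[] faceMeshes; // e.g. separate head and tongue meshes

    void Awake()
    {
        foreach (var mesh in faceMeshes)
        {
            var frameAnimator = gameObject.AddComponent<VoiceFrameAnimator>();
            // frameAnimator.skinnedMeshRenderer = mesh;     // hypothetical field name
            // frameAnimator.blendShapePrefix = "hero_BSN."; // only if your names differ
            // frameAnimator.blendFactor = 0.8f;             // within the recommended 0.6-0.9
        }
    }
}
```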
The included unit tests need the
ClientApplicationConfig to have valid credentials for the Client Credentials flow. You can run the tests by opening
Window > General > Test Runner and triggering "Run All" in "PlayMode".
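The same PlayMode tests can also be triggered headlessly, e.g. on a CI machine, via the Unity Test Framework's command line arguments. The path to the Unity executable depends on your installation.

```shell
# Sketch: run the PlayMode tests from the command line.
Unity -batchmode -projectPath . -runTests -testPlatform PlayMode -testResults results.xml
```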
A valid partner license is required to use the realtime animation library inside your application. Such a license will be provided to you through a secure channel once you sign up as a partner with us. The native binaries are 64-bit only and available for Windows 10, Android 9.0, iOS 12.3 and macOS 10.14. Make sure to switch your project's build settings to produce a 64-bit build.
The 03_RealtimeDemo example scene shows how the native library can be integrated into a Unity project to create realtime animations at runtime. The component setup is very similar to the basic version of the SDK. To get started, follow these steps:
Open the scene
Add your license file to the project's
directory and link it on the
LoomTracker component on the
Character/Voice Hierarchy node.
Hit Unity's Play button.
Hit the "Record" button to start recording.
Hit "Stop Recording" or wait for the 30-second recording to end.
The Character in the scene should start to animate and talk back at you.
The 05_LiveMicDemo example scene shows how Unity's microphone can be connected to the realtime animation library. Remember to link the license file on the
LoomTracker component. Make sure to configure the AudioSettings correctly when using this setup in your own project:
Default Speaker Mode should be "Mono"
DSP Buffer Size needs to be set to "Best Latency" or, in case of dropped audio samples, to "Good Latency"
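These settings are normally configured under Project Settings > Audio, but they can also be applied from code with Unity's AudioSettings API. The sketch below assumes "Best Latency" corresponds to a DSP buffer of 256 samples (and "Good Latency" to 512), which matches current Unity versions but is worth verifying for yours.

```csharp
using UnityEngine;

// Sketch: applying the audio settings above at startup.
public class LiveMicAudioConfig : MonoBehaviour
{
    void Awake()
    {
        var config = AudioSettings.GetConfiguration();
        config.speakerMode = AudioSpeakerMode.Mono;
        config.dspBufferSize = 256; // "Best Latency"; use 512 for "Good Latency"
        AudioSettings.Reset(config); // restarts the audio system with the new settings
    }
}
```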