Quick Start Guide


Creating the Avatar

This document provides a quick and easy tutorial for creating a speaking avatar. Any topic not covered here is addressed in Fiona's User Guide.

You can watch some screencasts on Fiona's YouTube Channel that will help you start using the SparkLink.

There are many ways to create your avatar. In this tutorial, we are going to show you how to make your avatar start speaking when you launch it.

First, go to the SparkLink Zone to start creating your avatar!

To learn more about Fiona's features and tools, go to the User Guide.

Step 1: Select the Sparks

The first thing you need to begin building your avatar is the 3D Character, which is the main spark. Go to the Avatar Builder, click on Sparks, select 3D Character, and drop it onto the Linking Zone.

Then you will need a spark that makes your avatar begin talking when you launch it (Voice Start) and another that turns the text you want your avatar to say into speech (Festival TTS).

Drag and drop these sparks into the Linking Zone.

[Image: Sparks]



Step 2: Link the Sparks

You already have the sparks you want to link in the Linking Zone. Now you have to connect them with the interfaces we provide. There are two types of interfaces: Asking Interfaces (green) and Answering Interfaces (pink).
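To make the green/pink pairing concrete, here is a minimal sketch in Python of what linking means. Fiona's real internals are not part of this guide, so every class and function below is an illustrative assumption; only the spark names and the IFlowVoice interface name come from the steps that follow.

    # Hypothetical model of SparkLink linking: an Asking Interface (green)
    # on one spark pairs with the same-named Answering Interface (pink)
    # on another spark. All names here are illustrative assumptions.

    class Interface:
        def __init__(self, name, kind):
            self.name = name      # e.g. "IFlowVoice"
            self.kind = kind      # "asking" (green) or "answering" (pink)

    class Spark:
        def __init__(self, name, interfaces):
            self.name = name
            self.interfaces = {i.name: i for i in interfaces}

    def link(asker, answerer, interface_name):
        """Connect a green interface to its pink counterpart of the same name."""
        a = asker.interfaces[interface_name]
        b = answerer.interfaces[interface_name]
        assert a.kind == "asking" and b.kind == "answering"
        print(f"linked {asker.name} -> {answerer.name} over {interface_name}")

    voice_start = Spark("Voice Start", [Interface("IFlowVoice", "asking")])
    festival = Spark("Festival TTS", [Interface("IFlowVoice", "answering")])
    link(voice_start, festival, "IFlowVoice")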

For your avatar to speak you need four things:

  1. Text: You must give your avatar something to say.
  2. Lips movement: It would be odd to hear the avatar speaking without its mouth moving.
  3. Audio and sound: It would be odd to see the avatar moving its mouth but hear nothing.
  4. Launching Process: You need something that tells the avatar when it should start speaking.

Let's explain each of these points.

1. Text

If you want your avatar to speak, you need to give it something to say. If you click on the Voice Start spark you can change the text under Properties. Enter the text you want; we are going to use a simple one.

[Image: Properties]

Now that you have the text, you need something that can interpret it. That is the Festival TTS.

As you can see in the picture below, four interfaces are allowed for this spark. You need an Asking Interface, so it must be a green one.

[Image: VoiceStart]

The IFlowVoice interface is the one that enables communication with a TextToSpeech. That's the one you need! Pick it!

[Image: LinkingIFlowVoice]

Now you need an Answering Interface (pink). So let's see which interfaces the Festival TTS allows you to use.

[Image: TTS interfaces]

Only two interfaces are allowed, and one of them has the same icon as the IFlowVoice. That's the one you need! Connect the interfaces and you will have the first point solved: your TTS now knows what you want your avatar to say.

[Image: TTS - VoiceStart Interfaces]

2. Lips movement

It seems reasonable to think that the movement of the lips has something to do with the 3D Character. The other spark involved here is Voice Start.

An Asking Interface (green) should come from Voice Start and an Answering Interface (pink) should come from the 3D Character.

IFaceExpression seems like the right one, so we drop both interfaces into the Linking Zone and link them.

[Image: FaceExpression]

Now the Voice Start is telling the 3D Character that the avatar must move its lips while speaking.

Although everything seems correct, there is something missing. We need something that tells the 3D Character to update the frames while the text is being spoken. So we also need the IMovement interface to connect Voice Start with the 3D Character.

The IMovement interface will probably be merged into the IFaceExpression interface in the near future, so you won't have to use two different interfaces.
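To see why two interfaces are involved, here is a rough sketch, assuming hypothetical method names: the IFaceExpression role starts and stops the lip movement, while the IMovement role advances the frames for as long as the audio keeps playing.

    # Toy illustration of the two roles; method names are assumptions,
    # not Fiona's actual API.
    import time

    class Character3D:
        def start_lip_sync(self):      print("lips: start")              # IFaceExpression role
        def update_frame(self, mouth): print(f"frame, mouth={mouth:.2f}")  # IMovement role
        def stop_lip_sync(self):       print("lips: stop")

    def speak(audio_chunks, character, fps=25):
        character.start_lip_sync()
        for chunk in audio_chunks:
            openness = min(1.0, max(chunk) / 32768.0)  # crude amplitude -> mouth
            character.update_frame(openness)           # advance frames while speaking
            time.sleep(1.0 / fps)
        character.stop_lip_sync()

    speak([[1000, 20000], [500, 8000], [0, 100]], Character3D())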

Check the RemoteCharacter3D configuration and make sure that the "CameraControlConnected" parameter matches your setup. This spark has default camera settings, so if you don't connect the CameraControlSpark, set the parameter to false.

[Image: CameraControlConnected]
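As a quick sanity check, the rule for this parameter can be summed up in a few lines. The dictionary below is only a stand-in for the Properties panel, not a real Fiona configuration format.

    # Hypothetical check mirroring the rule above: CameraControlConnected
    # should be true only when a CameraControlSpark is actually linked.
    def check_camera_config(config, camera_spark_linked):
        if config["CameraControlConnected"] and not camera_spark_linked:
            raise ValueError("CameraControlConnected is true but no "
                             "CameraControlSpark is connected")

    check_camera_config({"CameraControlConnected": False}, camera_spark_linked=False)  # ok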


3. Audio & Sound

Now we need the sound, which will be provided by the Festival TTS and should be sent to the 3D Character. When we click on Festival TTS to see which interfaces are allowed, we notice that there are only Answering Interfaces (pink).

In this particular case, the interface we are going to use is IAudioQueue.

We do the same as before: select and link these interfaces.

[Image: AudioManager]
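The name IAudioQueue suggests a producer/consumer queue: the TTS pushes audio chunks, and the 3D Character pops and plays them. Here is a minimal sketch of that pattern; the chunk format and function names are assumptions, not the real implementation.

    # Producer/consumer sketch of an IAudioQueue-style link.
    import queue

    audio_queue = queue.Queue()

    def festival_tts(text):
        """Producer: pretend to synthesize one audio chunk per word."""
        for word in text.split():
            audio_queue.put(f"<pcm:{word}>")
        audio_queue.put(None)              # end-of-speech marker

    def character_playback():
        """Consumer: the 3D Character drains the queue and 'plays' each chunk."""
        while (chunk := audio_queue.get()) is not None:
            print("playing", chunk)

    festival_tts("Hello I am your avatar")
    character_playback()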

4. Camera Control (optional)

If you want, you can add some control over the avatar's position by moving the camera point. To do this, just drag in the CameraControlSpark and connect its ICamera interface to RemoteCharacter3D. Remember to set the "CameraControlConnected" parameter to true so the camera settings take effect.

[Image: CameraControlConnected]

5. Launching Process

This is the last step!

As the 3D Character is the main spark, it is the one that will tell the avatar when to start. There is only one interface that allows you to control the voice: IVoiceControl.

The Festival TTS doesn't allow this interface, but the Voice Start does, so connect the 3D Character to Voice Start with the IVoiceControl interface as you see in the picture, and that's it!

[Image: IVoiceControl]
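Conceptually, the launch sequence looks like the sketch below, with hypothetical method names standing in for what IVoiceControl carries: the main spark fires on launch and tells Voice Start to begin speaking.

    # Illustrative launch flow; names are assumptions, not Fiona's API.
    class VoiceStart:
        def __init__(self, text):
            self.text = text
        def start(self):                   # invoked over IVoiceControl
            print("speaking:", self.text)  # text then flows out via IFlowVoice

    class Character3D:
        def __init__(self, voice_control):
            self.voice_control = voice_control
        def on_launch(self):
            self.voice_control.start()     # main spark triggers the voice

    Character3D(VoiceStart("Hello!")).on_launch()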

Step 3: See the result

Remember to save! To save your work, click the Publish button at the bottom left of the screen.

[Image: Publish Button]


Now go to your avatar and enjoy!

Adding functionality: Face Tracking

Now that you can see and configure the basic avatar, it's time to add some functionality. In this section we will add a face tracking capability to our avatar.

We will do this taking the configuration shown in Step 3 of the previous section as our starting point.

Step 1: Select the Sparks

For the avatar to be able to track our face we will need three sparks: EyeContact, AVInput and FaceTracker.

AVInput links the user's webcam stream to the avatar; this stream is then taken by the FaceTracker, which, after some computing, sends instructions to the EyeContact spark, which in turn commands the 3D model to move.

So drag and drop these three sparks into the Linking Zone.

[Image: Sparks for face tracking]

Step 2: Link the Sparks

Now we are going to tell those sparks how to communicate through the interfaces. These are the four connections:

IEyes: Click on the EyeContact spark and then click its IEyes interface. This goes to the RemoteCharacter3D spark, so click that spark too, look for the matching interface and connect them.

INeck: Click on the EyeContact spark and then click its INeck interface. This also goes to the RemoteCharacter3D spark; click that spark, look for the matching interface and connect them.

IFlowImage: This interface takes the image stream from the user (via AVInput) and passes it to the FaceTrackerSpark so each frame can be processed.

IDetectedFacePosition: Finally, the detected face position is linked with the rest of the avatar. Connecting IDetectedFacePosition from the FaceTrackerSpark to EyeContact does the job.

[Image: Interface connections for face tracking]
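Putting the four connections together, the data flow can be sketched end to end. detect_face() below is a toy placeholder (the FaceTracker's actual algorithm is not documented here), and every name in the sketch is an assumption.

    # End-to-end sketch: AVInput frame -> FaceTracker -> EyeContact -> 3D model.
    def detect_face(frame):
        """IFlowImage in, IDetectedFacePosition out (toy: brightest pixel)."""
        y = max(range(len(frame)), key=lambda r: sum(frame[r]))
        x = max(range(len(frame[y])), key=lambda c: frame[y][c])
        return (x, y)

    def eye_contact(position, character):
        """EyeContact turns a face position into eye and neck commands."""
        x, y = position
        character.move_eyes(x, y)          # IEyes
        character.turn_neck(x)             # INeck

    class Character3D:
        def move_eyes(self, x, y): print(f"eyes -> ({x}, {y})")
        def turn_neck(self, x):    print(f"neck -> {x}")

    webcam_frame = [[0, 1, 0], [2, 9, 1], [0, 3, 0]]   # stand-in for AVInput
    eye_contact(detect_face(webcam_frame), Character3D())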

And... it's done. Just save and test the face tracking!
