Emotions, actions, and poses can be used to interface with your bot's avatar.
An avatar is a collection of media files (images, video, audio) that have been tagged with emotions, actions, and poses. You can choose your bot's avatar from its Avatar page in its Admin Console. You can choose any existing avatar from our avatar library, or create your own.
You can tag your bot's responses with emotions, actions, and poses from your bot's "Training & Chat Logs" page in its Admin Console. When you tag a response with an emotion, your bot's emotional state will be influenced by that emotion, and the emotion will be reflected in its avatar if the avatar contains a media file tagged with that emotion. You can also tag a response with an action or a pose, and that will be reflected in the bot's avatar.
A bot's avatar can be either an image avatar or an animated video avatar. A bot's response can have three parts: the action, the talk animation, and the pose animation. For a video avatar, the action video is played first if the response was tagged with an action. The avatar can also have an audio file tagged with the action. For example, the action "laugh" could have an mp4 video of the avatar laughing, and a wav audio file of laughter.
The avatar's "talking" video is played next, in sync with the bot's speech. You can have multiple talking videos tagged with different emotions or poses, so the talking video can match the emotion or pose.
The pose video is played last, in a loop, until the bot's next response. If the response was tagged with a pose, the matching pose video from the avatar is played. If there is no pose, or no pose video available, the video tagged with the bot's current emotion is used. If the bot is not expressing an emotion, the avatar's default video (the video with no tags) is used.
When a pose tag is used, the bot will hold that pose until the next response that is also tagged with a pose. The "default" pose can be used to reset the bot's pose. A pose can also be tagged on a background audio file. For example, you could tag the response to "dance" with the "dancing" pose, and tag a webm video of dancing and an mp3 of dance music with "dancing" to make your bot dance.
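The playback order described above for a video avatar can be sketched in Python. This is only an illustrative sketch under assumed conventions: the avatar dictionary, its tag names, and the playback_sequence function are hypothetical, not the platform's actual API.

```python
def playback_sequence(avatar, action=None, pose=None, emotion=None):
    """Return the ordered media for one response of a video avatar:
    the action video (played once, if any), then a talking video
    matched to the emotion or pose, then the video that loops until
    the next response (pose first, else emotion, else default)."""
    sequence = []
    if action and action in avatar:
        sequence.append(avatar[action])  # action video plays first
    # talking video plays in sync with the bot's speech; prefer a
    # talking video tagged with the current emotion or pose
    talk = avatar.get("talking-" + (emotion or pose or "default"),
                      avatar["talking-default"])
    sequence.append(talk)
    # the final video loops: pose first, else current emotion, else default
    loop = avatar.get(pose) or avatar.get(emotion) or avatar["default"]
    sequence.append(loop)
    return sequence

# hypothetical avatar media files, keyed by tag
avatar = {
    "laugh": "laugh.mp4",
    "dancing": "dancing.webm",
    "talking-happy": "talk-happy.mp4",
    "talking-default": "talk.mp4",
    "happy": "happy.mp4",
    "default": "idle.mp4",
}

playback_sequence(avatar, action="laugh", emotion="happy")
# → ["laugh.mp4", "talk-happy.mp4", "happy.mp4"]
```

Note how an untagged response still produces a talking video and the default loop, matching the fallback rules above.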
For image avatars, each response results in a single image. The image first matches the action; if there is no action, then the pose, then the emotion; otherwise the avatar's default image is used.
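The image-selection fallback can be sketched the same way. Again, the image_avatar dictionary, its tag names, and the choose_image function are assumptions for illustration, not the platform's actual API.

```python
def choose_image(avatar, action=None, pose=None, emotion=None):
    """Pick the single image for a response: the action image first,
    then the pose, then the emotion, else the untagged default."""
    for tag in (action, pose, emotion):
        if tag and tag in avatar:
            return avatar[tag]
    return avatar["default"]

# hypothetical image avatar, keyed by tag
image_avatar = {
    "laugh": "laugh.png",
    "dancing": "dancing.png",
    "happy": "happy.png",
    "default": "neutral.png",
}

choose_image(image_avatar, action="laugh", emotion="happy")  # → "laugh.png"
choose_image(image_avatar, pose="dancing")                   # → "dancing.png"
choose_image(image_avatar)                                   # → "neutral.png"
```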
You can also use the "actions:", "emotions:", or "poses:" keywords in a response list file. For example:
Do you like me?
Yes, I like you a lot.
emotions: love
actions: smile
For more info on avatars see,