Training A Neurocontrol For Talking Heads

Prof. Max H. Garzon
 

Division of Computer Science, The University of Memphis

January 21, 2003

 
Talking heads are anthropomorphic representations of software agents, used to facilitate interaction with human users. They have commonly been programmed and controlled through ontologies designed from intuitive and heuristic considerations that may have little to do with the application at hand, and so are probably not truly expressive, meaningful, or ergonomic for human users. In previous work we successfully trained a recurrent neurocontrol to generate facial expressions for a pedagogical agent (World Congress on Computational Intelligence, WCCI-02, Hawaii). Here we describe further training that generates arm gestures to complement the facial expressions, so that the agent's nonverbal behavior conveys meaningful emotional content to users during tutoring sessions in a particular domain (computer literacy), on a continuous scale of negative, neutral, and positive feedback. The neurocontrol is a recurrent neural network that autonomously synchronizes the movements of the lips, eyes, eyebrows, and arms to produce facial animations that are not only valid and meaningful to untrained human users, but also interface easily with the semantic processing modules of larger real-time agents, such as tutoring systems.
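To make the control scheme concrete, the following is a minimal illustrative sketch (not the network described in the talk) of a recurrent controller that maps a feedback value on a continuous scale from -1 (negative) through 0 (neutral) to +1 (positive) to synchronized activation trajectories for four feature groups. All weights, dimensions, and class/function names here are invented for illustration.

```python
import numpy as np

# Feature groups the controller drives; purely illustrative.
FEATURES = ["lips", "eyes", "eyebrows", "arms"]

class RecurrentNeurocontrol:
    """Toy recurrent network: one shared hidden state synchronizes
    the activation of all feature groups over time."""

    def __init__(self, hidden=8, seed=0):
        rng = np.random.default_rng(seed)
        self.W_in = rng.normal(0.0, 0.5, (hidden, 1))            # feedback -> hidden
        self.W_rec = rng.normal(0.0, 0.5, (hidden, hidden))      # hidden recurrence
        self.W_out = rng.normal(0.0, 0.5, (len(FEATURES), hidden))  # hidden -> features
        self.h = np.zeros(hidden)                                # recurrent state

    def step(self, feedback):
        """Advance one time step; returns one activation per feature group,
        each in [-1, 1] via tanh."""
        self.h = np.tanh(self.W_in @ np.array([feedback]) + self.W_rec @ self.h)
        return np.tanh(self.W_out @ self.h)

    def rollout(self, feedback, steps=30):
        """Hold a feedback level constant and unroll a trajectory."""
        return np.stack([self.step(feedback) for _ in range(steps)])

ctrl = RecurrentNeurocontrol()
traj = ctrl.rollout(feedback=+1.0)  # trajectory for positive feedback
print(traj.shape)  # (30, 4): 30 time steps x 4 feature groups
```

Because every feature group is driven by the same recurrent hidden state, their trajectories are coupled at each time step, which is one simple way to obtain the kind of autonomous synchronization across lips, eyes, eyebrows, and arms that the abstract describes.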

 

This page is maintained by Ho-Jin Chung (hjchung@bi.snu.ac.kr).
Last update: Jan. 21, 2003