Springer, 2016. — 301 p.
This is the first book to describe how autonomous virtual humans and social robots can interact with real people, be aware of them and of their environment, and react to various situations. The book explains the main techniques for tracking and analysing humans and their behaviour, including facial expressions, body and hand gestures, and sound localization. It explains how a virtual human or social robot should react at the right time based on its perception of the real participants, and it describes how to generate socially interactive behaviour for virtual characters and social robots using the same modalities humans do: speech, body gestures, facial expressions, and gaze. The book also discusses how a virtual human or a social robot can replace a real human in a remote location.
Part I User Understanding Through Multisensory Perception
Face and Facial Expressions Recognition and Analysis
Body Movement Analysis and Recognition
Sound Source Localization and Tracking
Part II Facial and Body Modeling Animation
Modeling Conversation
Personalized Body Modeling
Parameterized Facial Modeling and Animation
Motion-Based Learning
Responsive Motion Generation
Shared Object Manipulation
Part III Modeling Human Behaviours
Modeling Personality, Mood, and Emotions
Motion Control for Social Behaviors
Multiple Virtual Human Interactions
Multimodal and Multi-party Social Interactions