Workshops and tutorials will be held on April 22 and April 26.
The submission deadline for workshop proposals has passed.
The following workshops will take place at FG 2013:
1. EmoSPACE 2013 - the 2nd Int'l Workshop on Emotion Representation, Analysis and Synthesis in Continuous Time and Space
Time: 8:30 am – 4:00 pm, April 26, 2013 (Friday)
The key aim of EmoSPACE’13, the second workshop in the series, is to present cutting-edge research and new challenges in the automatic and continuous analysis and synthesis of human affective and social behaviour in time and/or space, in an interdisciplinary forum of affective and behavioural scientists. More specifically, the workshop aims (i) to bring forth existing efforts and major accomplishments in the modelling, analysis and synthesis of affective and social behaviour in continuous time and/or space, (ii) to encourage the design of novel applications in contexts as diverse as human-computer and human-robot interaction, clinical and biomedical studies, learning and driving environments, and entertainment technology, and (iii) to focus on current trends and future directions in the field.
2. Vision(s) on Deception and Non-cooperation
Time: 9:00 am – 3:15 pm, April 26, 2013 (Friday)
Natural behavior includes deceptive and non-cooperative behavior, and there are many applications in which detecting and generating such behavior is useful, particularly in smart environments inhabited by tangibles, social robots and virtual humans. Domains for research on detecting and generating deceptive and non-cooperative nonverbal behavior include: (1) understanding and processing face-to-face communication and multi-party conversations, (2) understanding human behavior in natural (sensor-equipped) physical environments, (3) educational and training environments that aim at behavioral change, and (4) play, games and sports. The focus of the workshop is on detection, with computer vision as the starting point. It is well known, however, that there are no unimodal cues from which deception can be established reliably; for that reason there is particular interest in computer vision integrated into a multimodal approach. Given the complexity of the field, we are also interested in model-based attempts to generate deceptive and non-cooperative behavior.
3. 3D Face Biometrics
Time: 8:45 am – 12:45 pm, April 22, 2013 (Monday)
Advances over the last decades have made high-quality acquisition of 3D faces a reality. 3D scanning devices are now available not only as high-resolution scanners that acquire registered texture and range data from static environments, but also as 4D scanning devices that acquire range data over time, capturing the dynamics of 3D faces. This workshop focuses on 3D face analysis and recognition, and is particularly aimed at exploring ways to extract and effectively exploit both static and dynamic facial features for recognition. Areas of coverage include, but are not limited to: 3D face detection, analysis, and recognition; gender, ethnicity and age classification from 3D data; analysis of facial expressions; multimodal 2D/3D face recognition; and super-resolution facial models.
The submission deadline for tutorial proposals has passed.
Understanding human actions with 2D and 3D sensors
Time: 9:00 am – 12:20 pm, April 22, 2013 (Monday)
Zicheng Liu, Microsoft Research Redmond, USA, email@example.com
Junsong Yuan, Nanyang Technological University, Singapore, firstname.lastname@example.org
This tutorial aims to provide a systematic review of existing work in action understanding and an in-depth treatment of recently developed state-of-the-art approaches for the face and gesture community, and to help foster a larger research community and stimulate greater research effort around the topic. Case studies will show how action understanding techniques can be applied in human-computer interaction, multimedia information retrieval, intelligent video surveillance, etc., and new research directions will be discussed.
9:00 am – 10:30 am – Part 1: Understanding Human Actions in Videos
10:30 am – 10:50 am – Break
10:50 am – 12:20 pm – Part 2: Understanding Human Actions with 3D Sensors