From Joyous to Clinically Depressed: Mood detection using multi-modal analysis of a person's appearance and speech [Affective Computing]
Presenter: Sharifa Alghowinem, PhD Student at the Australian National University
Abstract: Depression is listed as the fourth most significant cause of suffering and disability worldwide, and it is predicted to become the leading cause by 2020. Fortunately, much of this burden could be prevented if health professionals were provided with suitable technology for detecting and diagnosing depression.
The goal of this research is to implement an objective affective sensing system that supports clinicians in their diagnosis of clinical depression from a person's body language and speech. In the long term, such a system may also become a useful tool for remote depression monitoring in doctor-patient communication within an e-health infrastructure. Since the body language and voice channels are complementary rather than redundant, fusing these multimodal channels of emotion should improve depression detection. The methodology and current results of this research will be presented.
Bio: Sharifa Alghowinem is a PhD student at the Australian National University, Computer Science Research School; her thesis is in the area of Affective Computing. She received her MSc in Software Engineering from the University of Canberra, and her BSc in Computer Applications from King Saud University.
From 2004 to 2009, she worked as a computer science teacher and trainer, and since 2000 she has developed several software projects and websites. In 2011, she worked as a lecturer at the University of Canberra, teaching the Soft Computing unit. Her research interests include speech processing, computer vision, affective computing, and machine learning.
Date: April 1, 2013
Time: 12:00 - 1:00 pm
Location: Research Center Auditorium, Building 2, Malaz Campus