Title: Hearing by seeing: can improving the visibility of the speaker's lips make you hear better?

Speaker: Najwa AlGhamdi, PhD Candidate at the University of Sheffield (Lecturer in the Information Technology Department, CCIS, KSU)

Synopsis:

The intelligibility of visual speech can be affected by a number of facial visual signals, e.g. lip emphasis, teeth and tongue visibility, and facial hair. Our research focuses on improving the lip visibility of a speaker in videos used for auditory training and studies the effect of this improvement on the training gain. The seminar will highlight an experiment in which we used spectrally-distorted speech, i.e. cochlear-implant-simulated speech, to train groups of non-native, English-speaking Saudi listeners using three different forms of speech: audio-only, audiovisual, and enhanced audiovisual, the last achieved by artificially colouring the lips of the speaker to improve lip visibility. We use spectrally-distorted speech because the longer-term aim of our work is to employ these ideas in a training system for hearing-impaired users, in particular cochlear-implant users. Our initial work uses non-native Saudi listeners on the assumption that their reduced processing abilities for native speech are comparable to the reduced processing abilities of cochlear-implant users, which result from the inherent noise in a cochlear implant's processing of sound. The results suggest that using enhanced audiovisual speech during auditory training improves the training gain when subsequently listening to audio-only spectrally-distorted speech. The results also suggest that the intelligibility of spectrally-distorted speech during training is improved when an enhanced visual signal is used.

 

Mrs. Najwa Alghamdi:

Is a lecturer in the Information Technology Department at King Saud University. She is currently a research student in the Virtual Reality, Graphics and Simulation Lab in the Department of Computer Science at the University of Sheffield. Her research aims to enhance the visual speech used in auditory training for cochlear implant (CI) users by investigating techniques to artificially alter the speaker's visual appearance. Her research interests include computer vision and visual speech processing.

Date/Time: Tuesday, April 5, 2016 at 12:00pm

Location: Khadija Auditorium, F49 in Building 6 - Broadcast to Room 2090 in CCIS Building 31