Edinburgh Research Archive — The University of Edinburgh

Please use this identifier to cite or link to this item: http://hdl.handle.net/1842/949


Files in This Item:

File: sap04-xtalk.pdf (195.66 kB, Adobe PDF)
Title: Speech and crosstalk detection in multi-channel audio
Authors: Wrigley, Stuart N
Brown, Guy J
Wan, Vincent
Renals, Steve
Issue Date: 2005
Citation: IEEE Trans. on Speech and Audio Processing, 13:84-91, 2005.
Publisher: IEEE Signal Processing Society Press
Abstract: The analysis of scenarios in which a number of microphones record the activity of speakers, such as in a roundtable meeting, presents a number of computational challenges. For example, if each participant wears a microphone, it can receive speech both from the microphone's wearer (local speech) and from other participants (crosstalk). The recorded audio can be broadly classified into four categories: local speech, crosstalk plus local speech, crosstalk alone, and silence. We describe two experiments related to the automatic classification of audio into these four classes. The first experiment attempted to optimise a set of acoustic features for use with a Gaussian mixture model (GMM) classifier. A large set of potential acoustic features was considered, some of which have been employed in previous studies. The best-performing features were found to be kurtosis, fundamentalness and cross-correlation metrics. The second experiment used these features to train an ergodic hidden Markov model classifier. Tests performed on a large corpus of recorded meetings show classification accuracies of up to 96%, and automatic speech recognition performance close to that obtained using ground truth segmentation.
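The kurtosis feature the abstract highlights can be illustrated with a short sketch (synthetic frame data; this is not the authors' code). The intuition is that audio from a single close talker is sparse and heavy-tailed (strongly super-Gaussian), while a sum of several overlapping distant talkers tends toward Gaussian statistics, so sample excess kurtosis helps separate local speech from crosstalk:

```python
import random

def kurtosis(frame):
    """Sample excess kurtosis of an audio frame.
    Heavy-tailed (speech-like) frames score well above 0;
    Gaussian-like mixtures of many talkers score near 0."""
    n = len(frame)
    mean = sum(frame) / n
    var = sum((x - mean) ** 2 for x in frame) / n
    if var == 0:
        return 0.0
    m4 = sum((x - mean) ** 4 for x in frame) / n
    return m4 / (var ** 2) - 3.0

random.seed(0)
# Gaussian noise stands in for many overlapped distant sources.
noise = [random.gauss(0, 1) for _ in range(20000)]
# A sparse, spiky signal stands in for a single close-talking voice
# (active only ~10% of the time, hence strongly super-Gaussian).
sparse = [random.gauss(0, 1) * (random.random() < 0.1) for _ in range(20000)]

k_noise = kurtosis(noise)    # close to 0
k_sparse = kurtosis(sparse)  # large and positive
print(round(k_noise, 2), round(k_sparse, 2))
```

In a full system along the lines the abstract describes, per-frame features like this would be fed to a GMM or ergodic HMM classifier over the four classes; the thresholding here is only illustrative.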
Keywords: speech
Gaussian mixture model
Markov model
URI: http://hdl.handle.net/1842/949
Appears in Collections:CSTR publications

Items in ERA are protected by copyright, with all rights reserved, unless otherwise indicated.

Unless explicitly stated otherwise, all material is copyright © The University of Edinburgh 2013, and/or the original authors.