

Files in This Item:

File: elitist-final-specom.pdf (375.47 kB, Adobe PDF)
Title: An elitist approach to automatic articulatory-acoustic feature classification for phonetic characterization of spoken language.
Authors: Chang, Shuangyu
Wester, Mirjam
Greenberg, Steven
Issue Date: 2005
Citation: Speech Communication, 47:290-311, 2005.
Publisher: Elsevier
Abstract: A novel framework for automatic articulatory-acoustic feature extraction has been developed for enhancing the accuracy of place- and manner-of-articulation classification in spoken language. The elitist approach provides a principled means of selecting frames for which multi-layer perceptron (MLP) neural-network classifiers are highly confident. Using this method it is possible to achieve a frame-level accuracy of 93% on elitist frames for manner classification on a corpus of American English sentences passed through a telephone network (NTIMIT). Place-of-articulation information is extracted for each manner class independently, resulting in an appreciable gain in place-feature classification relative to performance for a manner-independent system. A comparable enhancement in classification performance for the elitist approach is evidenced when applied to a Dutch corpus of quasi-spontaneous telephone interactions (VIOS). The elitist framework provides a potential means of automatically annotating a corpus at the phonetic level without recourse to a word-level transcript, and could thus be of utility for developing training materials for automatic speech recognition and speech synthesis applications, as well as aiding the empirical study of spoken language.
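The confidence-based frame selection described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, the 0.9 threshold, and the toy posteriors are all assumptions introduced here for clarity.

```python
import numpy as np

def elitist_frames(posteriors, threshold=0.9):
    """Select 'elitist' frames: those where the classifier's top
    posterior probability meets a confidence threshold.

    posteriors: (n_frames, n_classes) array of per-frame class
    posteriors, e.g. MLP outputs over manner-of-articulation classes.
    Returns (kept_frame_indices, predicted_class_labels).
    Note: the 0.9 threshold is illustrative, not from the paper.
    """
    confidence = posteriors.max(axis=1)          # top posterior per frame
    keep = np.where(confidence >= threshold)[0]  # confident frames only
    return keep, posteriors[keep].argmax(axis=1)

# Toy example: 4 frames, 3 hypothetical manner classes
p = np.array([[0.95, 0.03, 0.02],   # confident -> kept
              [0.40, 0.35, 0.25],   # ambiguous -> dropped
              [0.10, 0.88, 0.02],   # just below threshold -> dropped
              [0.01, 0.01, 0.98]])  # confident -> kept
idx, labels = elitist_frames(p, threshold=0.9)
```

Only the retained high-confidence frames are then used for the downstream, manner-specific place classification; the ambiguous frames are simply excluded from that stage.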
Keywords: Speech
Articulatory features
Speech analysis
Multilingual phonetic classification
Automatic phonetic classification
Appears in Collections:CSTR publications

Items in ERA are protected by copyright, with all rights reserved, unless otherwise indicated.

