Show simple item record

dc.contributor.author	Livescu, Karen
dc.contributor.author	Bezman, Ari
dc.contributor.author	Borges, Nash
dc.contributor.author	Yung, Lisa
dc.contributor.author	Çetin, Ozgur
dc.contributor.author	Frankel, Joe
dc.contributor.author	King, Simon
dc.contributor.author	Magimai-Doss, Mathew
dc.contributor.author	Chi, Xuemin
dc.contributor.author	Lavoie, Lisa
dc.date.accessioned	2007-09-18T10:01:33Z
dc.date.available	2007-09-18T10:01:33Z
dc.date.issued	2007
dc.identifier.citation	K. Livescu, A. Bezman, N. Borges, L. Yung, O. Çetin, J. Frankel, S. King, M. Magimai-Doss, X. Chi, and L. Lavoie. Manual transcription of conversational speech at the articulatory feature level. In Proc. ICASSP, Honolulu, April 2007.	en
dc.identifier.uri	http://hdl.handle.net/1842/1997
dc.description.abstract	Although much is known about how speech is produced, and research into speech production has resulted in measured articulatory data, feature systems of different kinds and numerous models, speech production knowledge is almost totally ignored in current mainstream approaches to automatic speech recognition. Representations of speech production allow simple explanations for many phenomena observed in speech which cannot be easily analyzed from either acoustic signal or phonetic transcription alone. In this article, we provide a survey of a growing body of work in which such representations are used to improve automatic speech recognition.	en
dc.format.extent	192193 bytes
dc.format.mimetype	application/pdf
dc.language.iso	en	en
dc.subject	speech technology	en
dc.title	Manual transcription of conversational speech at the articulatory feature level	en
dc.type	Conference Paper	en


