
In Proc. Interspeech, pages 1829-1832, Brisbane, Australia, September 2008.

dc.contributor.author: Renals, Steve
dc.contributor.author: Yamagishi, Junichi
dc.contributor.author: Richmond, Korin
dc.contributor.author: Cabral, Joao P
dc.date.accessioned: 2010-10-05T09:35:17Z
dc.date.available: 2010-10-05T09:35:17Z
dc.date.issued: 2008
dc.identifier.uri: http://hdl.handle.net/1842/3832
dc.description.abstract: This paper presents a method to control the characteristics of synthetic speech flexibly by integrating articulatory features into a Hidden Markov Model (HMM)-based parametric speech synthesis system. In contrast to model adaptation and interpolation approaches for speaking style control, this method is driven by phonetic knowledge, and target speech samples are not required. The joint distribution of parallel acoustic and articulatory features considering cross-stream feature dependency is estimated. At synthesis time, acoustic and articulatory features are generated simultaneously based on the maximum-likelihood criterion. The synthetic speech can be controlled flexibly by modifying the generated articulatory features according to arbitrary phonetic rules in the parameter generation process. Our experiments show that the proposed method is effective both in changing the overall character of synthesized speech and in controlling the quality of a specific vowel.
dc.title: Glottal Spectral Separation for Parametric Speech Synthesis
dc.type: Conference Paper
rps.title: In Proc. Interspeech, pages 1829-1832, Brisbane, Australia, September 2008.
dc.date.updated: 2010-10-05T09:35:17Z
dc.date.openingDate: 2008-09

