Thousands of Voices for HMM-Based Speech Synthesis: Analysis and Application of TTS Systems Built on Various ASR Corpora
IEEE Transactions On Audio Speech and Language Processing
In conventional speech synthesis, large amounts of phonetically balanced speech data recorded in highly controlled studio environments are typically required to build a voice. Although using such data is a straightforward route to high-quality synthesis, the number of voices available will always be limited, because recording costs are high. On the other hand, our recent experiments with HMM-based speech synthesis systems have demonstrated that speaker-adaptive HMM-based speech synthesis (which uses an "average voice model" plus model adaptation) is robust to non-ideal speech data: data recorded under various conditions and with varying microphones, data that are not perfectly clean, and/or data that lack phonetic balance. This enables us to consider building high-quality voices on "non-TTS" corpora such as ASR corpora. Since ASR corpora generally include a large number of speakers, this opens up the possibility of producing an enormous number of voices automatically. In this paper, we demonstrate the thousands of voices for HMM-based speech synthesis that we have built from several popular ASR corpora, including the Wall Street Journal (WSJ0, WSJ1, and WSJCAM0), Resource Management, GlobalPhone, and SPEECON databases. We also present the results of associated analysis based on perceptual evaluation, and discuss remaining issues.
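The core idea of the abstract's "average voice model plus model adaptation" approach can be illustrated with a toy sketch: a shared linear transform, estimated in the spirit of MLLR from a small amount of target-speaker data, maps the average-voice model parameters toward the target speaker, including parameters for states the adaptation data never covered. The 1-D state means, function names, and data below are illustrative assumptions, not the paper's actual system.

```python
# Toy sketch of average-voice adaptation: per-state Gaussian means from
# an "average voice model" are mapped toward a target speaker by a single
# shared affine transform (a, b), fitted by least squares on the states
# the adaptation data actually covers. Real MLLR operates on full
# mean vectors with matrix transforms; this 1-D version is only a sketch.

def estimate_affine(avg_means, target_means):
    """Least-squares fit of target = a * avg + b over observed states."""
    n = len(avg_means)
    mx = sum(avg_means) / n
    my = sum(target_means) / n
    sxx = sum((x - mx) ** 2 for x in avg_means)
    sxy = sum((x - mx) * (y - my) for x, y in zip(avg_means, target_means))
    a = sxy / sxx
    b = my - a * mx
    return a, b

def adapt(avg_means, a, b):
    """Apply the shared transform to every state mean, even unseen ones."""
    return [a * m + b for m in avg_means]

# Average-voice means for five HMM states; adaptation data from the
# target speaker covers only the first three states.
avg = [1.0, 2.0, 3.0, 4.0, 5.0]
seen_target = [2.1, 4.0, 6.1]  # roughly target = 2 * avg

a, b = estimate_affine(avg[:3], seen_target)
adapted = adapt(avg, a, b)
```

Because the transform is shared across states, a few adaptation utterances are enough to move the whole model toward the target speaker, which is what makes small, non-ideal ASR recordings usable for building a voice.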
Showing items related by title, author, creator and subject.
Scenario-based approach to speech-enabled computer-assisted language learning based on automated speech recognition and virtual reality graphics. Morton, Hazel (The University of Edinburgh, 2007)
Speaker adaptation and the evaluation of speaker similarity in the EMIME speech-to-speech translation project. Wester, Mirjam; Dines, John; Gibson, Matthew; Liang, Hui; Wu, Yi-Jian; Saheer, Lakshmi; King, Simon; Oura, Keiichiro; Garner, Philip N.; Byrne, William; Guan, Yong; Hirsimäki, Teemu; Karhila, Reima; Kurimo, Mikko; Shannon, Matt; Shiota, Sayaka; Tian, Jilei; Tokuda, Keiichi; Yamagishi, Junichi (7th ISCA Speech Synthesis Workshop, 2010-09) This paper provides an overview of speaker adaptation research carried out in the EMIME speech-to-speech translation (S2ST) project. We focus on how speaker adaptation transforms can be learned from speech in one language ...
Andersson, Sebastian; Yamagishi, Junichi; Clark, Robert (2010) Spontaneous conversational speech has many characteristics that are currently not well modelled in unit selection and HMM-based speech synthesis. But in order to build synthetic voices more suitable for interaction we need ...