
dc.contributor.advisor	King, Simon
dc.contributor.advisor	Clark, Robert
dc.contributor.advisor	Yamagishi, Junichi
dc.contributor.author	Watts, Oliver Samuel
dc.date.accessioned	2013-10-22T14:32:59Z
dc.date.available	2013-10-22T14:32:59Z
dc.date.issued	2013-07-02
dc.identifier.uri	http://hdl.handle.net/1842/7982
dc.description.abstract	This thesis introduces a general method for incorporating the distributional analysis of textual and linguistic objects into text-to-speech (TTS) conversion systems. Conventional TTS conversion uses intermediate layers of representation to bridge the gap between text and speech. Collecting the annotated data needed to produce these intermediate layers is far from trivial, and possibly prohibitive for languages in which no such resources exist. Distributional analysis, in contrast, proceeds in an unsupervised manner, and so enables the creation of systems from unannotated textual data. The method therefore aids the building of systems for languages in which conventional linguistic resources are scarce, but is not restricted to those languages. The distributional analysis proposed here places the textual objects analysed in a continuous-valued space, rather than specifying a hard categorisation of those objects. This space is then partitioned during the training of acoustic models for synthesis, so that the models generalise over objects' surface forms in a way that is acoustically relevant. The method is applied to three levels of textual analysis: the characterisation of sub-syllabic units, word units and utterances. Entire systems for three languages (English, Finnish and Romanian) are built with no reliance on manually labelled data or language-specific expertise. Results of a subjective evaluation are presented.	en_US
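As an illustration of the distributional analysis the abstract describes, the sketch below derives continuous-valued word features from raw, unannotated text by factorising a co-occurrence matrix. This is a minimal, hypothetical example in the spirit of the vector space models named in the subject keywords, not the thesis's own pipeline; the toy corpus, window size and dimensionality are assumptions chosen for brevity.

# Minimal sketch (assumed, not the thesis's exact method): place words in a
# continuous-valued space by counting co-occurrences in unannotated text and
# reducing the counts with a truncated SVD. The resulting vectors generalise
# over surface forms without any hard categorisation such as POS tags.
import numpy as np
from collections import Counter

corpus = "the cat sat on the mat the dog sat on the rug".split()
vocab = sorted(set(corpus))
index = {w: i for i, w in enumerate(vocab)}

# Co-occurrence counts within a +/-1-word window (window size is illustrative).
counts = Counter()
for i, w in enumerate(corpus):
    for j in (i - 1, i + 1):
        if 0 <= j < len(corpus):
            counts[(index[w], index[corpus[j]])] += 1

M = np.zeros((len(vocab), len(vocab)))
for (r, c), n in counts.items():
    M[r, c] = n

# Truncated SVD: keep the top-k left singular vectors, scaled by the
# singular values, as continuous-valued word features.
U, S, _ = np.linalg.svd(M, full_matrices=False)
k = 2  # feature dimensionality, chosen arbitrarily for this toy corpus
features = U[:, :k] * S[:k]  # one row per word

for w in vocab:
    print(w, np.round(features[index[w]], 3))

In a synthesis system, features like these could then be queried by the decision-tree questions used in acoustic model training, letting the training procedure partition the continuous space in an acoustically relevant way.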
dc.contributor.sponsor	Engineering and Physical Sciences Research Council (EPSRC)	en_US
dc.language.iso	en	en_US
dc.publisher	The University of Edinburgh	en_US
dc.relation.hasversion	O. Watts, J. Yamagishi, and S. King. The role of higher-level linguistic features in HMM-based speech synthesis. In Proc. Interspeech, pages 841-844, Makuhari, Japan, Sept. 2010.	en_US
dc.relation.hasversion	O. Watts, J. Yamagishi, and S. King. Letter-based speech synthesis. In Proc. Speech Synthesis Workshop 2010, pages 317-322, Nara, Japan, Sept. 2010.	en_US
dc.relation.hasversion	O. Watts, J. Yamagishi, and S. King. Unsupervised continuous-valued word features for phrase-break prediction without a part-of-speech tagger. In Proc. Interspeech, Florence, Italy, Aug. 2011.	en_US
dc.relation.hasversion	J. Yamagishi and O. Watts. The CSTR/EMIME HTS System for Blizzard Challenge. In Proc. Blizzard Challenge 2010, Sept. 2010.	en_US
dc.subject	unsupervised learning	en_US
dc.subject	vector space model	en_US
dc.subject	speech synthesis	en_US
dc.subject	TTS	en_US
dc.subject	text-to-speech	en_US
dc.title	Unsupervised learning for text-to-speech synthesis	en_US
dc.type	Thesis or Dissertation	en_US
dc.type.qualificationlevel	Doctoral	en_US
dc.type.qualificationname	PhD Doctor of Philosophy	en_US

