
Edinburgh Research Archive > Centre for Speech Technology Research > CSTR publications


Files in This Item:

File: siggraph07.pdf (455.22 kB, Adobe PDF)
Title: Speech-driven head motion synthesis based on a trajectory model.
Authors: Hofer, Gregor
Shimodaira, Hiroshi
Yamagishi, Junichi
Issue Date: 2007
Citation: Gregor Hofer, Hiroshi Shimodaira, and Junichi Yamagishi. Speech-driven head motion synthesis based on a trajectory model. Poster at Siggraph 2007, 2007.
Abstract: Making human-like characters more natural and life-like requires more inventive approaches than current standard techniques such as synthesis using text features or triggers. In this poster we present a novel approach to automatically synthesise head motion from speech features. Previous work has focused on frame-wise modelling of motion [Busso et al. 2007] or has treated the speech and motion data streams separately [Brand 1999], although the trajectories of head motion and speech features are highly correlated and change dynamically over several frames. To model longer units of motion and speech and to reproduce their trajectories during synthesis, we utilise a promising time series stochastic model called the "Trajectory Hidden Markov Model" [Zen et al. 2007]. Its parameter generation algorithm can produce motion trajectories from sequences of units of motion and speech. These two kinds of data are modelled simultaneously using a multistream variant of the trajectory HMM. The model can be viewed as a Kalman-smoother-like approach, and is thereby capable of producing smooth trajectories.
Keywords: speech technology
Appears in Collections: CSTR publications

Items in ERA are protected by copyright, with all rights reserved, unless otherwise indicated.

