
dc.contributor.author: Hofer, Gregor
dc.contributor.author: Shimodaira, Hiroshi
dc.contributor.author: Yamagishi, Junichi
dc.date.accessioned: 2007-09-19T12:43:03Z
dc.date.available: 2007-09-19T12:43:03Z
dc.date.issued: 2007
dc.identifier.citation: Gregor Hofer, Hiroshi Shimodaira, and Junichi Yamagishi. Lip motion synthesis using a context dependent trajectory hidden Markov model. Poster at SCA 2007, 2007.
dc.identifier.uri: http://hdl.handle.net/1842/2008
dc.description.abstract: Lip synchronisation is essential to make character animation believable. In this poster we present a novel technique to automatically synthesise lip motion trajectories given some text and speech. Our work distinguishes itself from other work by not using visemes (visual counterparts of phonemes). The lip motion trajectories are directly modelled using a time series stochastic model called the "Trajectory Hidden Markov Model". Its parameter generation algorithm can produce motion trajectories that are used to drive control points on the lips directly.
dc.format.extent: 100434 bytes
dc.format.mimetype: application/pdf
dc.language.iso: en
dc.subject: speech technology
dc.title: Lip motion synthesis using a context dependent trajectory hidden Markov model
dc.type: Conference Paper
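
Note: the abstract above mentions the parameter generation algorithm of the trajectory HMM but gives no equations. As a minimal sketch, assuming the standard maximum-likelihood parameter generation used with trajectory HMMs (the symbols c, W, q, \mu_q and \Sigma_q are introduced here for illustration and are not taken from the poster): let c be the stacked static trajectory of lip control-point coordinates, let W be the window matrix that appends delta (dynamic) features so that the full observation sequence is o = Wc, and let \mu_q and \Sigma_q be the mean vector and covariance of the Gaussians along an HMM state sequence q. The most likely trajectory is then the solution of a linear system:

    \bar{c} = \arg\max_{c} \, p(Wc \mid q, \lambda)
            = \left( W^{\top} \Sigma_q^{-1} W \right)^{-1} W^{\top} \Sigma_q^{-1} \mu_q

Because W couples neighbouring frames through the delta windows, \bar{c} is a smooth trajectory rather than a sequence of per-state means, which is what allows it to drive the lip control points directly.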

