Show simple item record

Journal of Natural Language Engineering

dc.contributor.author: Georgila, Kallirroi
dc.contributor.author: Lemon, Oliver
dc.contributor.author: Henderson, James
dc.contributor.author: Moore, Johanna D.
dc.date.accessioned: 2010-10-28T10:18:46Z
dc.date.available: 2010-10-28T10:18:46Z
dc.date.issued: 2009-06
dc.identifier.issn: 1351-3249
dc.identifier.uri: http://journals.cambridge.org/action/displayFulltext?type=1&fid=5654472&jid=NLE&volumeId=15&issueId=03&aid=5654464&bodyId=&membershipNumber=&societyETOCSession=
dc.identifier.uri: http://hdl.handle.net/1842/4098
dc.description.abstract: Richly annotated dialogue corpora are essential for new research directions in statistical learning approaches to dialogue management, context-sensitive interpretation, and context-sensitive speech recognition. In particular, large dialogue corpora annotated with contextual information and speech acts are urgently required. We explore how existing dialogue corpora (usually consisting of utterance transcriptions) can be automatically processed to yield new corpora where dialogue context and speech acts are accurately represented. We present a conceptual and computational framework for generating such corpora. As an example, we present and evaluate an automatic annotation system which builds ‘Information State Update’ (ISU) representations of dialogue context for the Communicator (2000 and 2001) corpora of human–machine dialogues (2,331 dialogues). The purposes of this annotation are to generate corpora for reinforcement learning of dialogue policies, for building user simulations, for evaluating different dialogue strategies against a baseline, and for training models for context-dependent interpretation and speech recognition. The automatic annotation system parses system and user utterances into speech acts and builds up sequences of dialogue context representations using an ISU dialogue manager. We present the architecture of the automatic annotation system and a detailed example to illustrate how the system components interact to produce the annotations. We also evaluate the annotations, with respect to the task completion metrics of the original corpus and in comparison to hand-annotated data and annotations produced by a baseline automatic system. The automatic annotations perform well and largely outperform the baseline automatic annotations in all measures. The resulting annotated corpus has been used to train high-quality user simulations and to learn successful dialogue strategies. The final corpus will be made publicly available.
dc.language.iso: en
dc.publisher: Cambridge University Press
dc.title: Automatic Annotation of Context and Speech Acts for Dialogue Corpora.
dc.type: Article
dc.identifier.doi: 10.1017/S1351324909005105
rps.issue: 3
rps.volume: 15
rps.title: Journal of Natural Language Engineering
dc.extent.pageNumbers: 315-353
dc.date.updated: 2010-10-28T10:18:47Z
dc.identifier.eIssn: 1469-8110
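
The abstract describes the annotation pipeline at a high level: each transcribed utterance is parsed into a speech act, and an Information State Update (ISU) dialogue manager extends the running context representation turn by turn. Below is a minimal illustrative sketch of that general idea in Python; it is not the paper's implementation, and the state fields, the toy rule-based speech-act classifier, and the sample dialogue are all hypothetical.

    from dataclasses import dataclass, field

    # Hypothetical speech-act labels and information-state fields, for illustration only.
    @dataclass
    class InformationState:
        turn: int = 0
        speaker: str = ""
        speech_act: str = ""
        history: list = field(default_factory=list)

    def classify_speech_act(utterance: str) -> str:
        """Toy rule-based classifier standing in for the speech-act parsing step."""
        text = utterance.lower()
        if "?" in text or text.startswith(("what", "where", "when")):
            return "request_info"
        if any(word in text for word in ("yes", "no", "correct")):
            return "confirm"
        return "provide_info"

    def update(state: InformationState, speaker: str, utterance: str) -> InformationState:
        """One information-state update: label the utterance and extend the context history."""
        act = classify_speech_act(utterance)
        return InformationState(
            turn=state.turn + 1,
            speaker=speaker,
            speech_act=act,
            history=state.history + [(speaker, act, utterance)],
        )

    # Annotate a transcribed dialogue as a sequence of context representations.
    dialogue = [
        ("system", "Where would you like to fly to?"),
        ("user", "I want to fly to Boston."),
        ("system", "Did you say Boston?"),
        ("user", "Yes."),
    ]
    state = InformationState()
    annotations = []
    for speaker, utterance in dialogue:
        state = update(state, speaker, utterance)
        annotations.append(state)

    for s in annotations:
        print(s.turn, s.speaker, s.speech_act)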

