
dc.contributor.advisor: Goldwater, Sharon
dc.contributor.advisor: Johnson, Mark
dc.contributor.author: Jones, Bevan Keeley
dc.date.accessioned: 2016-07-20T14:11:58Z
dc.date.available: 2016-07-20T14:11:58Z
dc.date.issued: 2016-06-27
dc.identifier.uri: http://hdl.handle.net/1842/15959
dc.description.abstract: The cross-situational word learning paradigm argues that word meanings can be approximated by word-object associations, computed from co-occurrence statistics between words and entities in the world. Lexicon acquisition involves simultaneously guessing (1) which objects are being talked about (the "meaning") and (2) which words relate to those objects. However, most modeling work focuses on acquiring meanings for isolated words, largely neglecting relationships between words or physical entities, which can play an important role in learning. Semantic parsing, on the other hand, aims to learn a mapping between entire utterances and compositional meaning representations in which such relations are central. There, the focus is on the mapping between meaning and words, while utterance meanings are treated as observed quantities. Here, we extend the joint inference problem of word learning to account for compositional meanings by incorporating a semantic parsing model that relates utterances to non-linguistic context. Integrating semantic parsing and word learning permits us to explore the impact of word-word and concept-concept relations. The result is a joint-inference problem inherited from the word learning setting in which we must simultaneously learn utterance-level and individual word meanings, only now we must also contend with the many possible relationships between concepts in the meaning and words in the sentence. To simplify design, we factorize the model into separate modules, one each for the world, the meaning, and the words, and merge them into a single synchronous grammar for joint inference. There are three main contributions. First, we introduce a novel word learning model and accompanying semantic parser. Second, we produce a corpus which allows us to demonstrate the importance of structure in word learning. Finally, we present a number of technical innovations required for implementing such a model.
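As a rough illustration of the co-occurrence idea mentioned in the abstract (a toy sketch with invented situations and objects, not the joint model described in the thesis), word-object association scores can be read off raw counts of how often a word and an object appear in the same situation:

```python
from collections import defaultdict

# Toy cross-situational data: each "situation" pairs an utterance (word list)
# with the set of objects present in the non-linguistic context.
situations = [
    (["the", "dog", "chases", "the", "ball"], {"DOG", "BALL"}),
    (["a", "dog", "barks"],                   {"DOG"}),
    (["the", "ball", "rolls"],                {"BALL"}),
]

cooc = defaultdict(lambda: defaultdict(float))  # word -> object -> count
word_count = defaultdict(float)                 # word -> number of situations it occurs in

for words, objects in situations:
    for w in set(words):
        word_count[w] += 1.0
        for o in objects:
            cooc[w][o] += 1.0

# Estimate P(object | word) from the counts; the best-scoring object is a
# crude stand-in for the word's referential "meaning".
for w in sorted(cooc):
    best = max(cooc[w], key=lambda o: cooc[w][o])
    print(w, "->", best, round(cooc[w][best] / word_count[w], 2))
```

On this data, content words such as "dog" and "ball" associate most strongly with DOG and BALL, while function words like "the" split their counts across objects, which is the intuition the thesis builds on before adding compositional structure.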
dc.contributor.sponsor: other
dc.language.iso: en
dc.publisher: The University of Edinburgh
dc.relation.hasversion: Börschinger, B., Jones, B. K., and Johnson, M. (2011). Reducing grounded learning tasks to grammatical inference. In Proceedings of the Conference on Empirical Methods in Natural Language Processing.
dc.relation.hasversion: Jones, B. K., Johnson, M., and Frank, M. C. (2010). Learning words and their meanings from unsegmented child-directed speech. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 501–509.
dc.subject: word learning
dc.subject: semantic parsing
dc.subject: computational linguistics
dc.subject: computational modeling
dc.subject: graph grammar
dc.subject: frog stories
dc.subject: variational Bayes
dc.title: Learning words and syntactic cues in highly ambiguous contexts
dc.type: Thesis or Dissertation
dc.type.qualificationlevel: Doctoral
dc.type.qualificationname: PhD Doctor of Philosophy

