Show simple item record

dc.contributor.advisor  Keller, Frank
dc.contributor.advisor  Lapata, Maria
dc.contributor.author  Gella, Spandana
dc.date.accessioned  2019-07-08T12:22:41Z
dc.date.available  2019-07-08T12:22:41Z
dc.date.issued  2019-07-01
dc.identifier.uri  http://hdl.handle.net/1842/35702
dc.description.abstract  Every day billions of images are uploaded to the web. To process images at such a large scale it is important to build automatic image understanding systems. An important step towards understanding the content of an image is being able to identify all the objects, scenes and actions it depicts. Such systems should also integrate with natural language, so that humans can query and interact with them for tasks such as image retrieval. Verbs play a key role in the understanding of sentences and scenes: they express the semantics of an action as well as the interactions between the objects participating in an event. Understanding verbs is therefore central to both language and image understanding. However, verbs are known for their variability in meaning with context. Many studies in psychology have shown that contextual information plays an important role in semantic understanding and processing in the human visual system. We use this as intuition and investigate the role of textual and visual context in tasks that combine language and vision. The research presented in this thesis focuses on the problems of integrating visual and textual context for: (i) automatically identifying verbs that denote actions depicted in images; (ii) fine-grained analysis of how visual context can help disambiguate different meanings of verbs within a language or across languages; (iii) the role played by visual and multilingual context in learning representations that allow us to query information across modalities and languages.
First, we propose the task of visual sense disambiguation, an alternative way of addressing action recognition. Instead of identifying actions directly, we develop a two-step process: identifying the verb that denotes the action depicted in an image, and then disambiguating the meaning of that verb based on the visual and textual context associated with the image. We first build an image-verb classifier using the weak signal from image description data and analyse which image regions the model focuses on while predicting the verb. We then disambiguate the meaning of the verb shown in the image using image features and sense inventories. We test the hypothesis that the visual and textual context associated with the image contributes to the disambiguation task.
Second, we ask whether the predictions made by such models correspond to human intuitions about visual verbs or actions. We analyse whether the image regions a verb prediction model identifies as salient for a given verb correlate with the regions fixated by human observers performing an action classification task. We also compare the correlation of human fixations against visual saliency and centre bias models.
Third, we propose the cross-lingual verb disambiguation task: identifying the correct translation of a verb in a target language based on visual context. This task has the potential to resolve lexical ambiguity in machine translation when visual context is available. We propose a series of models and show that multimodal models that fuse textual information with visual features have an edge over text-only or visual-only models. We then demonstrate how visual sense disambiguation can be combined with lexically constrained decoding to improve the performance of a standard unimodal machine translation system on image descriptions.
Finally, we learn joint representations for images and text in multiple languages. We test the hypothesis that context provided as visual information or as text in another language contributes to better representation learning. We propose models that map text from multiple languages and images into a common space, and evaluate the usefulness of the second language in multimodal search and of the image in cross-lingual search. Our experiments suggest that exploiting multilingual and multimodal resources can help in learning better semantic representations that are useful for various multimodal natural language understanding tasks. Our experiments on visual sense disambiguation, sense disambiguation across languages, and multimodal and cross-lingual search demonstrate that visual context, alone or combined with textual context, is useful for enhancing multimodal and cross-lingual applications.
dc.language.iso  en
dc.publisher  The University of Edinburgh
dc.relation.hasversion  Gella, Spandana and Keller, Frank. An analysis of action recognition datasets for language and vision tasks. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, ACL 2017, Volume 2: Short Papers, pp. 64–71, 2017.
dc.relation.hasversion  Gella, Spandana, Lapata, Mirella, and Keller, Frank. Unsupervised visual sense disambiguation for verbs using multimodal embeddings. In NAACL HLT 2016, The 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, San Diego, California, USA, June 12-17, 2016, pp. 182–192, 2016.
dc.relation.hasversion  Gella, Spandana, Sennrich, Rico, Keller, Frank, and Lapata, Mirella. Image pivoting for learning multilingual multimodal representations. In Proceedings of the Conference on Empirical Methods in Natural Language Processing: Short Papers, pp. 2829–2835, Copenhagen, 2017.
dc.subject  visual and textual context
dc.subject  learning representations
dc.subject  artificial intelligence
dc.subject  verbs
dc.subject  visual sense disambiguation
dc.subject  image-verb classifier
dc.subject  predictions
dc.subject  image descriptions
dc.title  Visual context for verb sense disambiguation and multilingual representation learning
dc.type  Thesis or Dissertation
dc.type.qualificationlevel  Doctoral
dc.type.qualificationname  PhD Doctor of Philosophy
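
The abstract above describes a two-step approach to visual verb sense disambiguation: first predict the verb depicted in an image, then select the verb's sense using visual and textual context together with a sense inventory. The Python sketch below is only a minimal illustration of that pipeline under stated assumptions; the feature extractors, the toy sense inventory, the random classifier weights and the scoring function are placeholders, not the models or resources used in the thesis.

import numpy as np

rng = np.random.default_rng(0)

VERBS = ["play", "ride"]

# Hypothetical sense inventory (verb -> sense id -> gloss); a stand-in for a
# real sense inventory (e.g. dictionary or OntoNotes-style glosses).
SENSE_INVENTORY = {
    "play": {
        "play#1": "engage in a game or recreational sport",
        "play#2": "perform music on an instrument",
    },
    "ride": {
        "ride#1": "sit on and control an animal",
        "ride#2": "travel in or on a vehicle",
    },
}

def image_features(image):
    # Placeholder for a CNN image encoder; returns a fixed-size feature vector.
    return rng.normal(size=128)

def text_features(text):
    # Placeholder for a sentence encoder over captions or sense glosses.
    return rng.normal(size=128)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def predict_verb(img_feat, verb_weights):
    # Step 1: a linear image-to-verb classifier (trained from the weak signal
    # in image descriptions in the thesis; random weights stand in here).
    scores = verb_weights @ img_feat
    return VERBS[int(np.argmax(scores))]

def disambiguate(verb, img_feat, caption, proj):
    # Step 2: score each candidate sense against both the caption (textual
    # context) and the image (visual context) and keep the best-scoring sense.
    cap_vec = text_features(caption)
    img_in_text_space = proj @ img_feat   # stand-in for a learned multimodal mapping
    best_sense, best_score = None, float("-inf")
    for sense_id, gloss in SENSE_INVENTORY[verb].items():
        gloss_vec = text_features(gloss)
        score = cosine(cap_vec, gloss_vec) + cosine(img_in_text_space, gloss_vec)
        if score > best_score:
            best_sense, best_score = sense_id, score
    return best_sense

# Toy usage with random stand-ins for trained parameters.
verb_weights = rng.normal(size=(len(VERBS), 128))
proj = rng.normal(size=(128, 128))
feat = image_features("photo.jpg")
verb = predict_verb(feat, verb_weights)
print(verb, disambiguate(verb, feat, "a child playing a guitar on stage", proj))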
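
The abstract also describes mapping images and captions in multiple languages into a common space so that queries can cross modalities and languages. The following sketch, in the spirit of the image-pivoting paper listed under dc.relation.hasversion, trains such a space with a margin-based ranking loss; the linear encoders, the simplified gradient and all hyperparameters are illustrative assumptions rather than the thesis's architecture.

import numpy as np

rng = np.random.default_rng(1)
DIM, EMB = 64, 32          # raw feature size and shared-space size (arbitrary)
MARGIN, LR = 0.2, 0.01     # illustrative hyperparameters

# Stand-in linear encoders for images, English captions and German captions.
W_img = rng.normal(scale=0.1, size=(EMB, DIM))
W_en = rng.normal(scale=0.1, size=(EMB, DIM))
W_de = rng.normal(scale=0.1, size=(EMB, DIM))

def embed(W, x):
    z = W @ x
    return z / (np.linalg.norm(z) + 1e-8)   # unit-normalised shared-space vector

def ranking_loss_and_grad(W_a, a, W_b, b_pos, b_neg):
    # max(0, margin - sim(a, b+) + sim(a, b-)); the gradient below ignores the
    # normalisation term and only updates W_a, to keep the sketch short.
    ea, ep, en = embed(W_a, a), embed(W_b, b_pos), embed(W_b, b_neg)
    loss = max(0.0, MARGIN - float(ea @ ep) + float(ea @ en))
    grad = np.outer(en - ep, a) if loss > 0.0 else np.zeros_like(W_a)
    return loss, grad

# Toy "dataset": (image features, English caption features, German caption features).
data = [(rng.normal(size=DIM), rng.normal(size=DIM), rng.normal(size=DIM))
        for _ in range(20)]

for epoch in range(5):
    total = 0.0
    for img, cap_en, cap_de in data:
        _, neg_en, neg_de = data[rng.integers(len(data))]   # random negative sample
        # Pivot both languages through the image: align image<->English and
        # image<->German pairs in the shared space.
        for cap, neg, W_txt in ((cap_en, neg_en, W_en), (cap_de, neg_de, W_de)):
            loss, g = ranking_loss_and_grad(W_img, img, W_txt, cap, neg)
            W_img -= LR * g
            total += loss
    print(f"epoch {epoch}: total ranking loss {total:.3f}")

Because captions in both languages are pulled towards the same image vector, the image acts as a pivot that indirectly aligns the two languages even without parallel text, which is what makes the representations useful for both multimodal and cross-lingual search.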

