Imagining and anticipating another speaker’s utterances in joint language tasks
There is substantial evidence that comprehenders predict language. In addition, dialogue partners appear to predict one another, as shown by well-timed turn-taking and by the fact that they can complete one another’s utterances. However, little is known about the mechanisms that (i) support the ability to form predictions of others’ utterances and (ii) allow such predictions to be integrated with representations of one’s own utterances. I propose that (predictive) representations of others’ utterances are computed within a cognitive architecture that makes use of mechanisms routinely employed in language production (i.e., for the representation of one’s own utterances). If this proposal is right, representing that another person is about to speak (and, possibly, representing what they are about to say) should affect the process of language production, as the two processes rely on overlapping mechanisms.

I test this hypothesis in a series of novel joint language tasks. Psycholinguistic tasks (picture naming and picture description) that have traditionally been used to study individual language production are distributed across two participants, who produce utterances either simultaneously or consecutively. In addition, solo versions of the same tasks (in which only one participant speaks, while the other remains silent) are tested. Speech onset latencies and utterance duration measures are compared between the solo and the joint tasks. In a first set of experiments on simultaneous production, I show that participants take longer to name pictures when they believe that their partner is concurrently naming pictures than when they believe their partner is silent or is concurrently categorizing the pictures as belonging to the same or to different semantic categories. Second, I show that participants find it harder to stop speaking when they know that their partner is about to speak.
These findings suggest that speakers are able to represent that another person is about to speak using some of the same mechanisms they use to produce language. However, in a third series of experiments, I show that participants do not routinely anticipate the content and timing of another person’s utterance in a way that affects their own concurrent production. In light of this evidence, I discuss the proposal that speakers use language production mechanisms to represent and anticipate their partner’s utterances and thereby support coordination in dialogue.