How speaking and listening relate to each other in a dialogue: An electrophysiological study
Dialogue is a common form of everyday language use. As one individual expresses their thoughts through speech, the other listens to that output for comprehension. Throughout the dialogue, this pattern reverses continually. Thus, every speaker is also a listener. Yet psycholinguistic models typically focus on monologue situations only (e.g., Levelt, 1989) or, when they address dialogue (Pickering & Garrod, 2004), treat the speaker and the listener as separate individuals (e.g., Yoon & Brown-Schmidt, 2013). This project focuses on the fact that the same individual acts both as a speaker and a listener in dialogue, and examines the relationship between production and comprehension within a single individual. The Interactive Alignment account (Pickering & Garrod, 2004) states that dialogue is a joint activity in which interlocutors align their mental representations. The account assumes that linguistic representations are shared across production and comprehension, which facilitates alignment across speakers: as one is engaged in production, the other is engaged in comprehension, and the situation reverses on each dialogue turn. Considerable research has addressed this alignment across speakers. However, the account remains silent on an implicit corollary: that production and comprehension also become more aligned within an individual (who is both a speaker and a listener). Here, I will address the novel theoretical question of how speaking and listening relate to each other within the same individual in a dialogue. I will compare how monologue and dialogue contexts influence an individual's comprehension and production using cross-modal syntactic priming (Bock, 1986). In priming, the processing of a target item is facilitated by having recently encountered a similar item. Priming occurs for syntactic structures with multiple options, such as describing an event using either the active (The girl throws the ball) or passive (The ball is thrown by the girl) voice.
In the production-to-comprehension task, participants will produce a prime sentence and then hear a target sentence. Comprehension is measured using event-related potentials to obtain a fine-grained measure of neural processing. In the comprehension-to-production task, participants will hear a prime sentence and then produce a target sentence. Production is measured in three ways: syntactic choice (active or passive), response time, and average syllable duration (Quené, 2008). English monolinguals will perform the task alone (monologue) or interact with a confederate (dialogue) whose utterances are pre-recorded but who will appear to speak in real time. Data collection is ongoing, and preliminary results will be reported. If production and comprehension are aligned within an individual, I expect significant cross-modal priming in both monologue and dialogue. For production-to-comprehension, primed sentences, as compared to unprimed ones, should elicit a reduced P600 effect (Tooley et al., 2009). For comprehension-to-production, primed sentences should result in syntactic choices aligning with the prime type, faster response times, and shorter average syllable durations. The inter-individual alignment in dialogue, as proposed by the Interactive Alignment model, should facilitate within-individual alignment between production and comprehension; therefore, stronger cross-modal priming should be found in dialogue than in monologue. However, if inter-individual alignment does not affect within-individual alignment, priming should be equivalent in dialogue and monologue.
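For the syntactic-choice measure, the logic of the comprehension-to-production analysis can be illustrated with a minimal sketch: compare the proportion of passive target descriptions following passive versus active primes, with the difference serving as the priming effect. The trial data below are invented for illustration only and do not reflect the study's actual data or analysis pipeline.

```python
# Hypothetical sketch of the syntactic-choice priming analysis.
# All trial data are invented for illustration; the real study's
# data and statistical modeling are not represented here.
from collections import defaultdict

trials = [
    # (prime_type, target_choice)
    ("active", "active"), ("active", "active"), ("active", "passive"),
    ("passive", "passive"), ("passive", "active"), ("passive", "passive"),
]

# Tally target choices separately for each prime type.
counts = defaultdict(lambda: {"active": 0, "passive": 0})
for prime, choice in trials:
    counts[prime][choice] += 1

def passive_rate(prime_type):
    """Proportion of passive target descriptions after this prime type."""
    c = counts[prime_type]
    return c["passive"] / (c["active"] + c["passive"])

# A positive difference indicates syntactic priming: passive primes
# make passive target descriptions more likely.
priming_effect = passive_rate("passive") - passive_rate("active")
print(f"P(passive | passive prime) = {passive_rate('passive'):.2f}")
print(f"P(passive | active prime)  = {passive_rate('active'):.2f}")
print(f"Priming effect             = {priming_effect:.2f}")
```

In practice, such choice data would be analyzed with appropriate inferential statistics (e.g., mixed-effects logistic regression) rather than raw proportion differences, but the contrast above captures what "syntactic choices aligning with the prime type" means.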
February 16, 2016