Music and language share perceptual resources, and both map sound onto invariant categories—invariant across and within speakers for language, and across instruments and keys for music. The effects of stimulus variability on lexical tone and musical interval perception were compared in non-tone-language speakers using a matching (XAB) task under varying levels of stimulus variability. Listeners perceived Mandarin words more accurately when produced by a single speaker rather than multiple speakers, and showed a parallel advantage in melodic interval perception for low-variability (single instrument) versus high-variability (multiple instruments) stimulus sets. On average, lexical tone and musical interval perception were affected similarly by increasing stimulus variability. However, the magnitude of the variability effect within subjects was not well correlated between the two tasks, providing no evidence for a shared category-mapping mechanism across the two domains. Instead, the results suggest that crossover between tone and melody processing is driven by shared encoding of acoustic-phonetic features, and that differences in performance and learning by tone-language speakers and musicians in the other domain represent progress along a phonetic–phonological–lexical continuum.
A Comparison of Stimulus Variability in Lexical Tone and Melody Perception — Evan David Bradley (Bradley_PsychReports_2017.pdf)