Music and language share perceptual resources, and both map sound to invariant categories—invariant over and within speakers for language and over instruments and keys for music. The effects of stimulus variability on lexical tone and musical interval tasks among non-tone language speakers were compared using a matching (XAB) task under varying levels of stimulus variability. Listeners perceived Mandarin words better with a single speaker rather than multiple speakers and showed a similar advantage in melodic interval perception for low (single instrument) versus high (multiple instruments) variability sets. Lexical tone and musical interval perception were affected similarly by increasing stimulus variability, on average. However, the magnitude of variability effects within subjects was not well correlated between the tasks, providing no evidence for a shared category-mapping mechanism across the two domains. Instead, this suggests that crossover between tone and melody processing is driven by shared encoding of acoustic-phonetic features, and that differences in performance and learning by tone language speakers and musicians in the other domain represent progress along a phonetic–phonological–lexical continuum.