Parallel linear dynamic models can mimic the McGurk effect in clinical populations

Nicholas Altieri, Cheng Ta Yang

Research output: Contribution to journal › Article › peer-review

5 citations (Scopus)

Abstract

One of the most common examples of audiovisual speech integration is the McGurk effect. As an example, an auditory syllable /ba/ dubbed over incongruent lip movements that produce "ga" typically causes listeners to hear "da". This report hypothesizes reasons why certain clinical populations and listeners who are hard of hearing might be more susceptible to visual influence. Conversely, we also examine why other listeners appear less susceptible to the McGurk effect (i.e., they report hearing just the auditory stimulus without being influenced by the visual signal). These explanations are grounded in mechanistic accounts of integration phenomena, including visual inhibition of auditory information and a slower rate of accumulation of inputs. First, simulations of a linear dynamic parallel interactive model were instantiated using inhibition and facilitation to examine potential mechanisms underlying integration. In a second set of simulations, we systematically manipulated the inhibition parameter values to model data obtained from listeners with autism spectrum disorder. In summary, we argue that cross-modal inhibition parameter values explain individual variability in McGurk perceptibility. Nonetheless, different mechanisms should continue to be explored in an effort to better understand current data patterns in the audiovisual integration literature.
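The abstract does not specify the model's equations, but the general idea of a parallel linear dynamic model with cross-modal coupling can be sketched as a pair of stochastic linear accumulators, where a signed coupling parameter plays the role of the inhibition/facilitation parameter described above. All names and parameter values below are illustrative assumptions, not the authors' actual implementation:

```python
import random


def simulate_parallel_accumulators(rate_a=1.0, rate_v=0.8,
                                   cross_va=-0.3, cross_av=0.0,
                                   threshold=10.0, dt=0.01,
                                   noise_sd=0.1, max_time=50.0,
                                   seed=0):
    """Euler simulation of two coupled linear accumulators (hypothetical sketch).

    cross_va is the influence of visual activation on the auditory channel:
    negative values implement cross-modal inhibition, positive values
    facilitation. Returns (first_passage_time, winning_channel), or
    (None, None) if neither channel reaches threshold within max_time.
    """
    rng = random.Random(seed)
    a = v = 0.0  # auditory and visual activation
    t = 0.0
    sqrt_dt = dt ** 0.5
    while t < max_time:
        # Linear drift plus cross-channel coupling, with diffusion noise.
        a += (rate_a + cross_va * v) * dt + noise_sd * sqrt_dt * rng.gauss(0.0, 1.0)
        v += (rate_v + cross_av * a) * dt + noise_sd * sqrt_dt * rng.gauss(0.0, 1.0)
        t += dt
        if a >= threshold:
            return t, "auditory"
        if v >= threshold:
            return t, "visual"
    return None, None
```

Run noiselessly, the sketch illustrates the qualitative pattern at issue: with `cross_va=0.0` the faster auditory channel wins, with strong inhibition (`cross_va=-0.3`) the auditory channel is suppressed and the visual channel determines the response (a caricature of visual capture), and with facilitation (`cross_va=0.3`) the auditory response arrives sooner.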

Original language: English
Pages (from-to): 143-155
Number of pages: 13
Journal: Journal of Computational Neuroscience
Volume: 41
Issue number: 2
DOIs
Publication status: Published - Oct 1 2016
Externally published: Yes

ASJC Scopus subject areas

  • Sensory Systems
  • Cognitive Neuroscience
  • Cellular and Molecular Neuroscience
