TY - JOUR
T1 - Parallel linear dynamic models can mimic the McGurk effect in clinical populations
AU - Altieri, Nicholas
AU - Yang, Cheng Ta
N1 - Funding Information:
The project described was supported by Grant No. (NIGMS) 5U54GM104944-03. Portions of this report, including the basic model set-up, appeared in the author’s Doctoral Dissertation and in Altieri (). Finally, we thank Ryan A. Stevenson for his data set.
Publisher Copyright:
© 2016, Springer Science+Business Media New York.
PY - 2016/10/1
Y1 - 2016/10/1
N2 - One of the most common examples of audiovisual speech integration is the McGurk effect. As an example, an auditory syllable /ba/ recorded over incongruent lip movements that produce “ga” typically causes listeners to hear “da”. This report hypothesizes reasons why certain clinical populations and listeners who are hard of hearing might be more susceptible to visual influence. Conversely, we also examine why other listeners appear less susceptible to the McGurk effect (i.e., they report hearing just the auditory stimulus without being influenced by the visual signal). Such explanations are accompanied by a mechanistic account of integration phenomena, including visual inhibition of auditory information and a slower rate of accumulation of inputs. First, simulations of a linear dynamic parallel interactive model were instantiated using inhibition and facilitation to examine potential mechanisms underlying integration. In a second set of simulations, we systematically manipulated the inhibition parameter values to model data obtained from listeners with autism spectrum disorder. In summary, we argue that cross-modal inhibition parameter values explain individual variability in McGurk perceptibility. Nonetheless, different mechanisms should continue to be explored in an effort to better understand current data patterns in the audiovisual integration literature.
AB - One of the most common examples of audiovisual speech integration is the McGurk effect. As an example, an auditory syllable /ba/ recorded over incongruent lip movements that produce “ga” typically causes listeners to hear “da”. This report hypothesizes reasons why certain clinical populations and listeners who are hard of hearing might be more susceptible to visual influence. Conversely, we also examine why other listeners appear less susceptible to the McGurk effect (i.e., they report hearing just the auditory stimulus without being influenced by the visual signal). Such explanations are accompanied by a mechanistic account of integration phenomena, including visual inhibition of auditory information and a slower rate of accumulation of inputs. First, simulations of a linear dynamic parallel interactive model were instantiated using inhibition and facilitation to examine potential mechanisms underlying integration. In a second set of simulations, we systematically manipulated the inhibition parameter values to model data obtained from listeners with autism spectrum disorder. In summary, we argue that cross-modal inhibition parameter values explain individual variability in McGurk perceptibility. Nonetheless, different mechanisms should continue to be explored in an effort to better understand current data patterns in the audiovisual integration literature.
KW - Audiovisual integration
KW - McGurk effect
KW - Parallel interactive linear dynamic model
UR - http://www.scopus.com/inward/record.url?scp=84983784768&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84983784768&partnerID=8YFLogxK
U2 - 10.1007/s10827-016-0610-z
DO - 10.1007/s10827-016-0610-z
M3 - Article
C2 - 27272510
AN - SCOPUS:84983784768
SN - 0929-5313
VL - 41
SP - 143
EP - 155
JO - Journal of Computational Neuroscience
JF - Journal of Computational Neuroscience
IS - 2
ER -