We ask whether auditory and visual signals are processed by a consistent mental architecture across variable experimental designs. It is well known that in an auditory-visual task requiring divided attention, responses are often faster for redundant audiovisual targets than for unisensory targets. Importantly, these redundant-target effects can in principle be explained by several different mental architectures, which we explore in this paper: independent-race models, parallel interactive models, and coactive models. Earlier results, especially redundant-target processing times faster than predicted by the race-model inequality (Miller, 1982), implicated coactivation as a necessary explanation of redundant-target processing. This explanation has recently been challenged, however, by demonstrations that violations of the race-model inequality can be explained by violations of the context-invariance assumption underlying that inequality (Otto & Mamassian, 2012). We applied Systems Factorial Technology (Townsend & Nozawa, 1995), regarded as a standard diagnostic tool for inferences about mental architecture, to redundant-target audiovisual processing in three experiments: a discrimination task (Experiment 1), a simultaneous perceptual matching task (Experiment 2), and a delayed matching task (Experiment 3). The results provide a key set of benchmarks, against which we evaluate several simulations consistent with the context-invariance explanation not only of the race-model inequality but also of capacity and architecture.
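For reference, the race-model inequality (Miller, 1982) bounds the cumulative response-time distribution for redundant audiovisual targets by the sum of the unisensory distributions. A standard statement (the notation here is generic, not necessarily the paper's own):

```latex
% Race-model inequality (Miller, 1982), in standard CDF notation:
% F_X(t) = P(RT <= t | condition X), for X in {AV, A, V}.
F_{AV}(t) \;\le\; F_A(t) + F_V(t) \qquad \text{for all } t .
```

Observed violations of this bound, i.e. redundant-target CDFs exceeding the sum of the unisensory CDFs at some \(t\), are the classic evidence for coactivation that the context-invariance critique re-examines.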