As people age, their ability to recognize emotions from facial expressions or voices tends to decline. However, some studies have found that older adults benefit more from combined audio-visual presentations than younger adults do, resulting in similar levels of emotion recognition across age groups. One limitation of these studies is that they used emotional expressions that had been highly selected to be easily categorised. Such stimuli may not be typical of real-life situations. To address this, our study examined whether the audio-visual benefit to emotion recognition extends to auditory and visual stimuli that are less easily categorised.
The findings indicate that the auditory and visual complexity of a listening environment may impose an attentional constraint on the amount of visual speech benefit available to older adults (OAs), which could help explain why seeing a talker does not always facilitate speech perception in noise.
We describe how a masked speech translation priming experiment can readily be created; with this in hand, the question to address is what we might expect: will masked speech translation priming produce a different pattern of results from its visual counterpart?
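As a minimal sketch of one building block of such an experiment, the snippet below shows how a spoken prime might be embedded in masking noise at a fixed signal-to-noise ratio, as is typically required to keep the prime below the threshold of conscious identification. The function name `mix_at_snr`, the use of NumPy arrays for waveforms, and the placeholder signals are our illustrative assumptions, not a description of the original study's implementation.

```python
import numpy as np

def mix_at_snr(prime: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
    """Embed a spoken prime in masking noise at the requested SNR (dB).

    Both inputs are assumed to be mono waveforms at the same sampling rate;
    the noise is trimmed to the prime's length and rescaled so that the
    prime-to-noise power ratio equals snr_db.
    """
    noise = noise[: len(prime)]
    prime_power = np.mean(prime ** 2)
    noise_power = np.mean(noise ** 2)
    # Noise power implied by the requested SNR, given the prime's power.
    target_noise_power = prime_power / (10 ** (snr_db / 10))
    scaled_noise = noise * np.sqrt(target_noise_power / noise_power)
    mixture = prime + scaled_noise
    # Normalise the mixture to avoid clipping when written to an audio file.
    return mixture / np.max(np.abs(mixture))

# Example: a heavily masked prime at -12 dB SNR (an illustrative value).
rng = np.random.default_rng(0)
prime = rng.standard_normal(16000) * 0.1  # placeholder for a recorded word
noise = rng.standard_normal(32000)        # placeholder for speech-shaped noise
masked_prime = mix_at_snr(prime, noise, snr_db=-12.0)
```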
Extending evidence from younger adults that engaging working memory reduces distraction, we found that older adults were also able to engage working memory to reduce the processing of task-irrelevant sounds.
The findings support the view that the attentional blink (AB) limits the entry of information into consciousness via a late-stage modal bottleneck, and they suggest an ongoing compensatory response at early latencies.
The current study extends that of Hazan et al. (2018b) by investigating the perception of the same speech materials by OA listeners with and without mild presbycusis.
Our study tests a key assumption of the most prominent model of consciousness, the global workspace (GWS) model (e.g., Baars, 2002, 2005, 2007; Dehaene & Naccache, 2001; Mudrik, Faivre, & Koch, 2014).