Source Themes

Older adults' emotion recognition: No auditory-visual benefit for less clear expressions

As people age, their ability to recognize emotions from facial expressions or voices tends to decline. However, some studies have found that older adults benefit more from combined audio-visual presentations than younger adults do, resulting in similar levels of emotion recognition across age groups. One limitation of these studies is that they used emotional expressions that had been highly selected to be well categorised. Such stimuli may not be typical of real-life situations. To address this, our study examined whether the audio-visual emotion recognition benefit extends to auditory and visual stimuli that were not so well categorised.

Effects of Age and Uncertainty on the Visual Speech Benefit in Noise

The findings indicate that the auditory and visual complexity of a listening environment may impose an attentional constraint on the amount of visual speech benefit available to older adults (OAs) and could help explain why seeing a talker does not always facilitate speech perception in noise.

Effect of sustained selective attention on steady-state visual evoked potentials

The results are consistent with the proposal that the neural populations underlying the first and second harmonics have distinct functional roles.

Bilingual lexical representation: we have some words to say

We describe how a masked speech translation priming experiment can be readily created; with this in hand, the issue to address is what we might expect: will masked speech translation priming produce a different pattern of results from its visual counterpart?

Does working memory protect against auditory distraction in older adults?

Evidence from younger adults shows that engaging working memory reduces distraction; we found that older adults were likewise able to engage working memory to reduce the processing of task-irrelevant sounds.

Time course of the unmasked attentional blink

The findings support the view that the AB limits the entry of information into consciousness via a late-stage modal bottleneck, and suggest an ongoing compensatory response at early latencies.

The influence of pacer-movement continuity and pattern matching on auditory-motor synchronisation

Our findings confirm that sensorimotor synchronisation is modulated by complex relations between pacer and movement properties.

Intelligibility of conversational and clear speech in young and older talkers as perceived by young and older listeners

The current study extends that of Hazan et al. (2018b) by investigating the perception of the same speech materials by older adult (OA) listeners with and without mild presbycusis.

Disgust expressive speech: The acoustic consequences of the facial expression of emotion

Our results indicate that the facial expression of emotions may play a role in shaping the acoustic properties of the vocal expressions of emotions.

Auditory-visual integration during nonconscious perception

Our study proposes a test of a key assumption of the most prominent model of consciousness – the global workspace (GWS) model (e.g., Baars, 2002, 2005, 2007; Dehaene & Naccache, 2001; Mudrik, Faivre, & Koch, 2014).