How far down does the top down control of speech processing go? - HEARing CRC PhD presentation

There is evidence that auditory efferent control may play a role in extracting signals from noise and in detecting target sounds. Here we evaluated the effect of attention on efferent control of the brainstem (using auditory brainstem responses, ABRs) and of cochlear gain (using otoacoustic emissions, OAEs) during passive listening and during a lexical decision task of variable difficulty. We hypothesized that both ABRs and OAEs would be modulated by attention, and that the degree of OAE suppression (relative to the passive condition) would increase with task difficulty.
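The key dependent measure here is the change in click-evoked OAE level in the active conditions relative to the passive baseline. Below is a minimal sketch of how such a suppression measure could be computed from averaged OAE waveforms; the sampling rate, analysis window and function names are illustrative assumptions, not the authors' analysis pipeline.

```python
# Minimal sketch (not the authors' code) of quantifying click-evoked OAE
# suppression relative to a passive baseline. FS and WINDOW are assumptions.
import numpy as np

FS = 44100                                    # assumed sampling rate (Hz)
WINDOW = (int(0.004 * FS), int(0.018 * FS))   # assumed 4-18 ms post-click window

def ceoae_level_db(avg_waveform: np.ndarray) -> float:
    """RMS level (dB re full scale) of the averaged click-evoked OAE
    within the analysis window."""
    segment = avg_waveform[WINDOW[0]:WINDOW[1]]
    rms = np.sqrt(np.mean(segment ** 2))
    return 20.0 * np.log10(rms + 1e-12)       # small offset avoids log(0)

def oae_suppression_db(active_avg: np.ndarray, passive_avg: np.ndarray) -> float:
    """Positive values = lower OAE level in the active (attended) condition,
    i.e. stronger suppression relative to the passive baseline."""
    return ceoae_level_db(passive_avg) - ceoae_level_db(active_avg)
```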


Continuous Click-Evoked OAEs: How far down does the top-down control of speech processing go?
Heivet Hernandez-Perez 1,2, Jessica Monaghan 1, Sumitrajit Dhar 3, Sriram Boothalingam 3, David Poeppel 4, Catherine McMahon 1,2
1 Macquarie University, Department of Linguistics; 2 The HEARing Cooperative Research Centre; 3 Northwestern University, Department of Communication Sciences and Disorders; 4 New York University, Department of Psychology.
creating sound value, www.hearingcrc.org

Introduction
The perception of speech sounds in humans requires fast and accurate integration of both bottom-up and top-down information, carried by the auditory afferent and efferent pathways respectively (see the pathway figure on the poster). However, it remains poorly understood whether, during speech perception, top-down control reaches the level of the brainstem or even the cochlea via the auditory efferent pathway.

Methods
Participants: healthy adults (20-35 y.o.), native Australian English speakers.
Experimental conditions:
1. Active listening: press a button when hearing a non-word.
2. Passive listening: relax and watch a movie.
Stimulus blocks (4 blocks per experimental condition, presented in randomized order; see the vocoder sketch after this transcript):
• Block 1: natural spoken words/non-words
• Block 2: 16-channel vocoded words/non-words (Voc16)
• Block 3: 12-channel vocoded words/non-words (Voc12)
• Block 4: 8-channel vocoded words/non-words (Voc8)
Trial structure: 30 sec baseline (silence), attended segment ("real word"/"non-word", 0.5 sec), lexical decision (2 sec), 30 sec baseline (silence); 200 words/non-words per block, presented to Ear 1/Ear 2.

Behavioral Results
Table 1. Lexical decision task d' values (table not reproduced in this transcript; see the d' sketch after this transcript).
• Lexical decision difficulty increases as the number of vocoder channels decreases.

Electrophysiological Results: ERPs
Word/non-word onset ERP components (P1, N1, P2).
• P1, N1 and P2 differ between blocks during active listening (p < 0.05).

Electrophysiological Results: ABRs
Wave V of the auditory brainstem response (figure: active vs. passive listening waveforms, amplitude in µV over time in msec). Trends:
• Wave V latency shorter in the active condition.
• Wave V amplitude smaller in the passive condition.

Click-Evoked OAEs Results
(Figures: click-evoked OAE effect with confidence intervals, passive vs. active condition and natural spoken vs. 8-channel vocoded block; Natural, Voc16, Voc12, Voc8.)
• Attentional effect: the active condition showed a decrease in amplitude and a significant shift in phase.
• Task difficulty effect: the Voc8 block showed a decrease in amplitude and a significant shift in phase.

Conclusions
• The experimental protocol and setup are suitable for exploring attention and task-difficulty effects at cortical, brainstem and inner-ear levels.
• Our data suggest that accurate classification of speech sounds engages not only bottom-up processing but also top-down control mechanisms.
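Task difficulty was manipulated by vocoding the words and non-words with 16, 12 or 8 channels. The sketch below illustrates the general noise-vocoding idea (band-pass filter, extract each band's temporal envelope, re-impose it on band-limited noise, and sum); the band edges, filter orders and envelope cutoff are illustrative assumptions, not the parameters used in the study.

```python
# Minimal noise-vocoder sketch illustrating the Voc16/Voc12/Voc8 manipulation.
# All filter parameters are assumptions; assumes fs > 2 * f_hi.
import numpy as np
from scipy.signal import butter, sosfiltfilt

def noise_vocode(signal, fs, n_channels, f_lo=100.0, f_hi=8000.0):
    """Replace the fine structure of `signal` with band-limited noise,
    keeping only the temporal envelope of each analysis band."""
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)           # log-spaced band edges
    noise = np.random.default_rng(0).standard_normal(len(signal))
    env_lp = butter(2, 300.0 / (fs / 2), btype="low", output="sos")
    out = np.zeros(len(signal))
    for lo, hi in zip(edges[:-1], edges[1:]):
        band_sos = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band", output="sos")
        band = sosfiltfilt(band_sos, signal)
        envelope = sosfiltfilt(env_lp, np.abs(band))            # rectify + smooth
        carrier = sosfiltfilt(band_sos, noise)                  # band-limited noise carrier
        out += envelope * carrier
    # Match the overall RMS level of the input
    return out * np.sqrt(np.mean(np.asarray(signal, float) ** 2) / (np.mean(out ** 2) + 1e-12))
```

Fewer channels preserve less spectral detail, which is why the 8-channel (Voc8) block is the hardest lexical decision condition.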

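Behavioral performance in the lexical decision task is summarized as d' values (Table 1). For reference, a minimal sketch of a standard d' computation is given below; the response coding (treating non-word detections as hits), the correction for extreme proportions, and the example counts are assumptions rather than details taken from the study.

```python
# Minimal sketch of the d' (d-prime) sensitivity index reported in Table 1,
# assuming "non-word" responses to non-words are hits and "non-word"
# responses to real words are false alarms.
from scipy.stats import norm

def d_prime(hits: int, misses: int, false_alarms: int, correct_rejections: int) -> float:
    """d' = z(hit rate) - z(false-alarm rate)."""
    n_signal = hits + misses
    n_noise = false_alarms + correct_rejections
    # Avoid infinite z-scores when a rate is exactly 0 or 1
    hit_rate = (hits + 0.5) / (n_signal + 1)
    fa_rate = (false_alarms + 0.5) / (n_noise + 1)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Hypothetical example: 180 hits / 20 misses on non-words, 15 false alarms /
# 185 correct rejections on real words gives d' of roughly 2.7.
print(d_prime(180, 20, 15, 185))
```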