Understanding the Effects of Listening Effort on Sentence Processing and Memory in Sensorineural Hearing Loss: Evidence From Simultaneous Electrophysiology and Pupillometry (Study 1)

Participants needed
Sponsor: University of Utah
Updated on 30 March 2023
Accepts healthy volunteers


Sensorineural hearing loss (SNHL) is among the most prevalent chronic conditions in aging and has a profoundly negative effect on speech comprehension, leading to increased social isolation, reduced quality of life, and increased risk for the development of dementia in older adulthood. Typical audiological tests and interventions, which focus on measuring and restoring audibility, do not explain the full range of cognitive difficulties that adults with hearing loss experience in speech comprehension. For example, adults with SNHL have to work disproportionately harder to decode acoustically degraded speech. That additional effort is thought to diminish shared executive and attentional resources for higher-level language processes, impacting subsequent comprehension and memory even when speech is completely intelligible. This phenomenon has been referred to as listening effort (LE). There is a growing understanding that these cognitive factors are a critical and often "hidden" effect of hearing loss. At the same time, the effects of LE on the neural mechanisms of language processing and memory in SNHL are currently not well understood. In order to develop evidence-based assessments and interventions to improve comprehension and memory in SNHL, it is critical that we elucidate the cognitive and neural mechanisms of LE and its consequences for speech comprehension. In this project, we adopt a multi-method approach, combining methods from clinical audiology, psycholinguistics, and cognitive neuroscience to address this knowledge gap. Specifically, we use a novel method of co-registering pupillometry (a reliable physiological measure of LE) with language-related event-related brain potential (ERP) measures during real-time speech processing to characterize the effects of acoustic challenge and LE on high-level language processes (e.g., semantic retrieval, syntactic integration) and subsequent speech memory in older adults with SNHL.
This innovative work addresses a time-sensitive gap in the literature regarding the identification of objective and reliable markers of specific neurocognitive processes impacted by acoustic challenge and LE in age-related SNHL.


4.1.a. Detailed Description Note: the full application consists of two experiments. The information below pertains to Experiment 1 (Specific Aims 1 and 2). See the Protocol Synopsis for Experiment 2 for information on Experiment 2.

This experiment is a BESH (Basic Experimental Study with Humans) trial. All participants are exposed to all experimental conditions (i.e., "interventions") in a complete 2 x 3 within-subjects factorial design. Participants will be 80 older adults between the ages of 60 and 90, recruited from the Salt Lake metro community through the Utah Senior Ears database, the Utah Center on Aging, flyers placed throughout the community (e.g., bulletin boards, the waiting area of the ENT clinic at the University of Utah), online advertisements (e.g., Facebook), and word of mouth. Participants will be compensated with financial incentives. We will recruit approximately equal numbers of participants into two hearing groups based on their pure-tone average (PTA) thresholds: normal hearing (PTA of < 25 dB HL, 1-4 kHz) and clinically relevant hearing loss (PTA of > 25 dB HL, 1-4 kHz). While we adopt this dichotomous grouping for recruitment, we will treat hearing level as continuous in the analyses to increase statistical power.
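The PTA grouping rule above can be sketched as a short Python function. This is illustrative only: the protocol specifies the 1-4 kHz range but not which frequencies enter the average, so 1, 2, and 4 kHz are assumed here.

```python
import numpy as np

def pure_tone_average(thresholds_db_hl):
    """Average pure-tone thresholds (dB HL) over the grouping range.

    `thresholds_db_hl` maps frequency in Hz -> threshold in dB HL.
    Assumed frequencies: 1, 2, and 4 kHz (the protocol states only 1-4 kHz).
    """
    freqs = [1000, 2000, 4000]
    return float(np.mean([thresholds_db_hl[f] for f in freqs]))

def hearing_group(pta_db_hl, cutoff=25.0):
    """Dichotomous recruitment label; analyses keep PTA continuous."""
    return "hearing_loss" if pta_db_hl > cutoff else "normal_hearing"
```

Note that the label is used only for recruitment balancing; the continuous PTA value is what enters the statistical models.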

Inclusion and exclusion criteria are as follows:

Inclusion Criteria: age 60-90; right-handed; native English speaker; scores in the normal range (≥ 20 points) on the Montreal Cognitive Assessment (MoCA); for adults with hearing loss, a pure-tone average of > 25 dB HL (1-4 kHz).

Exclusion Criteria: left-handed (language-related electrophysiological responses of left-handed individuals differ from those of right-handed individuals); history of psychiatric or neurological illness (including skull fractures, which are known to alter the electrophysiological response at the scalp); MoCA score of < 20 points; use of prescription or non-prescription drugs known to alter brain function or the autonomic nervous system, including pupil dilation (e.g., antidepressants, ADHD medications); any eye disease that would impair measurement of pupil dilation (e.g., cataracts, nystagmus, amblyopia); score below 50% on the speech-shadowing audibility control task, suggesting poor intelligibility; behavior that would significantly interfere with the validity of data collection or safety during the study.

We will follow all APA guidelines with respect to the treatment of human subjects. All participants will provide informed consent after study procedures are explained to them, and the voluntary nature of participation will be emphasized. No identifying information (e.g., names) will be linked to participants' data files; the only information connected to their data will be a unique, arbitrary code.

Study procedures will be conducted within a single session lasting 3-4 hours. Following informed consent, participants will complete a standardized hearing assessment, a neuropsychological assessment, and an audibility control assessment, and will then participate in the EEG/pupillometry experiment, each of which is described below. All data will be stored on password-protected computers in the PI's laboratory. All material from participants will be collected specifically for research purposes. The only identifying information collected about subjects is their names, which will be used to recruit and compensate participants but will not be linked to their data in any way. The materials presented in these studies have no known potential to stress, embarrass, stigmatize, or incriminate experimental participants.

Prior to the beginning of the experiment, we will conduct two audiometric tests using an MA-41 audiometer with RadioEar IP-30 insert air-conduction earphones. First, pure-tone thresholds will be measured using the modified Hughson-Westlake procedure at octave frequencies from 250 to 8000 Hz in each ear. Second, we will test speech recognition thresholds (SRTs) using a recorded Central Institute for the Deaf (CID) W-1 spondee word list. Near visual acuity will also be tested in both the right and left eyes using the Rosenbaum visual acuity test.

Participants will then complete a brief battery of cognitive assessments, beginning with the Montreal Cognitive Assessment (MoCA). Although the appropriate cut-off score for cognitive impairment on the MoCA varies across samples, individuals scoring below 20 are generally considered to be at increased risk for MCI; we therefore use this as our conservative cut-off. Participants will also complete the F-A-S phonemic fluency task as a measure of verbal fluency, the short-form computerized version of the reading span task as a measure of verbal working memory, and the extended range vocabulary test from the ETS Kit of Factor-Referenced Cognitive Tests as a measure of verbal ability. Participants will then complete an audibility control task. For this task, stimuli will be recorded by the same native speaker of American English and presented at the same +3 dB SNR used in the main experiment. Participants will hear nine test sentences (e.g., "Don't touch the wet paint") and will be tasked with "shadowing" each sentence by repeating each word aloud as it is heard. The immediate repetition reduces the contribution of memory to task performance.
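Scoring the shadowing task amounts to computing the proportion of words correctly repeated per sentence, with the 50% criterion from the exclusion criteria applied to overall performance. A minimal, order-insensitive scoring sketch follows; the protocol's exact scoring rules are not specified, so treat the matching logic as an assumption.

```python
def shadowing_accuracy(target_sentence, response):
    """Proportion of target words the participant repeated.

    Deliberately simple: case-insensitive, order-insensitive word matching.
    The protocol may use a stricter word-by-word scoring rule.
    """
    target = target_sentence.lower().split()
    produced = set(response.lower().split())
    hits = sum(1 for w in target if w in produced)
    return hits / len(target)

def passes_audibility_control(sentence_scores, criterion=0.5):
    """Mean accuracy across the nine test sentences must reach 50%."""
    return sum(sentence_scores) / len(sentence_scores) >= criterion
```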

Following the audiological and neuropsychological testing, we will begin the primary experiment. Electrophysiological (EEG), pupil dilation (pupillometry), and behavioral recordings will be made from participants. EEG will be recorded from 64 silver-silver chloride electrodes embedded in an EasyCap (Electro-Cap, Inc), following a 10-20 montage. In addition, an electrode will be placed on the left infraorbital ridge to monitor vertical eye movements and blinks, and a virtual bipolar horizontal electrooculogram channel will be created offline for monitoring horizontal eye movements by computing the difference between the two fronto-temporal electrodes (FT10-FT9) that sit posterior to the outer canthus of each eye. The continuous EEG will be amplified with a BrainAmp DC amplifier (Brain Vision, LLC, Morrisville, NC; bandwidth: 0.02-250 Hz) and recorded to hard disk at a sampling rate of 1000 Hz. Electrode impedances will be kept below 5 kOhms. During the listening task, pupil size will be continuously recorded from the right eye using an EyeLink 1000 Plus desktop-mounted infrared eye tracker (SR Research Ltd., Ottawa, ON, Canada). Continuous pupil size measurements will be recorded at 1000 Hz using EyeLink software and downsampled offline to 50 Hz. See Statistical Analysis and Power for information on pupil data processing and cleaning.
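The offline downsampling step (1000 Hz to 50 Hz) can be implemented in several ways; the protocol does not specify the resampling method, so the sketch below uses simple non-overlapping bin averaging (every 20 samples collapsed to their mean), assuming NumPy.

```python
import numpy as np

def downsample_pupil(trace_1khz, factor=20):
    """Downsample a 1000 Hz pupil trace to 50 Hz by averaging 20-sample bins.

    Bin averaging also acts as a mild low-pass filter, which suits the slow
    pupillary response; other pipelines may use decimation with an explicit
    anti-aliasing filter instead.
    """
    trace = np.asarray(trace_1khz, dtype=float)
    n = len(trace) - (len(trace) % factor)  # drop any ragged tail
    return trace[:n].reshape(-1, factor).mean(axis=1)
```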

Participants will be tested in a quiet, sound-attenuated room, seated 85 cm from a 24-in high-performance LCD monitor that will present instructions and cues (e.g., when to take a break). Speech stimuli will be presented through the sound card of the stimulus presentation computer and routed to a MAICO MA-41 audiometer via the auxiliary channel, allowing direct control of stimulus intensity. The audiometer will route the speech to the participant via RadioEar IP-30 insert air-conduction earphones. Stimuli will be presented at 65 dB HL to each participant's better-hearing ear (based on PTA thresholds between 1-4 kHz). All hardware is calibrated to standard in our testing room by a NASED-certified technician.

The experiment will be programmed in Python via the PsychoPy open source platform. There will be 360 trials. On each trial, participants will listen to a sentence that contains a single target word that will either be a normal plausible continuation, a semantic violation, or a syntactic violation (see Table 1 in Research Plan). In addition, sentences will be presented either in +3dB SNR stationary speech-shaped background noise or will be presented in quiet (no background noise), in a 2 x 3 within-subjects factorial design.

After all 360 sentences, a delayed recognition memory task will be administered. Participants will be visually presented with 360 test sentence frames on a tablet computer, each with the target word missing. They will be instructed to mark whether or not they recognize each sentence as one heard during the experimental task. For the sentences they report having heard previously, they will be asked to recall the target word to the best of their ability by typing their response. There will be no time limit on the memory test. 180 of the sentences will be old items heard during the task, and the other 180 will be semantic foils. The 180 previously heard sentences will be drawn evenly from the six experimental conditions, 30 per condition. Each semantic foil will be created by taking 2 to 4 of the meaning-bearing words from a sentence the participant actually heard and using them to create a new, semantically similar sentence. For example, if the participant had heard the sentence "Dan recognized John even though he had grown a beard since the last time they saw each other," a semantic foil might be: "No one at the reunion recognized Dan because he had grown a _____ since the last time everyone met." Foils are included to make the recognition task more challenging and reduce the likelihood of ceiling performance. This approach has been validated in past studies of listening effort and recognition memory. Although the primary memory analyses concern sentence recognition, we will also report performance on the cued word recall task, as we have done in prior work.
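For the delayed recognition test (180 old items, 180 semantic foils), hit and false-alarm rates are the natural summary measures. A minimal scoring sketch follows; the protocol does not specify its exact analysis pipeline, so this is only the basic bookkeeping.

```python
def recognition_rates(responses):
    """Hit and false-alarm rates from old/new judgments.

    `responses` is a list of (is_old_item, said_old) boolean pairs: in this
    protocol, 180 old sentences and 180 semantic foils per participant.
    """
    hits = sum(1 for old, said in responses if old and said)
    false_alarms = sum(1 for old, said in responses if not old and said)
    n_old = sum(1 for old, _ in responses if old)
    n_new = len(responses) - n_old
    return hits / n_old, false_alarms / n_new
```

In practice these rates would be computed per experimental condition (30 old items per cell) to test the effects of noise and violation type on memory.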

Upon completion of the memory assessment, the study procedures will be complete. Participants will be debriefed, receive compensation, and will have the opportunity to ask any questions of the study team.

Condition: Sentence Stimulus, Auditory Noise
Treatment: Sentence Stimulus, Auditory Noise
Clinical Study Identifier: NCT05584514
