Ear and Hearing

Clinical Effectiveness of an At-Home Auditory Training Program: A Randomized Controlled Trial

01-09-2019 – Humes, Larry E.; Skinner, Kimberly G.; Kinney, Dana L.; Rogers, Sara E.; Main, Anna K.; Quigley, Tera M.

Journal Article

Objectives: To investigate the effectiveness of an at-home frequent-word auditory training procedure for use with older adults with impaired hearing wearing their own hearing aids.
Design: Prospective, double-blind, placebo-controlled randomized trial with three parallel branches: an intervention group, who received the at-home auditory training; an active control group, who listened to audiobooks on a similar platform at home (placebo intervention); and a passive control group, who wore hearing aids and returned for outcome measures but received no intervention. Outcome measures were obtained after a 5-week period. A mixed research design, with a between-subjects factor of group and a repeated-measures factor of time (pre- and post-treatment), was used to evaluate the effects of the at-home auditory training program. The intervention was completed in participants’ own homes; baseline and outcome measures were assessed at a university research laboratory. The participants were adults, aged 54 to 80 years, with mild-to-moderate hearing loss. Of the 51 eligible participants identified, 45 enrolled as a volunteer sample and 43 of these completed the study. The intervention group completed the frequent-word auditory training regimen at home over a period of 5 weeks, the active control group listened to audiobooks (placebo intervention), and the passive control group completed no intervention. The primary outcome measure was benefit on the Connected Speech Test; the secondary outcome measure was a 66-item self-report profile of hearing aid performance.
Results: Participants who received the at-home training intervention demonstrated significant improvements on aided recognition for trained materials, but no generalization of these benefits to nontrained materials was seen. This was despite reasonably good compliance with the at-home training regimen and careful verification of hearing aid function throughout the trial. Based on follow-up post-trial evaluation, the benefits observed for trained materials in the intervention group were sustained for a period of at least 8.5 months. No improvement was seen for supplemental outcome measures of hearing aid satisfaction, hearing handicap, or tolerance of background noise while listening to speech.
Conclusions: The at-home auditory training procedure utilizing frequently occurring words was effective for the trained materials used in the procedure. No generalization was seen to nontrained materials or to perceived benefit from hearing aids.

Correlates of Hearing Aid Use in UK Adults: Self-Reported Hearing Difficulties, Social Participation, Living Situation, Health, and Demographics

01-09-2019 – Sawyer, Chelsea S.; Armitage, Christopher J.; Munro, Kevin J.; Singh, Gurjit; Dawes, Piers D.

Journal Article

Objectives: Hearing impairment is ranked fifth globally for years lived with disability, yet hearing aid use is low among individuals with a hearing impairment. Identifying correlates of hearing aid use would be helpful in developing interventions to promote use. To date, however, no studies have investigated a wide range of variables, which has limited intervention development. The aim of the present study was to identify correlates of hearing aid use in adults in the United Kingdom with a hearing impairment. To address limitations in previous studies, we used a cross-sectional analysis to model a wide range of potential correlates simultaneously to provide better evidence to aid intervention development.
Design: The research was conducted using the UK Biobank Resource. A cross-sectional analysis of hearing aid use was conducted on 18,730 participants aged 40 to 69 years with poor hearing, based on performance on the Digit Triplet test.
Results: Nine percent of adults with poor hearing in the cross-sectional sample reported using a hearing aid. The strongest correlate of hearing aid use was self-reported hearing difficulties (odds ratio [OR] = 110.69; 95% confidence interval [CI] = 65.12 to 188.16). Individuals who were older were more likely to use a hearing aid: for each additional year of age, individuals were 5% more likely to use a hearing aid (95% CI = 1.04 to 1.06). People with tinnitus (OR = 1.43; 95% CI = 1.26 to 1.63) and people with a chronic illness (OR = 1.97; 95% CI = 1.71 to 2.28) were more likely to use a hearing aid. Those who reported an ethnic minority background (OR = 0.53; 95% CI = 0.39 to 0.72) and those who lived alone (OR = 0.80; 95% CI = 0.68 to 0.94) were less likely to use a hearing aid.
Conclusions: Interventions to promote hearing aid use need to focus on addressing the reasons underlying perceived hearing difficulties and on how to promote hearing aid use. Such interventions may need to target demographic groups that are particularly unlikely to use hearing aids, including younger adults, those who live alone, and those from ethnic minority backgrounds.
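As background for the odds ratios reported above: an OR and its Wald 95% confidence interval can be derived from a 2 × 2 contingency table. A minimal sketch in Python, using entirely hypothetical counts rather than the study’s data:

```python
import math

# Hypothetical 2 x 2 table (illustrative only, not the study's data):
# rows = self-reported hearing difficulty (yes/no),
# columns = hearing aid use (yes/no).
a, b = 150, 900     # difficulty:    uses aid, does not
c, d = 12, 8000     # no difficulty: uses aid, does not

odds_ratio = (a * d) / (b * c)

# Wald 95% CI: +/- 1.96 standard errors on the log-odds scale
se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
ci_low = math.exp(math.log(odds_ratio) - 1.96 * se)
ci_high = math.exp(math.log(odds_ratio) + 1.96 * se)

print(f"OR = {odds_ratio:.2f}, 95% CI = {ci_low:.2f} to {ci_high:.2f}")
```

Note how wide the interval can be even with thousands of observations: when one cell of the table is small (here c = 12), its reciprocal dominates the standard error, which is why a CI such as the abstract’s 65.12 to 188.16 can span roughly a threefold range.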

Effects of Age and Hearing Loss on the Recognition of Emotions in Speech

01-09-2019 – Christensen, Julie A.; Sis, Jenni; Kulkarni, Aditya M.; Chatterjee, Monita

Journal Article

Objectives: Emotional communication is a cornerstone of social cognition and informs human interaction. Previous studies have shown deficits in facial and vocal emotion recognition in older adults, particularly for negative emotions. However, few studies have examined combined effects of aging and hearing loss on vocal emotion recognition by adults. The objective of this study was to compare vocal emotion recognition in adults with hearing loss relative to age-matched peers with normal hearing. We hypothesized that age would play a role in emotion recognition and that listeners with hearing loss would show deficits across the age range.
Design: Thirty-two adults (22 to 74 years of age) with mild to severe, symmetrical sensorineural hearing loss, fitted with bilateral hearing aids, and 30 adults (21 to 75 years of age) with normal hearing participated in the study. Stimuli consisted of sentences spoken by two talkers (one male, one female) in five emotions (angry, happy, neutral, sad, and scared) in an adult-directed manner. The task involved a single-interval, five-alternative forced-choice paradigm in which the participants listened to individual sentences and indicated which of the five emotions was targeted in each sentence. Reaction time was recorded as an indirect measure of cognitive load.
Results: Results showed significant effects of age: older listeners had reduced accuracy, increased reaction times, and reduced d’ values. Normal-hearing listeners showed an Age × Talker interaction, in which older listeners had more difficulty identifying male vocal emotion. Listeners with hearing loss showed reduced accuracy, increased reaction times, and lower d’ values compared with age-matched normal-hearing listeners. Within the group with hearing loss, age and talker effects were significant, and low-frequency pure-tone averages showed a marginally significant effect. Contrary to other studies, once hearing thresholds were taken into account, no effects of listener sex were observed, nor were there effects of individual emotions on accuracy. However, reaction times and d’ values showed significant differences between individual emotions.
Conclusions: The results of this study confirm existing findings in the literature showing that older adults show significant deficits in voice emotion recognition compared with their normally hearing peers, and that among listeners with normal hearing, age-related changes in hearing do not predict this age-related deficit. The present results also add to the literature by showing that hearing impairment contributes additionally to deficits in vocal emotion recognition, separate from deficits related to age. These effects of age and hearing loss appear to be quite robust, being evident in reduced accuracy scores and d’ measures, as well as in reaction time measures.
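For readers less familiar with the d’ (d-prime) sensitivity index reported above: in signal detection theory it is the difference between the z-transformed hit and false-alarm rates. A minimal sketch with hypothetical rates (not the study’s data), using a one-versus-rest simplification of the five-alternative emotion task:

```python
from statistics import NormalDist

# Hypothetical rates (illustrative only): proportion of target-emotion
# trials labeled correctly (hits) and proportion of nontarget trials
# incorrectly given the target label (false alarms).
hit_rate = 0.80
false_alarm_rate = 0.10

z = NormalDist().inv_cdf  # probit: inverse of the standard normal CDF
d_prime = z(hit_rate) - z(false_alarm_rate)

print(f"d' = {d_prime:.2f}")
```

Unlike raw accuracy, d’ separates sensitivity from response bias, which is presumably why the study reports both accuracy and d’ measures.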

Measures of Listening Effort Are Multidimensional

01-09-2019 – Alhanbali, Sara; Dawes, Piers; Millman, Rebecca E.; Munro, Kevin J.

Journal Article

Objectives: Listening effort can be defined as the cognitive resources required to perform a listening task. The literature on listening effort is as confusing as it is voluminous: measures of listening effort rarely correlate with each other and sometimes result in contradictory findings. Here, we directly compared simultaneously recorded multimodal measures of listening effort. After establishing the reliability of the measures, we investigated validity by quantifying correlations between measures and then grouping related measures through factor analysis.
Design: One hundred and sixteen participants with audiometric thresholds ranging from normal to severe hearing loss took part in the study (age range: 55 to 85 years; 50.3% male). We simultaneously measured pupil size, electroencephalographic alpha power, skin conductance, and self-reported listening effort. One self-report measure of fatigue was also included. The signal to noise ratio (SNR) was adjusted to 71% criterion performance using sequences of 3 digits. The main listening task involved correct recall of a random digit from a sequence of six presented at an SNR where performance was around 82 to 93%. Test–retest reliability of the measures was established by retesting 30 participants 7 days after the initial session.
Results: With the exception of skin conductance and the self-report measure of fatigue, intraclass correlation coefficients (ICCs) revealed good test–retest reliability (minimum ICC: 0.71). Weak or nonsignificant correlations were identified between measures. Factor analysis, using only the reliable measures, revealed four underlying dimensions: factor 1 included SNR, hearing level, baseline alpha power, and performance accuracy; factor 2 included pupillometry; factor 3 included alpha power (during speech presentation and during retention); factor 4 included self-reported listening effort and baseline alpha power.
Conclusions: The good ICC suggests that poor test reliability is not the reason for the lack of correlation between measures. We have demonstrated that measures traditionally used as indicators of listening effort tap into multiple underlying dimensions. We therefore propose that there is no “gold standard” measure of listening effort and that different measures of listening effort should not be used interchangeably. When choosing method(s) to measure listening effort, the nature of the task and aspects of increased listening demands that are of interest should be taken into account. The findings of this study provide a framework for understanding and interpreting listening effort measures.
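Test–retest reliability of the kind reported above is often quantified with ICC(2,1): a two-way random-effects, absolute-agreement, single-measurement intraclass correlation. A minimal sketch with hypothetical scores, not the study’s data:

```python
import numpy as np

def icc_2_1(scores):
    """ICC(2,1): two-way random-effects, absolute-agreement, single-rater
    intraclass correlation. `scores` has shape (n_subjects, k_sessions)."""
    x = np.asarray(scores, dtype=float)
    n, k = x.shape
    grand = x.mean()
    ss_rows = k * ((x.mean(axis=1) - grand) ** 2).sum()    # between subjects
    ss_cols = n * ((x.mean(axis=0) - grand) ** 2).sum()    # between sessions
    ss_err = ((x - grand) ** 2).sum() - ss_rows - ss_cols  # residual
    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n
    )

# Hypothetical effort ratings for 5 participants tested in 2 sessions
scores = [[4.1, 4.0], [2.2, 2.4], [3.3, 3.1], [5.0, 4.8], [1.0, 1.2]]
icc = icc_2_1(scores)
print(f"ICC(2,1) = {icc:.2f}")
```

The ICC rises with within-subject consistency across sessions and with the spread of scores between subjects, so the same measurement error can yield different ICCs in samples with different between-subject variability.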

Effects of Reverberation on the Relation Between Compression Speed and Working Memory for Speech-in-Noise Perception

01-09-2019 – Reinhart, Paul; Zahorik, Pavel; Souza, Pamela

Journal Article

Objectives: Previous research has suggested that when listening in modulated noise, individuals benefit from different wide dynamic range compression (WDRC) speeds depending on their working memory ability. Reverberation reduces the modulation depth of signals and may therefore affect the relation between WDRC speed and working memory. The purpose of this study was to examine this relation across a range of reverberant conditions.
Design: Twenty-eight older listeners with mild-to-moderate sensorineural hearing impairment were recruited for the present study. Individual working memory was measured using a Reading Span test. Sentences were combined with noise at two signal to noise ratios (2 and 5 dB SNR), and reverberation was simulated at a range of reverberation times (0.00, 0.75, 1.50, and 3.00 sec). Speech intelligibility was measured as listeners heard the sentences processed with simulated fast-acting and slow-acting WDRC conditions.
Results: There was a significant relation between WDRC speed and working memory with minimal or no reverberation. Consistent with previous research, this relation was such that individuals with high working memory had higher speech intelligibility with fast-acting WDRC, and individuals with low working memory performed better with slow-acting WDRC. However, at longer reverberation times, there was no relation between WDRC speed and working memory.
Conclusions: Consistent with previous studies, results suggest that there is an advantage of tailoring WDRC speed based on an individual’s working memory under anechoic conditions. However, the present results further suggest that there may not be such a benefit in reverberant listening environments due to reduction in signal modulation.

Auditory Evoked Responses in Older Adults With Normal Hearing, Untreated, and Treated Age-Related Hearing Loss

01-09-2019 – McClannahan, Katrina S.; Backer, Kristina C.; Tremblay, Kelly L.

Journal Article

Objectives: The goal of this study was to identify the effects of auditory deprivation (age-related hearing loss) and auditory stimulation (history of hearing aid use) on the neural registration of sound across two stimulus presentation conditions: (1) equal sound pressure level and (2) equal sensation level.
Design: We used a between-groups design involving three groups of 14 older adults (n = 42; 62 to 84 years): (1) clinically defined normal hearing (≤25 dB from 250 to 8000 Hz, bilaterally), (2) bilateral mild–moderate/moderately severe sensorineural hearing loss with no history of hearing aid use, and (3) bilateral mild–moderate/moderately severe sensorineural hearing loss with bilateral hearing aid use for at least the past 2 years.
Results: There were significant delays in the auditory P1-N1-P2 complex in older adults with hearing loss compared with their normal-hearing peers when using equal sound pressure levels for all participants. However, when the degree and configuration of hearing loss were accounted for through the presentation of equal sensation level stimuli, no latency delays were observed. These results suggest that stimulus audibility modulates P1-N1-P2 morphology and should be controlled for when defining deprivation- and stimulus-related neuroplasticity in people with hearing loss. Moreover, a history of auditory stimulation, in the form of hearing aid use, does not appreciably alter the neural registration of unaided auditory evoked brain activity when quantified by the P1-N1-P2.
Conclusions: When comparing auditory cortical responses in older adults with and without hearing loss, stimulus audibility, and not hearing loss–related neurophysiological changes, results in delayed response latency for those with age-related hearing loss. Future studies should carefully consider stimulus presentation levels when drawing conclusions about deprivation- and stimulation-related neuroplasticity. Additionally, auditory stimulation, in the form of a history of hearing aid use, does not significantly affect the neural registration of sound when quantified using the P1-N1-P2–evoked response.

Masked Sentence Recognition in Children, Young Adults, and Older Adults: Age-Dependent Effects of Semantic Context and Masker Type

01-09-2019 – Buss, Emily; Hodge, Sarah E.; Calandruccio, Lauren; Leibold, Lori J.; Grose, John H.

Journal Article

Objectives: Masked speech recognition in normal-hearing listeners depends in part on masker type and semantic context of the target. Children and older adults are more susceptible to masking than young adults, particularly when the masker is speech. Semantic context has been shown to facilitate noise-masked sentence recognition in all age groups, but it is not known whether age affects a listener’s ability to use context with a speech masker. The purpose of the present study was to evaluate the effect of masker type and semantic context of the target as a function of listener age.
Design: Listeners were children (5 to 16 years), young adults (19 to 30 years), and older adults (67 to 81 years), all with normal or near-normal hearing. Maskers were either speech-shaped noise or two-talker speech, and targets were either semantically correct (high context) sentences or semantically anomalous (low context) sentences.
Results: As predicted, speech reception thresholds were lower for young adults than either children or older adults. Age effects were larger for the two-talker masker than the speech-shaped noise masker, and the effect of masker type was larger in children than older adults. Performance tended to be better for targets with high than low semantic context, but this benefit depended on age group and masker type. In contrast to adults, children benefitted less from context in the two-talker speech masker than the speech-shaped noise masker. Context effects were small compared with differences across age and masker type.
Conclusions: Different effects of masker type and target context are observed at different points across the lifespan. While the two-talker masker is particularly challenging for children and older adults, the speech masker may limit the use of semantic context in children but not adults.

Effects of Phantom Electrode Stimulation on Vocal Production in Cochlear Implant Users

01-09-2019 – Caldwell, Meredith T.; Jiradejvong, Patpong; Limb, Charles J.

Journal Article

Objectives: Cochlear implant (CI) users suffer from a range of speech production impairments, including stuttering and poor vocal control of pitch and intensity. Although little research has focused on the role of auditory feedback in the speech of CI users, these impairments could be due in part to limited access to the low-frequency cues inherent in CI-mediated listening. Phantom electrode stimulation (PES) represents a novel application of current steering that extends access to low frequencies for CI recipients; notably, PES transmits frequencies below 300 Hz, whereas participants’ everyday-listening (Baseline) programs do not. The objective of this study was to explore the effects of PES on multiple frequency-related characteristics of voice production.
Design: Eight postlingually deafened, adult Advanced Bionics CI users underwent a series of vocal production tests including Tone Repetition, Vowel Sound Production, Passage Reading, and Picture Description. Participants completed all of these tests twice: once with PES and once using their program used for everyday listening (Baseline). An additional test, Automatic Modulation, was included to measure acute effects of PES and was completed only once. This test involved switching between PES and Baseline at specific time intervals in real time as participants read a series of short sentences. Finally, a subjective Vocal Effort measurement was also included.
Results: In Tone Repetition, the fundamental frequencies (F0) of tones produced using PES and the size of musical intervals produced using PES were significantly more accurate (closer to the target) compared with Baseline in specific gender, target tone range, and target tone type testing conditions. In the Vowel Sound Production task, vowel formant profiles produced using PES were closer to that of the general population compared with those produced using Baseline. The Passage Reading and Picture Description task results suggest that PES reduces measures of pitch variability (F0 standard deviation and range) in natural speech production. No significant results were found in comparisons of PES and Baseline in the Automatic Modulation task nor in the Vocal Effort task.
Conclusions: The findings of this study suggest that usage of PES increases accuracy of pitch matching in repeated sung tones and frequency intervals, possibly due to more accurate F0 representation. The results also suggest that PES partially normalizes the vowel formant profiles of select vowel sounds. PES seems to decrease pitch variability of natural speech and appears to have limited acute effects on natural speech production, though this finding may be due in part to paradigm limitations. On average, subjective ratings of vocal effort were unaffected by the usage of PES versus Baseline.

Hearing Impairment and Perceived Clarity of Predictable Speech

01-09-2019 – Signoret, Carine; Rudner, Mary

Journal Article

Objectives: The precision of stimulus-driven information is less critical for comprehension when accurate knowledge-based predictions of the upcoming stimulus can be generated. A recent study in listeners without hearing impairment (HI) has shown that form- and meaning-based predictability independently and cumulatively enhance perceived clarity of degraded speech. In the present study, we investigated whether form- and meaning-based predictability enhanced the perceptual clarity of degraded speech for individuals with moderate to severe sensorineural HI, a group for whom such enhancement may be particularly important.
Design: Spoken sentences with high or low semantic coherence were degraded by noise-vocoding and preceded by matching or nonmatching text primes. Matching text primes allowed generation of form-based predictions while semantic coherence allowed generation of meaning-based predictions.
Results: The results showed that both form- and meaning-based predictions make degraded speech seem clearer to individuals with HI. The benefit of form-based predictions was seen across levels of speech quality and was greater for individuals with HI in the present study than for individuals without HI in our previous study. However, for individuals with HI, the benefit of meaning-based predictions was only apparent when the speech was slightly degraded. When it was more severely degraded, the benefit of meaning-based predictions was only seen when matching text primes preceded the degraded speech. The benefit in terms of perceptual clarity of meaning-based predictions was positively related to verbal fluency but not working memory performance.
Conclusions: Taken together, these results demonstrate that, for individuals with HI, form-based predictability has a robust effect on perceptual clarity that is greater than the effect previously shown for individuals without HI. However, when speech quality is moderately or severely degraded, meaning-based predictability is contingent on form-based predictability. Further, the ability to mobilize the lexicon seems to contribute to the strength of meaning-based predictions. Whereas individuals without HI may be able to devote explicit working memory capacity for storing meaning-based predictions, individuals with HI may already be using all available explicit capacity to process the degraded speech and thus become reliant on explicit skills such as their verbal fluency to generate useful meaning-based predictions.

High-Variability Sentence Recognition in Long-Term Cochlear Implant Users: Associations With Rapid Phonological Coding and Executive Functioning

01-09-2019 – Smith, Gretchen N. L.; Pisoni, David B.; Kronenberger, William G.

Journal Article

Objectives: The objective of the present study was to determine whether long-term cochlear implant (CI) users would show greater variability in rapid phonological coding skills and greater reliance on slow-effortful compensatory executive functioning (EF) skills than normal-hearing (NH) peers on perceptually challenging high-variability sentence recognition tasks. We tested the following three hypotheses: First, CI users would show lower scores on sentence recognition tests involving high speaker and dialect variability than NH controls, even after adjusting for poorer sentence recognition performance by CI users on a conventional low-variability sentence recognition test. Second, variability in fast-automatic rapid phonological coding skills would be more strongly associated with performance on high-variability sentence recognition tasks for CI users than NH peers. Third, compensatory EF strategies would be more strongly associated with performance on high-variability sentence recognition tasks for CI users than NH peers.
Design: Two groups of children, adolescents, and young adults aged 9 to 29 years participated in this cross-sectional study: 49 long-term CI users (≥7 years) and 56 NH controls. All participants were tested on measures of rapid phonological coding (Children’s Test of Nonword Repetition), conventional sentence recognition (Harvard Sentence Recognition Test), and two novel high-variability sentence recognition tests that varied the indexical attributes of speech (Perceptually Robust English Sentence Test Open-set test and Perceptually Robust English Sentence Test Open-set test-Foreign Accented English test). Measures of EF included verbal working memory (WM), spatial WM, controlled cognitive fluency, and inhibition concentration.
Results: CI users scored lower than NH peers on both tests of high-variability sentence recognition even after conventional sentence recognition skills were statistically controlled. Correlations between rapid phonological coding and high-variability sentence recognition scores were stronger for the CI sample than for the NH sample even after basic sentence perception skills were statistically controlled. Scatterplots revealed different ranges and slopes for the relationship between rapid phonological coding skills and high-variability sentence recognition performance in CI users and NH peers. Although no statistically significant correlations between EF strategies and sentence recognition were found in the CI or NH sample after use of a conservative Bonferroni-type correction, medium to high effect sizes for correlations between verbal WM and sentence recognition in the CI sample suggest that further investigation of this relationship is needed.
Conclusions: These findings provide converging support for neurocognitive models that propose two channels for speech-language processing: a fast-automatic channel that predominates whenever possible and a compensatory slow-effortful processing channel that is activated during perceptually challenging speech processing tasks that are not fully managed by the fast-automatic channel (ease of language understanding, framework for understanding effortful listening, and auditory neurocognitive model). CI users showed significantly poorer performance on measures of high-variability sentence recognition than NH peers, even after simple sentence recognition was controlled. Nonword repetition scores showed almost no overlap between CI and NH samples, and correlations between nonword repetition scores and high-variability sentence recognition were consistent with greater reliance on engagement of fast-automatic phonological coding for high-variability sentence recognition in the CI sample than in the NH sample. Further investigation of the verbal WM–sentence recognition relationship in CI users is recommended. Assessment of fast-automatic phonological processing and slow-effortful EF skills may provide a better understanding of speech perception outcomes in CI users in the clinical setting.

Use of Direct-Connect for Remote Speech-Perception Testing in Cochlear Implants

01-09-2019 – Sevier, Joshua D.; Choi, Sangsook; Hughes, Michelle L.

Journal Article

Objectives: Previous research has demonstrated the feasibility of programming cochlear implants (CIs) via telepractice. To effectively use telepractice in a comprehensive manner, all components of a clinical CI visit should be validated using remote technology. Speech-perception testing is important for monitoring outcomes with a CI, but it has yet to be validated for remote service delivery. The objective of this study, therefore, was to evaluate the feasibility of using direct audio input (DAI) as an alternative to traditional sound-booth speech-perception testing for serving people with CIs via telepractice. Specifically, our goal was to determine whether there was a significant difference in speech-perception scores between the remote DAI (telepractice) and the traditional (in-person) sound-booth conditions.
Design: This study used a prospective, split-half design to test speech perception in the remote DAI and in-person sound-booth conditions. Thirty-two adults and older children with CIs participated; all had a minimum of 6 months of experience with their device. Speech-perception tests included the consonant–nucleus–consonant (CNC) words, Hearing-in-Noise Test (HINT) sentences, and Arizona Biomedical Institute at Arizona State University (AzBio) sentences. All three tests were administered at levels of 50 and 60 dBA in quiet. Sentence stimuli were also presented in 4-talker babble at signal to noise ratios (SNRs) of +10 and +5 dB for both the 50- and 60-dBA presentation levels. A repeated-measures analysis of variance was used to assess the effects of location (remote, in person), stimulus level (50, 60 dBA), and SNR (if applicable: quiet, +10, +5 dB) on each outcome measure (CNC, HINT, AzBio).
Results: The results showed no significant effect of location for any of the tests administered (p > 0.1). There was no significant effect of presentation level for CNC words or phonemes (p > 0.2). There was, however, a significant effect of level (p < 0.001) for both HINT and AzBio sentences, but the direction of the effect was opposite of what was expected: scores were poorer for 60 dBA than for 50 dBA. For both sentence tests, there was a significant effect of SNR, with poorer performance for worsening SNRs, as expected.
Conclusions: The present study demonstrated that speech-perception testing via telepractice is feasible using DAI. There was no significant difference in scores between the remote and in-person conditions, which suggests that DAI testing can be used as a valid alternative to standard sound-booth testing. The primary limitation is that the calibration tools are presently not commercially available.

A Comparison of Electrical Stimulation Levels Across Ears for Children With Sequential Bilateral Cochlear Implants

01-09-2019 – Galvin, Karyn L.; Abdi, Roghayeh; Dowell, Richard C.; Nayagam, Bryony

Journal Article

Objectives: To compare threshold and comfortable levels between a first and second cochlear implant (CI) for children, and to consider whether the degree of difference between CIs was related to the age at bilateral implantation or the time between implants. A secondary objective was to examine the changes in levels over time for each CI.
Design: Fifty-seven participants were selected from the 146 children and young adults who received a first Nucleus CI as a child, and received a second implant at the Royal Victorian Eye and Ear Hospital between September 2003 and December 2011. Exclusion criteria included an older implant type, incomplete array insertion, incomplete data available, and a pulse width higher than the default. Using measurements from clinical sessions, the threshold levels, comfortable levels, and dynamic range of electrical stimulation were compared at three electrode array regions and at the “initial” (first 10 weeks), 2-year, and 5-year postoperative time points. The T-ratio and C-ratio for each array region and each time point were calculated by dividing each mean (n = 3 electrodes) level for the second implant by that for the first implant.
Results: The T-ratio was generally not significantly different from one, indicating no differences in threshold levels between the second and first implants; however, threshold levels were lower for the second implant in the apical region at the initial time point, and there was a significant difference in threshold levels in the apical region for children with a Contour Advance array for the second implant and an older-style array (i.e., Contour) for the first implant. For each implant individually, there were no significant changes in threshold levels across time. The C-ratio was significantly <1 at all electrode array regions at all time points, indicating lower comfortable levels for the second implant. The difference between implants was greater for children with variable array type (i.e., a Contour Advance array for the second implant and an older-style Contour or Straight array for the first implant). There was a significant increase in the C-ratio between the initial and 2-year time points, driven by an increase in comfortable levels for the second implant over this time period. A longer time between implants was associated with a narrower dynamic range, due to lower comfortable levels, for the second implant.
Conclusions: For this sequentially implanted group, threshold levels were similar between implants, with some differences in cases with a newer array type for the second implant. Comfortable levels were lower for the second implant; although this difference decreased between the initial and 2-year postoperative time points, it was still evident at 5 years postoperative. A longer time between implants was associated with a narrower dynamic range. These findings are likely to apply to children using other brands of implant. Knowing what to expect in terms of programming children with a second implant will help clinicians to recognize and respond to unexpected outcomes. The work raises important questions to be addressed in future research regarding the implications of the programming outcomes for actual listening performance.
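As a rough illustration only (not the authors' code, and with hypothetical level values), the T-ratio and C-ratio computation described in the Design above amounts to dividing the mean level (over 3 electrodes) for the second implant by that for the first:

```python
# Illustrative sketch: ratio of mean stimulation levels between implants,
# computed per array region and per time point as described in the abstract.

def level_ratio(second_implant_levels, first_implant_levels):
    """Mean level (n = 3 electrodes) for the second implant divided by
    the corresponding mean for the first implant."""
    mean_second = sum(second_implant_levels) / len(second_implant_levels)
    mean_first = sum(first_implant_levels) / len(first_implant_levels)
    return mean_second / mean_first

# Hypothetical comfortable levels (clinical current units) for 3 apical electrodes:
c_ratio = level_ratio([170, 172, 168], [190, 188, 192])
# A C-ratio < 1 indicates lower comfortable levels for the second implant.
```

The same function applied to threshold levels yields the T-ratio; a ratio near 1 indicates similar levels across implants.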

Auditory Localization and Spatial Release From Masking in Children With Suspected Auditory Processing Disorder

01-09-2019 – Boothalingam, Sriram; Purcell, David W.; Allan, Chris; Allen, Prudence; Macpherson, Ewan

Journal Article

Objectives: We sought to investigate whether children referred to our audiology clinic with a complaint of listening difficulty, that is, suspected of auditory processing disorder (APD), have difficulties localizing sounds in noise and whether they have reduced benefit from spatial release from masking.
Design: Forty-seven typically hearing children in the age range of 7 to 17 years took part in the study. Twenty-one typically developing (TD) children served as controls, and the other 26 children, referred to our audiology clinic with listening problems, were the study group: suspected APD (sAPD). The ability to localize a speech target (the word “baseball”) was measured in quiet, broadband noise, and speech babble in a hemi-anechoic chamber. Participants stood at the center of a loudspeaker array that delivered the target in a diffuse noise field created by presenting independent noise from four loudspeakers spaced 90° apart starting at 45°. In the noise conditions, the signal-to-noise ratio was varied between −12 and 0 dB in 6-dB steps by keeping the noise level constant at 66 dB SPL and varying the target level. Localization ability was indexed by two metrics, one assessing variability in the lateral plane, lateral scatter (Lscat), and the other accuracy in the front/back dimension, front/back percent correct (FBpc). Spatial release from masking (SRM) was measured using a modified version of the Hearing in Noise Test (HINT). In this HINT paradigm, speech targets were always presented from the loudspeaker at 0°, and a single noise source was presented either at 0°, 90°, or 270° at 65 dBA. The SRM was calculated as the difference between the 50% correct HINT speech reception threshold obtained when both speech and noise were collocated at 0° and when the noise was presented at either 90° or 270°.
Results: As expected, in both groups, localization in noise improved as a function of signal-to-noise ratio. Broadband noise caused significantly larger disruption in FBpc than in Lscat when compared with speech babble. There were, however, no group effects or group interactions, suggesting that the children in the sAPD group did not differ significantly from TD children in either localization metric (Lscat and FBpc). While a significant SRM was observed in both groups, there were no group effects or group interactions. Collectively, the data suggest that children in the sAPD group did not differ significantly from the TD group for either binaural measure investigated in the study.
Conclusions: As is evident from a few poor performers, some children with listening difficulties may have difficulty in localizing sounds and may not benefit from spatial separation of speech and noise. However, the heterogeneity in APD and the variability in our data do not support the notion that localization is a global APD problem. Future studies that employ a case study design might provide more insights.
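The SRM calculation described in the Design above is a simple difference of speech reception thresholds; a minimal sketch, with hypothetical SRT values, might look like this:

```python
# Illustrative sketch: spatial release from masking (SRM) as the difference
# between the 50%-correct HINT speech reception threshold (SRT, in dB) with
# speech and noise collocated at 0° and the SRT with the noise source moved
# to 90° or 270°.

def spatial_release(srt_collocated_db, srt_separated_db):
    """Positive values indicate a benefit from spatially separating
    the speech target and the noise source."""
    return srt_collocated_db - srt_separated_db

# Hypothetical SRTs: -2 dB collocated, -8 dB with the noise at 90°.
srm = spatial_release(-2.0, -8.0)  # 6 dB of spatial release
```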

Neurophysiological Differences in Emotional Processing by Cochlear Implant Users, Extending Beyond the Realm of Speech

01-09-2019 – Deroche, Mickael L. D.; Felezeu, Mihaela; Paquette, Sébastien; Zeitouni, Anthony; Lehmann, Alexandre

Journal Article

Objective: Cochlear implants (CIs) restore a sense of hearing in deaf individuals. However, they do not transmit the acoustic signal with sufficient fidelity, leading to difficulties in recognizing emotions in voice and in music. The study aimed to explore the neurophysiological bases of these limitations.
Design: Twenty-two adults (18 to 70 years old) with CIs and 22 age-matched controls with normal hearing participated. Event-related potentials (ERPs) were recorded in response to emotional bursts (happy, sad, or neutral) produced in each modality (voice or music) that were for the most part correctly identified behaviorally.
Results: Compared to controls, the N1 and P2 components were attenuated and prolonged in CI users. To a smaller degree, N1 and P2 were also attenuated and prolonged in music compared to voice, in both populations. The N1–P2 complex was emotion-dependent (e.g., reduced and prolonged response to sadness), but this was also true in both populations. In contrast, the later portion of the response, between 600 and 850 ms, differentiated happy and sad from neutral stimuli in normal hearing but not in CI listeners.
Conclusions: The early portion of the ERP waveform reflected primarily the general reduction in sensory encoding by CI users (largely due to CI processing itself), whereas altered emotional processing (by CI users) could be found in the later portion of the ERP and extended beyond the realm of speech.

Different Associations between Auditory Function and Cognition Depending on Type of Auditory Function and Type of Cognition

01-09-2019 – Danielsson, Henrik; Humes, Larry E; Rönnberg, Jerker

Journal Article

Objectives: Previous studies strongly suggest that declines in auditory threshold can lead to impaired cognition. The aim of this study was to expand that picture by investigating how the relationships between age, auditory function, and cognitive function vary with the types of auditory and cognitive function considered.
Design: Three auditory constructs (threshold, temporal-order identification, and gap detection) were modeled to have an effect on four cognitive constructs (episodic long-term memory, semantic long-term memory, working memory, and cognitive processing speed) together with age that could have an effect on both cognitive and auditory constructs. The model was evaluated with structural equation modeling of the data from 213 adults ranging in age from 18 to 86 years.
Results: The model provided a good fit to the data. Regarding the auditory measures, temporal-order identification had the strongest effect on the cognitive functions, followed by weaker indirect effects for gap detection and nonsignificant effects for threshold. Regarding the cognitive measures, the association with audition was strongest for semantic long-term memory and working memory but weaker for episodic long-term memory and cognitive speed. Age had a very strong effect on threshold and cognitive speed, a moderate effect on temporal-order identification, episodic long-term memory, and working memory, a weak effect on gap detection, and a nonsignificant, close-to-zero effect on semantic long-term memory.
Conclusions: The result shows that auditory temporal-order function has the strongest effect on cognition, which has implications both for which auditory concepts to include in cognitive hearing science experiments and for practitioners. The fact that the total effect of age was different for different aspects of cognition and partly mediated via auditory concepts is also discussed.

Benefit of Higher Maximum Force Output on Listening Effort in Bone-Anchored Hearing System Users: A Pupillometry Study

01-09-2019 – Bianchi, Federica; Wendt, Dorothea; Wassard, Christina; Maas, Patrick; Lunner, Thomas; Rosenbom, Tove; Holmberg, Marcus

Journal Article

Objectives: The aim of this study was to compare listening effort, as estimated via pupillary response, during a speech-in-noise test in bone-anchored hearing system (BAHS) users wearing three different sound processors. The three processors, Ponto Pro (PP), Ponto 3 (P3), and Ponto 3 SuperPower (P3SP), differ in terms of maximum force output (MFO) and MFO algorithm. The hypothesis was that listeners would allocate lower listening effort with the P3SP than with the PP, as a consequence of a higher MFO and, hence, fewer saturation artifacts in the signal.
Design: Pupil dilations were recorded in 21 BAHS users with a conductive or mixed hearing loss, during a speech-in-noise test performed at positive signal-to-noise ratios (SNRs), where the speech and noise levels were individually adjusted to lead to 95% correct intelligibility with the PP. The listeners had to listen to a sentence in noise, retain it for 3 seconds and then repeat it, while an eye-tracking camera recorded their pupil dilation. The three sound processors were tested in random order with a single-blinded experimental design. Two conditions were performed at the same SNR: Condition 1, where the speech level was designed to saturate the PP but not the P3SP, and condition 2, where the overall sound level was decreased relative to condition 1 to reduce saturation artifacts.
Results: The P3SP led to higher speech intelligibility than the PP in both conditions, while the performance with the P3 did not differ from the performance with the PP and the P3SP. Pupil dilations were analyzed in terms of both peak pupil dilation (PPD) and overall pupil dilation via growth curve analysis (GCA). In condition 1, a significantly lower PPD, indicating a decrease in listening effort, was obtained with the P3SP relative to the PP. The PPD obtained with the P3 did not differ from the PPD obtained with the other two sound processors. In condition 2, no difference in PPD was observed across the three processors. The GCA revealed that the overall pupil dilation was significantly lower, in both conditions, with both the P3SP and the P3 relative to the PP, and, in condition 1, also with the P3SP relative to the P3.
Conclusions: The overall effort to process a moderate to loud speech signal was significantly reduced by using a sound processor with a higher MFO (P3SP and P3), as a consequence of fewer saturation artifacts. These findings suggest that sound processors with a higher MFO may help BAHS users in their everyday listening scenarios, in particular in noisy environments, by improving sound quality and, thus, decreasing the amount of cognitive resources utilized to process incoming speech sounds.

A Longitudinal Analysis of Pressurized Wideband Absorbance Measures in Healthy Young Infants

01-09-2019 – Wali, Hamzah A; Mazlan, Rafidah; Kei, Joseph

Journal Article

Objectives: Wideband absorbance (WBA) is an emerging technology to evaluate the conductive pathway (outer and middle ear) in young infants. While a wealth of research has been devoted to measuring WBA at ambient pressure, few studies have investigated the use of pressurized WBA with this population. The purpose of this study was to investigate the effect of age on WBA measured under pressurized conditions in healthy infants from 0 to 6 months of age.
Design: Forty-four full-term healthy neonates (17 males and 27 females) participated in a longitudinal study. The neonates were assessed at 1-month intervals from 0 to 6 months of age using high-frequency tympanometry, acoustic stapedial reflex, distortion product otoacoustic emissions, and pressurized WBA. The values of WBA at tympanometric peak pressure (TPP) and 0 daPa across the frequencies from 0.25 to 8 kHz were analyzed as a function of age.
Results: A linear mixed model analysis, applied to the data, revealed significantly different WBA patterns among the age groups. In general, WBA measured at TPP and 0 daPa decreased at low frequencies (<0.4 kHz) and increased at high frequencies (2 to 5 and 8 kHz) with age. Specifically, WBA measured at TPP and 0 daPa in 3- to 6-month-olds was significantly different from that of 0- to 2-month-olds at low (0.25 to 0.31 kHz) and high (2 to 5 and 8 kHz) frequencies. However, there were no significant differences between WBA measured at TPP and 0 daPa for infants from 3 to 6 months of age.
Conclusions: The present study provided clear evidence of maturation of the outer and middle ear system in healthy infants from birth to 6 months. Therefore, age-specific normative data of pressurized WBA are warranted.

Speech Envelope Enhancement Instantaneously Effaces Atypical Speech Perception in Dyslexia

01-09-2019 – Van Hirtum, Tilde; Moncada-Torres, Arturo; Ghesquière, Pol; Wouters, Jan

Journal Article

Objectives: Increasing evidence exists that poor speech perception abilities precede the phonological deficits typically observed in dyslexia, a developmental disorder in learning to read. Impaired processing of dynamic features of speech, such as slow amplitude fluctuations and transient acoustic cues, disrupts effortless tracking of the speech envelope and constrains the development of adequate phonological skills. In this study, a speech envelope enhancement (EE) strategy was implemented to reduce speech perception deficits in students with dyslexia. The EE specifically emphasizes onset cues and reinforces the temporal structure of the speech envelope.
Design: Speech perception was assessed in 42 students with and without dyslexia using a sentence repetition task in a speech-weighted background noise. Both natural and vocoded speech were used to assess the contribution of the temporal envelope to the speech perception deficit. Envelope-enhanced counterparts of each baseline condition were included to assess the effect of the EE algorithm. In addition to speech-in-noise perception, general cognitive abilities were assessed.
Results: Results demonstrated that students with dyslexia not only benefit from EE but benefit more from it than typical readers. Hence, EE completely normalized speech reception thresholds for students with dyslexia under adverse listening conditions. In addition, a correlation between speech perception deficits and phonological processing was found for students with dyslexia, further supporting the relation between speech perception abilities and reading skills. Similar results and relations were found for conditions with natural and vocoded speech, providing evidence that speech perception deficits in dyslexia stem from difficulties in processing the temporal envelope.
Conclusions: Using speech EE, speech perception skills in students with dyslexia were improved passively and instantaneously, without requiring any explicit learning. In addition, the observed positive relationship between speech processing and advanced phonological skills opens new avenues for specific intervention strategies that directly target the potential core deficit in dyslexia.

Spectral-temporally modulated ripple test Lite for computeRless Measurement (SLRM): A Nonlinguistic Test for Audiology Clinics

01-09-2019 – Landsberger, David M.; Stupak, Natalia; Aronoff, Justin M.

Journal Article

Objectives: Many clinics are faced with the difficulty of evaluating performance in patients who speak a language for which there are no validated tests. It would be desirable to have a nonlinguistic method of evaluating these patients. Spectral ripple tests are nonlinguistic and highly correlated with speech identification performance. However, they are generally not amenable to clinical environments, as they typically require the use of computers, which are often not found in clinic sound booths. In this study, we evaluate the Spectral-temporally Modulated Ripple Test (SMRT) Lite for computeRless Measurement (SLRM), a new variant of the adaptive SMRT that can be implemented via a CD player.
Design: SMRT and SLRM were measured for 10 normal hearing and 10 cochlear implant participants.
Results: Performance on the two tests was highly correlated (r = 0.97).
Conclusions: The results suggest that SLRM can be used interchangeably with SMRT but can be implemented without a computer.

Associations Between Telomere Length and Hearing Status in Mid-Childhood and Midlife: Population-Based Cross-Sectional Study

01-09-2019 – Wang, Jing; Nguyen, Minh Thien; Sung, Valerie; Grobler, Anneke; Burgner, David; Saffery, Richard; Wake, Melissa

Journal Article

Objectives: The purpose of this study is to determine if telomere length (a biomarker of aging) is associated with hearing acuity in mid-childhood and midlife.
Design: The study was based on a population-based cross-sectional sample within the Longitudinal Study of Australian Children with telomere length and audiometry data. We calculated the high Fletcher Index (hFI, mean threshold of 1, 2, and 4 kHz), defining hearing loss as a threshold >15 dB HL (better ear). Linear and logistic regression analyses quantified associations of telomere length with continuous hearing threshold and binary hearing loss outcomes, respectively.
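The hFI and hearing-loss definition used here reduce to a simple mean and cutoff; a minimal sketch (with hypothetical threshold values, not the study's data) follows:

```python
# Illustrative sketch: the high Fletcher Index (hFI) as the mean of the
# better-ear thresholds at 1, 2, and 4 kHz, with hearing loss defined as
# hFI > 15 dB HL, per the definitions in the abstract.

HFI_FREQS_KHZ = (1, 2, 4)
HEARING_LOSS_CUTOFF_DB_HL = 15

def high_fletcher_index(thresholds_db_hl):
    """Mean better-ear threshold (dB HL) across the hFI frequencies.

    `thresholds_db_hl` maps frequency in kHz to better-ear threshold."""
    return sum(thresholds_db_hl[f] for f in HFI_FREQS_KHZ) / len(HFI_FREQS_KHZ)

def has_hearing_loss(thresholds_db_hl):
    return high_fletcher_index(thresholds_db_hl) > HEARING_LOSS_CUTOFF_DB_HL

# Hypothetical better-ear thresholds (kHz -> dB HL):
hfi = high_fletcher_index({1: 10, 2: 15, 4: 20})  # 15.0 dB HL, not > 15, so no hearing loss
```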
Results: One thousand one hundred ninety-five children (mean age 11.4 years, SD 0.5) and 1334 parents (mean age 43.9 years, SD 5.1) were included in analyses. Mean (SD) telomere length (T/S ratio) was 1.09 (0.55) for children and 0.81 (0.38) for adults; hFI (dB HL) was 8.0 (5.6) for children and 13.1 (7.0) for adults, with 8.4% and 25.9%, respectively, showing hearing loss.
Telomere length was not associated with hearing threshold or hearing loss in children (hFI: OR, 0.99; 95% confidence interval, 0.55 to 1.78) or adults (hFI: OR, 1.35; 95% confidence interval, 0.81 to 2.25).
Conclusions: Telomere length was not associated with hearing acuity in children or their midlife parents.