Ear and Hearing

Editorial: Preregistration and Open Science Practices in Hearing Science and Audiology: The Time Has Come

01-01-2020 – Svirsky, Mario A.

No abstract available

Psychometric Properties of Cognitive-Motor Dual-Task Studies With the Aim of Developing a Test Protocol for Persons With Vestibular Disorders: A Systematic Review

01-01-2020 – Danneels, Maya; Van Hecke, Ruth; Keppler, Hannah; Degeest, Sofie; Cambier, Dirk; van de Berg, Raymond; Van Rompaey, Vincent; Maes, Leen

Journal Article

Objectives: Patients suffering from vestibular disorders (VD) often present with impairments in cognitive domains such as visuospatial ability, memory, executive function, attention, and processing speed. These symptoms can be attributed to extensive vestibular projections throughout the cerebral cortex and subcortex on the one hand, and to increased cognitive-motor interference (CMI) on the other hand. CMI can be assessed by performing cognitive-motor dual-tasks (DTs). The existing literature on this topic is scarce and varies greatly when it comes to test protocol, type and degree of vestibular impairment, and outcome. To develop a reliable and sensitive test protocol for VD patients, an overview of the existing reliability and validity studies on DT paradigms will be given for a variety of populations, such as persons with dementia, multiple sclerosis, Parkinson’s disease, or stroke, and older adults.
Design: The systematic review was conducted in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. An extensive literature search on psychometric properties of cognitive-motor DTs was run on MEDLINE, Embase, and Cochrane Databases. The studies were assessed for eligibility by two independent researchers, and their methodological quality was subsequently evaluated using the Consensus-based Standards for the selection of health Measurement Instruments (COSMIN).
Results and Conclusions: Thirty-three studies were included in the current review. Based on the reliability and validity calculations, including a static as well as dynamic motor task seems valuable in a DT protocol for VD patients. To evoke CMI maximally in this population, both motor tasks should be performed while challenging the vestibular cognitive domains. Out of the large amount of cognitive tasks employed in DT studies, a clear selection for each of these domains, except for visuospatial abilities, could be made based on this review. The use of the suggested DTs will give a more accurate and daily life representation of cognitive and motor deficiencies and their interaction in the VD population.

Improving Clinical Outcomes in Cochlear Implantation Using Glucocorticoid Therapy: A Review

01-01-2020 – Cortés Fuentes, Ignacio A.; Videhult Pierre, Pernilla; Engmér Berglin, Cecilia

Journal Article

Cochlear implant surgery is a successful procedure for auditory rehabilitation of patients with severe to profound hearing loss. However, cochlear implantation may lead to damage to the inner ear, which decreases residual hearing and alters vestibular function. It is now of increasing interest to preserve residual hearing during this surgery because this is related to better speech and music perception, and hearing in complex listening environments. Thus, various approaches have been explored to reduce cochlear implantation-related injury, including periprocedural glucocorticoids because of their anti-inflammatory properties. Different routes of administration have been used to deliver glucocorticoids. However, several drawbacks remain, including systemic side effects, unknown pharmacokinetic profiles, and complex delivery methods. In the present review, we discuss the role of periprocedural glucocorticoid therapy in decreasing cochlear implantation-related injury, thus preserving inner ear function after surgery. Moreover, we highlight the pharmacokinetic evidence and clinical outcomes that would support further interventions.

Middle Ear Muscle Reflex and Word Recognition in “Normal-Hearing” Adults: Evidence for Cochlear Synaptopathy?

01-01-2020 – Mepani, Anita M.; Kirk, Sarah A.; Hancock, Kenneth E.; Bennett, Kara; de Gruttola, Victor; Liberman, M. Charles; Maison, Stéphane F.

Journal Article

Objectives: Permanent threshold elevation after noise exposure, ototoxic drugs, or aging is caused by loss of sensory cells; however, animal studies show that hair cell loss is often preceded by degeneration of synapses between sensory cells and auditory nerve fibers. The silencing of these neurons, especially those with high thresholds and low spontaneous rates, degrades auditory processing and may contribute to difficulties in understanding speech in noise. Although cochlear synaptopathy can be diagnosed in animals by measuring suprathreshold auditory brainstem responses, its diagnosis in humans remains a challenge. In mice, cochlear synaptopathy is also correlated with measures of middle ear muscle (MEM) reflex strength, possibly because the missing high-threshold neurons are important drivers of this reflex. The authors hypothesized that measures of the MEM reflex might be better than other assays of peripheral function in predicting difficulties hearing in difficult listening environments in human subjects.
Design: The authors recruited 165 normal-hearing healthy subjects, between 18 and 63 years of age, with no history of ear or hearing problems, no history of neurologic disorders, and unremarkable otoscopic examinations. Word recognition in quiet and in difficult listening situations was measured in four ways: using isolated words from the Northwestern University auditory test number six corpus with either (a) 0 dB signal to noise, (b) 45% time compression with reverberation, or (c) 65% time compression with reverberation, and (d) with a modified version of the QuickSIN. Audiometric thresholds were assessed at standard and extended high frequencies. Outer hair cell function was assessed by distortion product otoacoustic emissions (DPOAEs). Middle ear function and reflexes were assessed using three methods: the acoustic reflex threshold as measured clinically, wideband tympanometry as measured clinically, and a custom wideband method that uses a pair of click probes flanking an ipsilateral noise elicitor. Other aspects of peripheral auditory function were assessed by measuring click-evoked gross potentials, that is, summating potential (SP) and action potential (AP) from ear canal electrodes.
Results: After adjusting for age and sex, word recognition scores were uncorrelated with audiometric or DPOAE thresholds, at either standard or extended high frequencies. MEM reflex thresholds were significantly correlated with scores on isolated word recognition, but not with the modified version of the QuickSIN. The highest pairwise correlations were seen using the custom assay. AP measures were correlated with some of the word scores, but not as highly as seen for the MEM custom assay, and only if amplitude was measured from SP peak to AP peak, rather than baseline to AP peak. The highest pairwise correlations with word scores, on all four tests, were seen with the SP/AP ratio, followed closely by SP itself. When all predictor variables were combined in a stepwise multivariate regression, SP/AP dominated models for all four word score outcomes. MEM measures only enhanced the adjusted r² values for the 45% time compression test. The only other predictors that enhanced model performance (and only for two outcome measures) were measures of interaural threshold asymmetry.
Conclusions: Results suggest that, among normal-hearing subjects, there is a significant peripheral contribution to diminished hearing performance in difficult listening environments that is not captured by either threshold audiometry or DPOAEs. The significant univariate correlations between word scores and either SP/AP, SP, MEM reflex thresholds, or AP amplitudes (in that order) are consistent with a type of primary neural degeneration. However, interpretation is clouded by uncertainty as to the mix of pre- and postsynaptic contributions to the click-evoked SP. None of the assays presented here has the sensitivity to diagnose neural degeneration on a case-by-case basis; however, these tests may be useful in longitudinal studies to track accumulation of neural degeneration in individual subjects.

Predicting Speech-in-Noise Deficits from the Audiogram

01-01-2020 – Shub, Daniel E.; Makashay, Matthew J.; Brungart, Douglas S.

Journal Article

Objectives: In occupations that involve hearing critical tasks, individuals need to undergo periodic hearing screenings to ensure that they have not developed hearing losses that could impair their ability to safely and effectively perform their jobs. Most periodic hearing screenings are limited to pure-tone audiograms, but in many cases, the ability to understand speech in noisy environments may be more important to functional job performance than the ability to detect quiet sounds. The ability to use audiometric threshold data to identify individuals with poor speech-in-noise performance is of particular interest to the U.S. military, which has an ongoing responsibility to ensure that its service members (SMs) have the hearing abilities they require to accomplish their mission. This work investigates the development of optimal strategies for identifying individuals with poor speech-in-noise performance from the audiogram.
Design: Data from 5487 individuals were used to evaluate a range of classifiers, based exclusively on the pure-tone audiogram, for identifying individuals who have deficits in understanding speech in noise. The classifiers evaluated were based on generalized linear models (GLMs), the speech intelligibility index (SII), binary threshold criteria, and current standards used by the U.S. military. The classifiers were evaluated in a detection theoretic framework where the sensitivity and specificity of the classifiers were quantified. In addition to the performance of these classifiers for identifying individuals with deficits understanding speech in noise, data from 500,733 U.S. Army SMs were used to understand how the classifiers would affect the number of SMs being referred for additional testing.
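As a rough illustration of the detection-theoretic evaluation described in the Design, the sketch below scores a hypothetical binary threshold classifier against hypothetical speech-in-noise labels; the arrays, cutoff values, and function names are illustrative assumptions, not the study's actual criteria or data.
```python
import numpy as np

def binary_threshold_classifier(audiogram, cutoffs_db):
    """Flag an individual if any tested frequency exceeds its per-frequency cutoff.

    audiogram : (n_subjects, n_freqs) thresholds in dB HL
    cutoffs_db : (n_freqs,) criterion in dB HL for each frequency
    """
    return np.any(audiogram > cutoffs_db, axis=1)

def sensitivity_specificity(predicted_positive, true_positive):
    """Hit rate (sensitivity) and correct-rejection rate (specificity)."""
    tp = np.sum(predicted_positive & true_positive)
    fn = np.sum(~predicted_positive & true_positive)
    tn = np.sum(~predicted_positive & ~true_positive)
    fp = np.sum(predicted_positive & ~true_positive)
    return tp / (tp + fn), tn / (tn + fp)

# Illustrative data only: 5 subjects x 4 audiometric frequencies.
audiograms = np.array([[10, 15, 20, 25],
                       [ 5, 10, 40, 55],
                       [20, 25, 30, 35],
                       [ 0,  5, 10, 15],
                       [15, 30, 50, 60]], dtype=float)
poor_sin = np.array([False, True, False, False, True])  # hypothetical speech-in-noise deficit labels
flagged = binary_threshold_classifier(audiograms, cutoffs_db=np.array([25, 25, 35, 45]))
print(sensitivity_specificity(flagged, poor_sin))
```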
Results: A classifier based on binary threshold criteria that was identified through an iterative search procedure outperformed a classifier based on the SII and ones based on GLMs with large numbers of fitted parameters. This suggests that the saturating nature of the SII is important, but that the weights of frequency channels are not optimal for identifying individuals with deficits understanding speech in noise. It is possible that a highly complicated model with many free parameters could outperform the classifiers considered here, but there was only a modest difference between the performance of a classifier based on a GLM with 26 fitted parameters and one based on a simple all-frequency pure-tone average. This suggests that the details of the audiogram are a relatively insensitive predictor of performance in speech-in-noise tasks.
Conclusions: The best classifier identified in this study, which was a binary threshold classifier derived from an iterative search process, does appear to reliably outperform the current threshold criteria used by the U.S. military to identify individuals with abnormally poor speech-in-noise performance, both in terms of fewer false alarms and a greater hit rate. Substantial improvements in the ability to detect SMs with impaired speech-in-noise performance can likely only be obtained by adding some form of speech-in-noise testing to the hearing monitoring program. While the improvements were modest, the overall benefit of adopting the proposed classifier is likely substantial given the number of SMs enrolled in U.S. military hearing conservation and readiness programs.

Children With Congenital Unilateral Sensorineural Hearing Loss: Effects of Late Hearing Aid Amplification—A Pilot Study

01-01-2020 – Johansson, Marlin; Asp, Filip; Berninger, Erik

Journal Article

Objectives: Although children with unilateral hearing loss (uHL) are at high risk of experiencing academic difficulties, speech-language delays, and poor sound localization and speech recognition in noise, studies on hearing aid (HA) outcomes are few. Consequently, it is unknown when and how amplification is optimally provided. The aim was to study whether children with mild-to-moderate congenital unilateral sensorineural hearing loss (uSNHL) benefit from HAs.
Design: All 6- to 11-year-old children with nonsyndromic congenital uSNHL and at least 6 months of HA use were invited (born in Stockholm county council, n = 7). Participants were 6 children (9.7 to 10.8 years old) with late HA fittings (>4.8 years of age). Unaided and aided hearing was studied with a comprehensive test battery in a within-subject design. Questionnaires were used to study overall hearing performance and disability. Sound localization accuracy (SLA) and speech recognition thresholds (SRTs) in competing speech were measured in sound field to study hearing under demanding listening conditions. SLA was measured by recording eye-gaze in response to auditory-visual stimuli presented from 12 loudspeaker–video display pairs arranged equidistantly within ±55° in the frontal horizontal plane. The SRTs were measured for target sentences at 0° in spatially separated (±30° and ±150°) continuous speech. Auditory brainstem responses (ABRs) were obtained in both ears separately to study auditory nerve function at the brainstem level.
Results: The mean ± SD pure-tone average (0.5, 1, 2, and 4 kHz) was 45 ± 8 dB HL and 6 ± 4 dB HL in the impaired and normal hearing ear, respectively (n = 6). Horizontal SLA was significantly poorer in the aided compared with unaided condition. Aided SLA (quantified by an error index) was significantly related to the impaired ear’s ABR I to V interval. Results from questionnaires revealed aided benefit in one-to-one communication, whereas no significant benefit was found for communication in background noise or reverberation. No aided benefit was found for the SRTs in competing speech.
Conclusions: Children with congenital uSNHL benefit from late HA intervention in one-to-one communication but not in demanding listening situations, and there is a risk of degraded SLA. The results indicate that neural transmission time from the impaired cochlea to the upper brainstem may have an important role in unilaterally aided spatial hearing, warranting further study in children with uHL receiving early HA intervention.

Otitis Media in Childhood and Disease in Adulthood: A 40-Year Follow-Up Study

01-01-2020 – Aarhus, Lisa; Homøe, Preben; Engdahl, Bo

Journal Article

Objectives: The pathogenesis of chronic suppurative otitis media (CSOM) includes complex interactions between microbial, immunologic, and genetic factors. To our knowledge, no study has focused on the association between childhood otitis media, immune regulation, inflammatory conditions, and chronic disease in adulthood. The present study aims to assess whether CSOM in childhood predicts immune-related inflammatory disorders or cardiovascular disease in adulthood. Another aim is to assess the association with oto-vestibular diseases in adulthood.
Design: This population cohort study in Norway comprised 51,626 participants (mean age 52 years) who underwent a hearing investigation at 7 to 13 years of age, of whom 189 were diagnosed with CSOM (otorhinolaryngologist diagnosis) and 51,437 had normal hearing thresholds (controls). Data on adult disease were obtained from the Norwegian Patient Registry (ICD-10 codes from the specialist health services). We estimated associations with logistic regression analyses.
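As a rough illustration of the kind of adjusted logistic regression analysis mentioned above, the sketch below fits such a model to simulated data and converts coefficients into odds ratios with confidence intervals; all column names and values are hypothetical, not the registry data.
```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical cohort data: one row per participant (values are simulated).
rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "hearing_loss": rng.integers(0, 2, n),   # adult outcome (0/1)
    "csom_child":   rng.integers(0, 2, n),   # childhood CSOM (0/1)
    "age":          rng.normal(52, 8, n),
    "sex":          rng.integers(0, 2, n),
    "ses":          rng.integers(1, 4, n),   # socio-economic status category
    "smoking":      rng.integers(0, 2, n),
})

# Adjusted logistic regression; exponentiated coefficients are odds ratios.
model = smf.logit("hearing_loss ~ csom_child + age + C(sex) + C(ses) + C(smoking)",
                  data=df).fit(disp=False)
odds_ratios = np.exp(model.params)
ci = np.exp(model.conf_int())   # 95% confidence intervals on the odds-ratio scale
print(odds_ratios["csom_child"], ci.loc["csom_child"].values)
```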
Results: The associations between CSOM in childhood and disease in adulthood were as follows: chronic sinusitis (odds ratio 3.13, 95% confidence interval 1.15 to 8.52); cardiovascular disease (1.38, 1.01 to 1.88); hearing loss (5.58, 3.78 to 8.22); tinnitus (2.62, 1.07 to 6.41). The adult hearing loss among cases with childhood CSOM was most frequently registered as sensorineural. There was no statistically significant increased risk of later asthma (1.84, 0.98 to 3.48), inflammatory bowel disease, inflammatory joint disease, systemic tissue disease, or vestibulopathy. The estimates were adjusted for age, sex, socio-economic status, and smoking.
Conclusion: Our large cohort study, which is the first to focus on the link between otitis media in childhood and immune-related inflammatory disorders later in life, does not demonstrate a clear association. CSOM in childhood was strongly related to adult tinnitus and hearing loss, which was most frequently registered as sensorineural.

The Use of Static and Dynamic Cues for Vowel Identification by Children Wearing Hearing Aids or Cochlear Implants

01-01-2020 – Hedrick, Mark; Thornton, Kristen E. T.; Yeager, Kelly; Plyler, Patrick; Johnstone, Patti; Reilly, Kevin; Springer, Cary

Journal Article

Objective: To examine vowel perception based on dynamic formant transition and/or static formant pattern cues in children with hearing loss while using their hearing aids or cochlear implants. We predicted that sensorineural hearing loss would degrade perception of formant transitions more than static formant patterns, and that shortening the duration of cues would make vowel identification more difficult for these children than for their normal-hearing peers.
Design: A repeated-measures, between-group design was used. Children 4 to 9 years of age from a university hearing services clinic who were fit for hearing aids (13 children) or who wore cochlear implants (10 children) participated. Chronologically age-matched children with normal hearing served as controls (23 children). Stimuli included three naturally produced syllables (/ba/, /bi/, and /bu/), which were presented either in their entirety or segmented to isolate the formant transition or the vowel static formant center. The stimuli were presented to listeners via loudspeaker in the sound field. Aided participants wore their own devices and listened with their everyday settings. Participants chose the vowel presented by selecting from corresponding pictures on a computer screen.
Results: Children with hearing loss were less able to use shortened transition or shortened vowel centers to identify vowels as compared to their normal-hearing peers. Whole syllable and initial transition yielded better identification performance than the vowel center for /ɑ/, but not for /i/ or /u/.
Conclusions: The children with hearing loss may require a longer time window than children with normal hearing to integrate vowel cues over time because of altered peripheral encoding in spectrotemporal domains. Clinical implications include cognizance of the importance of vowel perception when developing habilitative programs for children with hearing loss.

The Effect of Hearing-Protection Devices on Auditory Situational Awareness and Listening Effort

01-01-2020 – Smalt, Christopher J.; Calamia, Paul T.; Dumas, Andrew P.; Perricone, Joseph P.; Patel, Tejash; Bobrow, Johanna; Collins, Paula P.; Markey, Michelle L.; Quatieri, Thomas F.

Journal Article

Objectives: Hearing-protection devices (HPDs) are made available, and often are required, for industrial use as well as military training exercises and operational duties. However, these devices often are disliked, and consequently not worn, in part because they compromise situational awareness through reduced sound detection and localization performance as well as degraded speech intelligibility. In this study, we carried out a series of tests, involving normal-hearing subjects and multiple background-noise conditions, designed to evaluate the performance of four HPDs in terms of their modifications of auditory-detection thresholds, sound-localization accuracy, and speech intelligibility. In addition, we assessed their impact on listening effort to understand how the additional effort required to perceive and process auditory signals while wearing an HPD reduces available cognitive resources for other tasks.
Design: Thirteen normal-hearing subjects participated in a protocol, which included auditory tasks designed to measure detection and localization performance, speech intelligibility, and cognitive load. Each participant repeated the battery of tests with unoccluded ears and four hearing protectors, two active (electronic) and two passive. The tasks were performed both in quiet and in background noise.
Results: Our findings indicate that, in variable degrees, all of the tested HPDs induce performance degradation on most of the conducted tasks as compared to the open ear. Of particular note in this study is the finding of increased cognitive load or listening effort, as measured by visual reaction time, for some hearing protectors during a dual-task, which added working-memory demands to the speech-intelligibility task.
Conclusions: These results indicate that situational awareness can vary greatly across the spectrum of HPDs, and that listening effort is another aspect of performance that should be considered in future studies. The increased listening effort induced by hearing protectors may lead to earlier cognitive fatigue in noisy environments. Further study is required to characterize how auditory performance is limited by the combination of hearing impairment and the use of HPDs, and how the effects of such limitations can be linked to safe and effective use of hearing protection to maximize job performance.

The Revised Hearing Handicap Inventory and Screening Tool Based on Psychometric Reevaluation of the Hearing Handicap Inventories for the Elderly and Adults

01-01-2020 – Cassarly, Christy; Matthews, Lois J.; Simpson, Annie N.; Dubno, Judy R.

Journal Article

Objectives: The present study evaluates the items of the Hearing Handicap Inventory for the Elderly and Hearing Handicap Inventory for Adults (HHIE/A) using Mokken scale analysis (MSA), a type of nonparametric item response theory, and develops updated tools with optimal psychometric properties.
Design: In a longitudinal study of age-related hearing loss, 1447 adults completed the HHIE/A and audiometric testing at baseline. Discriminant validity of the emotional consequences and social/situational effects subscales of the HHIE/A was assessed, and nonparametric item response theory was used to explore dimensionality of the items of the HHIE/A and to refine the scales.
Results: The HHIE/A items form strong unidimensional scales measuring self-perceived hearing handicap, but with a lack of discriminant validity of the two distinct subscales. Two revised scales, the 18-item Revised Hearing Handicap Inventory and the 10-item Revised Hearing Handicap Inventory—Screening, were developed from the common items of the original HHIE/A that met the assumptions of MSA. The items on both of the revised scales can be ordered in terms of increasing difficulty.
Conclusions: The results of the present study suggest that the newly developed Revised Hearing Handicap Inventory and Revised Hearing Handicap Inventory—Screening are strong unidimensional, clinically informative measures of self-perceived hearing handicap that can be used for adults of all ages. The real-data example also demonstrates that MSA is a valuable alternative to classical psychometric analysis.

Electro-Tactile Stimulation Enhances Cochlear-Implant Melody Recognition: Effects of Rhythm and Musical Training

01-01-2020 – Huang, Juan; Lu, Thomas; Sheffield, Benjamin; Zeng, Fan-Gang

Objectives: Electro-acoustic stimulation (EAS) enhances speech and music perception in cochlear-implant (CI) users who have residual low-frequency acoustic hearing. For CI users who do not have low-frequency acoustic hearing, tactile stimulation may be used in a similar fashion as residual low-frequency acoustic hearing to enhance CI performance. Previous studies showed that electro-tactile stimulation (ETS) enhanced speech recognition in noise and tonal language perception for CI listeners. Here, we examined the effect of ETS on melody recognition in both musician and nonmusician CI users.
Design: Nine musician and eight nonmusician CI users were tested in a melody recognition task with or without rhythmic cues in three testing conditions: CI only (E), tactile only (T), and combined CI and tactile stimulation (ETS).
Results: Overall, the combined electrical and tactile stimulation enhanced the melody recognition performance in CI users by 9 percentage points. Two additional findings were observed. First, musician CI users outperformed nonmusician CI users in melody recognition, but the size of the enhancement effect was similar between the two groups. Second, the ETS enhancement was significantly higher with nonrhythmic melodies than rhythmic melodies in both groups.
Conclusions: These findings suggest that, independent of musical experience, the size of the ETS enhancement depends on integration efficiency between tactile and auditory stimulation, and that the mechanism of the ETS enhancement is improved electric pitch perception. The present study supports the hypothesis that tactile stimulation can be used to improve pitch perception in CI users.

Genetic Inheritance of Late-Onset, Down-Sloping Hearing Loss and Its Implications for Auditory Rehabilitation

01-01-2020 – Song, Mee Hyun; Jung, Jinsei; Rim, John Hoon; Choi, Hye Ji; Lee, Hack June; Noh, Byunghwa; Lee, Jun Suk; Gee, Heon Yung; Choi, Jae Young

Journal Article

Objectives: Late-onset, down-sloping sensorineural hearing loss has many genetic and nongenetic etiologies, but the proportion of this commonly encountered type of hearing loss attributable to genetic causes is not well known. In this study, the authors performed genetic analysis using next-generation sequencing techniques in patients showing late-onset, down-sloping sensorineural hearing loss with preserved low-frequency hearing, and investigated the clinical implications of the variants identified.
Design: From a cohort of patients with hearing loss at a tertiary referral hospital, 18 unrelated probands with down-sloping sensorineural hearing loss of late onset were included in this study. Down-sloping hearing loss was defined as a mean low-frequency threshold at 250 Hz and 500 Hz less than or equal to 40 dB HL and a mean high-frequency threshold at 1, 2, and 4 kHz greater than 40 dB HL. The authors performed whole-exome sequencing and segregation analysis to identify the genetic causes and evaluated the outcomes of auditory rehabilitation in the patients.
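A minimal helper implementing the down-sloping inclusion criterion as stated above (frequencies and cutoffs are taken from the abstract; the function name and data layout are illustrative assumptions):
```python
def is_down_sloping(thresholds_db_hl):
    """Check the down-sloping criterion: mean of 250/500 Hz <= 40 dB HL and
    mean of 1/2/4 kHz > 40 dB HL.

    thresholds_db_hl : dict mapping frequency in Hz to threshold in dB HL,
    e.g. {250: 30, 500: 35, 1000: 55, 2000: 65, 4000: 70}.
    """
    low = (thresholds_db_hl[250] + thresholds_db_hl[500]) / 2
    high = (thresholds_db_hl[1000] + thresholds_db_hl[2000] + thresholds_db_hl[4000]) / 3
    return low <= 40 and high > 40

print(is_down_sloping({250: 30, 500: 35, 1000: 55, 2000: 65, 4000: 70}))  # True
```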
Results: There were nine simplex and nine multiplex families included, in which the causative variants were found in six of 18 probands, demonstrating a detection rate of 33.3%. Various types of variants, including five novel and three known variants, were detected in the MYH14, MYH9, USH2A, COL11A2, and TMPRSS3 genes. The outcome of cochlear and middle ear implants in patients identified with pathogenic variants was satisfactory. There was no statistically significant difference between pathogenic variant-positive and pathogenic variant-negative groups in terms of onset age, family history of hearing loss, pure-tone threshold, or speech discrimination scores.
Conclusions: The proportion of patients with late-onset, down-sloping hearing loss identified with potentially causative variants was unexpectedly high. Identification of the causative variants will offer insights on hearing loss progression and prognosis regarding various modes of auditory rehabilitation, as well as possible concomitant syndromic features.

Use of Commercial Virtual Reality Technology to Assess Verticality Perception in Static and Dynamic Visual Backgrounds

01-01-2020 – Zaleski-King, Ashley; Pinto, Robin; Lee, General; Brungart, Douglas

Journal Article

Objectives: The Subjective Visual Vertical (SVV) test and the closely related Rod and Disk Test (RDT) are measures of perceived verticality obtained in static and dynamic visual backgrounds. However, the equipment used for these tests varies across clinics and is often too expensive or too primitive to be appropriate for widespread use. Commercial virtual reality technology, which is now widely available, may provide a more suitable alternative for collecting these measures in clinical populations. This study was designed to investigate verticality perception in symptomatic patients using a modified RDT paradigm administered through a head-mounted display (HMD).
Design: A group of adult patients referred by a physician for vestibular testing based on the presence of dizziness symptoms and a group of healthy adults without dizziness symptoms were included. We investigated degree of visual dependence in both groups by measuring SVV as a function of kinematic changes to the visual background.
Results: When a dynamic background was introduced into the HMD to simulate the RDT, significantly greater shifts in SVV were found for the patient population than for the control population. In patients referred for vestibular testing, the SVV measured with the HMD was significantly correlated with traditional measures of SVV collected in a rotary chair when accounting for head tilt.
Conclusions: This study provides initial proof of concept evidence that reliable SVV measures in static and dynamic visual backgrounds can be obtained using a low-cost commercial HMD system. This initial evidence also suggests that this tool can distinguish individuals with dizziness symptomatology based on SVV performance in dynamic visual backgrounds.

Impact of Lexical Parameters and Audibility on the Recognition of the Freiburg Monosyllabic Speech Test

01-01-2020 – Winkler, Alexandra; Carroll, Rebecca; Holube, Inga

Journal Article

Objective: Correct word recognition is generally determined by audibility, but lexical parameters also play a role. The focus of this study was to examine the impact of both audibility and lexical parameters on speech recognition of the test words of the clinical German Freiburg monosyllabic speech test, and subsequently on the perceptual imbalance between test lists reported in the literature.
Design: For 160 participants with normal hearing who were divided into three groups with different simulated hearing thresholds, monaural speech recognition for the Freiburg monosyllabic speech test was obtained via headphones in quiet at different presentation levels. Software was used to manipulate the original speech material to simulate two different hearing thresholds. All monosyllables were classified according to their frequency of occurrence in contemporary language and the number of lexical neighbors using the Cross-Linguistic Easy-Access Resource for Phonological and Orthographic Neighborhood Density database. Generalized linear mixed-effects regression models were used to evaluate the influences of audibility in terms of the Speech Intelligibility Index and lexical properties of the monosyllables in terms of word frequency (WF) and neighborhood density (ND) on the observed speech recognition per word and per test list, respectively.
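As a rough, simplified illustration of the modeling approach described above, the sketch below fits a fixed-effects logistic GLM with audibility and its interactions with the lexical predictors (the study used mixed-effects models; the random effects are omitted here, and all column names and data are hypothetical):
```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical trial-level data: one row per word presentation (simulated).
rng = np.random.default_rng(1)
n = 5000
df = pd.DataFrame({
    "correct": rng.integers(0, 2, n),     # word identified correctly (0/1)
    "sii":     rng.uniform(0.1, 0.9, n),  # Speech Intelligibility Index
    "wf":      rng.normal(0, 1, n),       # standardized word frequency
    "nd":      rng.normal(0, 1, n),       # standardized neighborhood density
})

# Audibility plus its interactions with the lexical predictors.
glm = smf.glm("correct ~ sii * wf + sii * nd",
              data=df, family=sm.families.Binomial()).fit()
print(glm.summary())
```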
Results: Audibility and interactions of audibility with WF and ND predicted correct identification of the individual monosyllables. Test list recognition was predicted by test list choice, audibility, and ND, as well as by interactions of WF and test list, audibility and ND, ND and test list, and audibility per test list.
Conclusions: Observed differences in speech recognition of the Freiburg monosyllabic speech test, which are well reported in the literature, depend not only on audibility but also on WF, ND, test list choice, and their interactions. The authors conclude that these lexical parameters should be taken into account when creating future speech test materials.

Prediction Model for Audiological Outcomes in Patients With GJB2 Mutations

01-01-2020 – Chen, Pey-Yu; Lin, Yin-Hung; Liu, Tien-Chen; Lin, Yi-Hsin; Tseng, Li-Hui; Yang, Ting-Hua; Chen, Pei-Lung; Wu, Chen-Chi; Hsu, Chuan-Jen

Journal Article

Objectives: Recessive mutations in GJB2 are the most common genetic cause of sensorineural hearing impairment (SNHI) in humans. SNHI related to GJB2 mutations demonstrates a wide variation in audiological features, and there has been no reliable prediction model for hearing outcomes until now. The objectives of this study were to clarify the predominant factors determining hearing outcome and to establish a predictive model for SNHI in patients with GJB2 mutations.
Design: A total of 434 patients confirmed to have biallelic GJB2 mutations were enrolled and divided into three groups according to their GJB2 genotypes. Audiological data, including hearing levels and audiogram configurations, were compared between patients with different genotypes. Univariate and multivariate generalized estimating equation (GEE) analyses were performed to analyze longitudinal data of patients with multiple audiological records.
Results: Of the 434 patients, 346 (79.7%) were homozygous for the GJB2 p.V37I mutation, 55 (12.7%) were compound heterozygous for p.V37I and another GJB2 mutation, and 33 (7.6%) had biallelic GJB2 mutations other than p.V37I. There was a significant difference in hearing level and the distribution of audiogram configurations between the three groups. Multivariate GEE analyses on 707 audiological records of 227 patients revealed that the baseline hearing level and the duration of follow-up were the predominant predictors of hearing outcome, and that hearing levels in patients with GJB2 mutations could be estimated based on these two parameters: (Predicted Hearing Level in dB HL) = 3.78 + 0.96 × (Baseline Hearing Level in dB HL) + 0.55 × (Duration of Follow-Up in years).
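The fitted GEE equation above can be applied directly; a minimal sketch (the function name is illustrative):
```python
def predicted_hearing_level(baseline_db_hl, follow_up_years):
    """Predicted hearing level (dB HL) from the fitted equation reported above."""
    return 3.78 + 0.96 * baseline_db_hl + 0.55 * follow_up_years

# Example: a baseline of 50 dB HL followed for 10 years predicts
# roughly 3.78 + 48.0 + 5.5 = 57.3 dB HL.
print(predicted_hearing_level(50.0, 10.0))
```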
Conclusion: The baseline hearing level and the duration of follow-up are the main prognostic factors for outcome of GJB2-related SNHI. These findings may have important clinical implications in guiding follow-up protocols and designing treatment plans in patients with GJB2 mutations.

Test-Retest Variability in the Characteristics of Envelope Following Responses Evoked by Speech Stimuli

01-01-2020 – Easwar, Vijayalakshmi; Scollie, Susan; Aiken, Steven; Purcell, David

Journal Article

Objectives: The objective of the present study was to evaluate the between-session test-retest variability in the characteristics of envelope following responses (EFRs) evoked by modified natural speech stimuli in young normal hearing adults.
Design: EFRs from 22 adults were recorded in two sessions, 1 to 12 days apart. EFRs were evoked by the token /susa∫i/ (2.05 sec) presented at 65 dB SPL and recorded from the vertex referenced to the neck. The token /susa∫i/, spoken by a male with an average fundamental frequency (f0) of 98.53 Hz, was of interest because of its potential utility as an objective hearing aid outcome measure. Each vowel was modified to elicit two EFRs simultaneously by lowering the f0 in the first formant while maintaining the original f0 in the higher formants. Fricatives were amplitude-modulated at 93.02 Hz and elicited one EFR each. EFRs evoked by vowels and fricatives were estimated using a Fourier analyzer and a discrete Fourier transform, respectively. Detection of EFRs was determined by an F-test. Test-retest variability in EFR amplitude and phase coherence were quantified using correlation, repeated-measures analysis of variance, and the repeatability coefficient. The repeatability coefficient, computed as twice the standard deviation (SD) of test-retest differences, represents the ±95% limits of test-retest variation around the mean difference. Test-retest variability of EFR amplitude and phase coherence were compared using the coefficient of variation, a normalized metric, which represents the ratio of the SD of repeat measurements to its mean. Consistency in EFR detection outcomes was assessed using the test of proportions.
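A minimal sketch of the two variability metrics defined above, the repeatability coefficient and the coefficient of variation, computed on hypothetical two-session amplitudes (all values and variable names are illustrative):
```python
import numpy as np

def repeatability_coefficient(session1, session2):
    """Twice the SD of test-retest differences: ±95% limits around the mean difference."""
    diffs = np.asarray(session1, dtype=float) - np.asarray(session2, dtype=float)
    return 2 * np.std(diffs, ddof=1)

def coefficient_of_variation(session1, session2):
    """Per-listener SD of the repeated measurements divided by their mean."""
    pair = np.stack([session1, session2]).astype(float)
    return np.std(pair, axis=0, ddof=1) / np.mean(pair, axis=0)

# Illustrative EFR amplitudes (nV) for five listeners measured in two sessions.
s1 = np.array([120.0, 95.0, 140.0, 110.0, 85.0])
s2 = np.array([112.0, 101.0, 133.0, 118.0, 90.0])
print(repeatability_coefficient(s1, s2))
print(coefficient_of_variation(s1, s2))
```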
Results: EFR amplitude and phase coherence did not vary significantly between sessions, and were significantly correlated across repeat measurements. The repeatability coefficient for EFR amplitude ranged from 38.5 nV to 45.6 nV for all stimuli, except for /∫/ (71.6 nV). For any given stimulus, the test-retest differences in EFR amplitude of individual participants were not correlated with their test-retest differences in noise amplitude. However, across stimuli, higher repeatability coefficients of EFR amplitude tended to occur when the group mean noise amplitude and the repeatability coefficient of noise amplitude were higher. The test-retest variability of phase coherence was comparable to that of EFR amplitude in terms of the coefficient of variation, and the repeatability coefficient varied from 0.1 to 0.2, with the highest value of 0.2 for /∫/. Mismatches in EFR detection outcomes occurred in 11 of 176 measurements. For each stimulus, the tests of proportions revealed a significantly higher proportion of matched detection outcomes compared to mismatches.
Conclusions: Speech-evoked EFRs demonstrated reasonable repeatability across sessions. Of the eight stimuli, the shortest stimulus /∫/ demonstrated the largest variability in EFR amplitude and phase coherence. The test-retest variability in EFR amplitude could not be explained by test-retest differences in noise amplitude for any of the stimuli. This lack of explanation argues for other sources of variability, one possibility being the modulation of cortical contributions imposed on brainstem-generated EFRs.

Long-Term Sensorineural Hearing Loss in Patients With Blast-Induced Tympanic Membrane Perforations

01-01-2020 – Littlefield, Philip D.; Brungart, Douglas S.

Objective: To describe characteristics of sensorineural hearing loss (SNHL) in patients with blast-induced tympanic membrane (TM) perforations that required surgery.
Design: A retrospective review of hearing outcomes in those who had tympanoplasty for combat blast-induced TM perforations. These were sequential cases from one military otolaryngologist from 2007 to 2012. A total of 87 patients were reviewed, and of those, 49 who had appropriate preinjury, preoperative, and long-term audiograms were included. Those with pre-existing hearing loss were excluded. Preinjury audiograms were used to assess how sensorineural thresholds changed in the ruptured ears, and in the contralateral ear in those with unilateral perforations.
Results: The mean time from injury to the final postoperative audiogram was 522 days. In the ears with TM perforations, 70% had SNHLs of 10 dB or less (by bone conduction pure tone averages). Meanwhile, approximately 8% had threshold shifts >30 dB, averaging 50 dB. The strongest predictor of severe or profound hearing loss was ossicular discontinuity. Thresholds also correlated with bilateral injury and perforation size. In those with unilateral perforations, the SNHL was almost always larger on the side with the perforation. Those with SNHL often had a low-to-mid frequency threshold shift and, in general, audiograms that were flatter across frequencies than those of a typical population of military personnel with similar levels of overall hearing loss.
Conclusions: There is a bimodal distribution of hearing loss in those who experience a blast exposure severe enough to perforate at least one TM. Most ears recover close to their preinjury thresholds, but a minority experience much larger sensorineural threshold shifts. Blast-exposed ears also tend to have a flatter audiogram than those of most service members with similar levels of hearing loss.

Synchrotron Radiation-Based Reconstruction of the Human Spiral Ganglion: Implications for Cochlear Implantation

01-01-2020 – Li, Hao; Schart-Morén, Nadine; Rohani, Seyed Alireza; Ladak, Hanif M.; Rask-Andersen, Helge; Agrawal, Sumit

Journal Article

Objective: To three-dimensionally reconstruct Rosenthal’s canal (RC) housing the human spiral ganglion (SG) using synchrotron radiation phase-contrast imaging (SR-PCI). Straight cochlear implant electrode arrays were inserted to better comprehend the electro-cochlear interface in cochlear implantation (CI).
Design: SR-PCI was used to reconstruct the human cochlea with and without cadaveric CI. Twenty-eight cochleae were volume rendered, of which 12 underwent cadaveric CI with a straight electrode via the round window (RW). Data were input into the 3D Slicer software program and anatomical structures were modeled using a threshold paint tool.
Results: The human RC and SG were reproduced three-dimensionally with artefact-free imaging of electrode arrays. The anatomy of the SG and its relationship to the sensory organ (Corti) and soft and bony structures were assessed.
Conclusions: SR-PCI and computer-based three-dimensional reconstructions demonstrated the relationships among implanted electrodes, angular insertion depths, and the SG for the first time in intact, unstained, and nondecalcified specimens. This information can be used to assess stimulation strategies and future electrode designs, as well as create place-frequency maps of the SG for optimal stimulation strategies of the human auditory nerve in CI.

Children With Normal Hearing Are Efficient Users of Fundamental Frequency and Vocal Tract Length Cues for Voice Discrimination

01-01-2020 – Zaltz, Yael; Goldsworthy, Raymond L.; Eisenberg, Laurie S.; Kishon-Rabin, Liat

Journal Article

Background: The ability to discriminate between talkers assists listeners in understanding speech in a multitalker environment. This ability has been shown to be influenced by sensory processing of vocal acoustic cues, such as fundamental frequency (F0) and formant frequencies that reflect the talker’s vocal tract length (VTL), and by cognitive processes, such as attention and memory. It is, therefore, suggested that children who exhibit immature sensory and/or cognitive processing will demonstrate poor voice discrimination (VD) compared with young adults. Moreover, greater difficulties in VD may be associated with spectral degradation as in children with cochlear implants.
Objectives: The aim of this study was as follows: (1) to assess the use of F0 cues, VTL cues, and the combination of both cues for VD in normal-hearing (NH) school-age children and to compare their performance with that of NH adults; (2) to assess the influence of spectral degradation by means of vocoded speech on the use of F0 and VTL cues for VD in NH children; and (3) to assess the contribution of attention, working memory, and nonverbal reasoning to performance.
Design: Forty-one children, 8 to 11 years of age, were tested with nonvocoded stimuli. Twenty-one of them were also tested with eight-channel, noise-vocoded stimuli. Twenty-one young adults (18 to 35 years) were tested for comparison. A three-interval, three-alternative forced-choice paradigm with an adaptive tracking procedure was used to estimate the difference limens (DLs) for VD when F0, VTL, and F0 + VTL were manipulated separately. Auditory memory, visual attention, and nonverbal reasoning were assessed for all participants.
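As a rough illustration of how an adaptive tracking procedure converges on a difference limen, the sketch below simulates a generic two-down/one-up track with a simulated listener; the rule, step size, stopping criterion, and psychometric assumptions are illustrative and are not those used in the study:
```python
import numpy as np

def simulate_adaptive_track(true_dl, start=12.0, step=2.0, n_reversals=8, seed=0):
    """Generic 2-down/1-up adaptive track converging near a listener's difference limen.

    The listener is simulated with a crude rule: responses are mostly correct
    whenever the cue difference exceeds true_dl, and at 3AFC chance otherwise.
    """
    rng = np.random.default_rng(seed)
    level, correct_in_a_row, direction = start, 0, 0
    reversal_levels = []
    while len(reversal_levels) < n_reversals:
        p_correct = 0.95 if level >= true_dl else 1.0 / 3.0  # 3AFC chance floor
        if rng.random() < p_correct:
            correct_in_a_row += 1
            if correct_in_a_row == 2:              # two correct in a row: make it harder
                correct_in_a_row = 0
                if direction == +1:
                    reversal_levels.append(level)  # track direction changed: reversal
                direction = -1
                level = max(level - step, 0.1)
        else:                                      # one error: make it easier
            correct_in_a_row = 0
            if direction == -1:
                reversal_levels.append(level)
            direction = +1
            level += step
    return np.mean(reversal_levels)                # DL estimate = mean of reversal levels

print(simulate_adaptive_track(true_dl=4.0))
```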
Results: (a) Children’s F0 and VTL discrimination abilities were comparable to those of adults, suggesting that most school-age children utilize both cues effectively for VD. (b) Children’s VD was associated with Trail Making Test scores that assessed visual attention abilities and speed of processing, possibly reflecting their need to recruit cognitive resources for the task. (c) Best DLs were achieved for the combined (F0 + VTL) manipulation for both children and adults, suggesting that children at this age are already capable of integrating spectral and temporal cues. (d) Both children and adults found the VTL manipulations more beneficial for VD compared with the F0 manipulations, suggesting that formant frequencies are more reliable for identifying a specific speaker than F0. (e) Poorer DLs were achieved with the vocoded stimuli, though the children maintained thresholds and patterns of performance across manipulations similar to those of the adults.
Conclusions: The present study is the first to assess the contribution of F0, VTL, and the combined F0 + VTL to the discrimination of speakers in school-age children. The findings support the notion that many NH school-age children have effective spectral and temporal coding mechanisms that allow sufficient VD, even in the presence of spectrally degraded information. These results may challenge the notion that immature sensory processing underlies poor listening abilities in children, further implying that other processing mechanisms contribute to their difficulties to understand speech in a multitalker environment. These outcomes may also provide insight into VD processes of children under listening conditions that are similar to cochlear implant users.

The Effects of GJB2 or SLC26A4 Gene Mutations on Neural Response of the Electrically Stimulated Auditory Nerve in Children

01-01-2020 – Luo, Jianfen; Xu, Lei; Chao, Xiuhua; Wang, Ruijie; Pellittieri, Angela; Bai, Xiaohui; Fan, Zhaomin; Wang, Haibo; He, Shuman

Journal Article

Objectives: This study aimed to (1) investigate the effect of GJB2 and SLC26A4 gene mutations on auditory nerve function in pediatric cochlear implant users and (2) compare their results with those measured in implanted children with idiopathic hearing loss.
Design: Participants included 20 children with biallelic GJB2 mutations, 16 children with biallelic SLC26A4 mutations, and 19 children with idiopathic hearing loss. All but two subjects in the SLC26A4 group had concurrent Mondini malformation and enlarged vestibular aqueduct. All subjects used Cochlear Nucleus devices in their test ears. For each subject, electrophysiological measures of the electrically evoked compound action potential (eCAP) were recorded using both anodic- and cathodic-leading biphasic pulses. Dependent variables (DVs) of interest included slope of the eCAP input/output (I/O) function, the eCAP threshold, and the eCAP amplitude measured at the maximum comfortable level (C level) of the anodic-leading stimulus (i.e., the anodic C level). Slopes of eCAP I/O functions were estimated using statistical modeling with a linear regression function. These DVs were measured at three electrode locations across the electrode array. Generalized linear mixed effect models were used to evaluate the effects of study group, stimulus polarity, and electrode location on each DV.
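As a rough illustration of estimating the slope of an eCAP I/O function with a linear regression, the sketch below fits a least-squares line to hypothetical amplitude-versus-level data (all values are illustrative, not recorded data):
```python
import numpy as np

# Hypothetical eCAP input/output data: stimulation level (device current units)
# versus eCAP amplitude (µV).
levels = np.array([150.0, 160.0, 170.0, 180.0, 190.0, 200.0])
amplitudes = np.array([55.0, 120.0, 210.0, 300.0, 380.0, 455.0])

# Slope of the I/O function from a least-squares linear fit.
slope, intercept = np.polyfit(levels, amplitudes, deg=1)
print(f"slope = {slope:.2f} µV per current-level step")
```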
Results: Steeper slopes of eCAP I/O function, lower eCAP thresholds, and larger eCAP amplitude at the anodic C level were measured for the anodic-leading stimulus compared with the cathodic-leading stimulus in all subject groups. Children with GJB2 mutations showed steeper slopes of eCAP I/O function and larger eCAP amplitudes at the anodic C level than children with SLC26A4 mutations and children with idiopathic hearing loss for both the anodic- and cathodic-leading stimuli. In addition, children with GJB2 mutations showed a smaller increase in eCAP amplitude when the stimulus changed from the cathodic-leading pulse to the anodic-leading pulse (i.e., smaller polarity effect) than children with idiopathic hearing loss. There was no statistically significant difference in slope of eCAP I/O function, eCAP amplitude at the anodic C level, or the size of polarity effect on all three DVs between children with SLC26A4 mutations and children with idiopathic hearing loss. These results suggested that better auditory nerve function was associated with GJB2 but not with SLC26A4 mutations when compared with idiopathic hearing loss. In addition, significant effects of electrode location were observed for slope of eCAP I/O function and the eCAP threshold.
Conclusions: GJB2 and SLC26A4 gene mutations did not alter polarity sensitivity of auditory nerve fibers to electrical stimulation. The anodic-leading stimulus was generally more effective in activating auditory nerve fibers than the cathodic-leading stimulus, despite the presence of GJB2 or SLC26A4 mutations. Patients with GJB2 mutations appeared to have better functional status of the auditory nerve than patients with SLC26A4 mutations who had concurrent Mondini malformation and enlarged vestibular aqueduct and patients with idiopathic hearing loss.

Switching Streams Across Ears to Evaluate Informational Masking of Speech-on-Speech

01-01-2020 – Calcus, Axelle; Schoof, Tim; Rosen, Stuart; Shinn-Cunningham, Barbara; Souza, Pamela

Journal Article

Objectives: This study aimed to evaluate the informational component of speech-on-speech masking. Speech perception in the presence of a competing talker involves not only informational masking (IM) but also a number of masking processes involving interaction of masker and target energy in the auditory periphery. Such peripherally generated masking can be eliminated by presenting the target and masker in opposite ears (dichotically). However, this also reduces IM by providing listeners with lateralization cues that support spatial release from masking (SRM). In tonal sequences, IM can be isolated by rapidly switching the lateralization of dichotic target and masker streams across the ears, presumably producing ambiguous spatial percepts that interfere with SRM. However, it is not clear whether this technique works with speech materials.
Design: Speech reception thresholds (SRTs) were measured in 17 young normal-hearing adults for sentences produced by a female talker in the presence of a competing male talker under three different conditions: diotic (target and masker in both ears), dichotic, and dichotic but switching the target and masker streams across the ears. Because switching rate and signal coherence were expected to influence the amount of IM observed, these two factors varied across conditions. When switches occurred, they were either at word boundaries or periodically (every 116 msec) and either with or without a brief gap (84 msec) at every switch point. In addition, SRTs were measured in a quiet condition to rule out audibility as a limiting factor.
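As a rough illustration of the periodic ear-switching manipulation described above, the sketch below alternates which ear carries the target at a fixed interval and optionally silences a brief span at each switch; the gap handling, parameter names, and values are one simple interpretation, not the study's exact implementation (word-boundary switching would additionally require word onset times):
```python
import numpy as np

def switch_streams(target, masker, fs, switch_ms=116.0, gap_ms=0.0):
    """Build a stereo (n, 2) signal in which target and masker swap ears every switch_ms.

    If gap_ms > 0, the first gap_ms of each segment is silenced in both channels.
    """
    n = min(len(target), len(masker))
    left, right = np.zeros(n), np.zeros(n)
    seg = int(round(fs * switch_ms / 1000.0))
    gap = int(round(fs * gap_ms / 1000.0))
    for i, start in enumerate(range(0, n, seg)):
        stop = min(start + seg, n)
        t_seg, m_seg = target[start:stop].copy(), masker[start:stop].copy()
        if gap:
            t_seg[:gap], m_seg[:gap] = 0.0, 0.0
        if i % 2 == 0:                 # even segments: target left, masker right
            left[start:stop], right[start:stop] = t_seg, m_seg
        else:                          # odd segments: the streams swap ears
            left[start:stop], right[start:stop] = m_seg, t_seg
    return np.stack([left, right], axis=1)

# Example with noise stand-ins for the two talkers at a 16 kHz sampling rate.
fs = 16000
rng = np.random.default_rng(2)
stereo = switch_streams(rng.standard_normal(fs * 2), rng.standard_normal(fs * 2),
                        fs, switch_ms=116.0, gap_ms=84.0)
```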
Results: SRTs were poorer for the four switching dichotic conditions than for the nonswitching dichotic condition, but better than for the diotic condition. Periodic switches without gaps resulted in the worst SRTs compared to the other switch conditions, thus maximizing IM.
Conclusions: These findings suggest that periodically switching the target and masker streams across the ears (without gaps) was most effective in disrupting SRM. Thus, this approach can be used in experiments that seek a relatively pure measure of IM, and could be readily extended to translational research.

The Ototoxic Potential of Cobalt From Metal-on-Metal Hip Implants: Objective Auditory and Vestibular Outcome

01-01-2020 – Leyssens, Laura; Vinck, Bart; Van Der Straeten, Catherine; De Smet, Koen; Dhooge, Ingeborg; Wuyts, Floris L.; Keppler, Hannah; Degeest, Sofie; Valette, Romain; Lim, Rebecca; Maes, Leen

Journal Article

Objectives: During the past decade, the initial popularity of metal-on-metal (MoM) hip implants has shown a progressive decline due to increasingly reported implant failure and revision surgeries. Local as well as systemic toxic side effects have been associated with excessive metal ion release from implants, in which cobalt (Co) plays an important role. The rare condition of systemic cobaltism seems to manifest as a clinical syndrome with cardiac, endocrine, and neurological symptoms, including hearing loss, tinnitus, and imbalance. In most cases described in the literature, revision surgery and the subsequent drop in blood Co level led to (partial) alleviation of the symptoms, suggesting a causal relationship with Co exposure. Moreover, the ototoxic potential of Co has recently been demonstrated in animal experiments. Since its ototoxic potential in humans is merely based on anecdotal case reports, the current study aimed to prospectively and objectively examine the auditory and vestibular function in patients implanted with a MoM hip prosthesis.
Design: Twenty patients (15 males and 5 females, aged between 33 and 65 years) implanted with a primary MoM hip prosthesis were matched for age, gender, and noise exposure to 20 non-implanted control subjects. Each participant was subjected to an extensive auditory (conventional and high-frequency pure tone audiometry, transient evoked and distortion product otoacoustic emissions [TEOAEs and DPOAEs], auditory brainstem responses [ABR]) and vestibular test battery (cervical and ocular vestibular evoked myogenic potentials [cVEMPs and oVEMPs], rotatory test, caloric test, video head impulse test [vHIT]), supplemented with a blood sample collection to determine the plasma Co concentration.
Results: The median (interquartile range) plasma Co concentration was 1.40 (0.70, 6.30) µg/L in the MoM patient group and 0.19 (0.09, 0.34) µg/L in the control group. Within the auditory test battery, a clear trend was observed toward higher audiometric thresholds (11.2 to 16 kHz), lower DPOAE (between 4 and 8 kHz) and total TEOAE (1 to 4 kHz) amplitudes, and a higher interaural latency difference for wave V of the ABR in the patient versus control group (0.01 ≤ p < 0.05). Within the vestibular test battery, considerably longer cVEMP P1 latencies, higher oVEMP amplitudes (0.01 ≤ p < 0.05), and lower asymmetry ratio of the vHIT gain (p < 0.01) were found in the MoM patients. In the patient group, no suggestive association was observed between the plasma Co level and the auditory or vestibular outcome parameters.
Conclusions: The auditory results seem to reflect signs of Co-induced damage to the hearing function in the high frequencies. This corresponds to previous findings on drug-induced ototoxicity and the recent animal experiments with Co, which identified the basal cochlear outer hair cells as primary targets and indicated that the cellular mechanisms underlying the toxicity might be similar. The vestibular outcomes of the current study are inconclusive and require further elaboration, especially with respect to animal studies. The lack of a clear dose–response relationship may question the clinical relevance of our results, but recent findings in MoM hip implant patients have confirmed that this relationship can be complicated by many patient-specific factors.