Impact of Bilateral Vestibulopathy on Spatial and Nonspatial Cognition: A Systematic Review
01-07-2019 – Dobbels, Bieke; Peetermans, Olivier; Boon, Bram; Mertens, Griet; Van de Heyning, Paul; Van Rompaey, Vincent
Objectives: Hearing loss is considered an independent risk factor for dementia. Growing evidence from animal and human studies suggests that not only hearing loss but also vestibular loss might result in cognitive deficits. The objective of this study is to evaluate the presence of spatial and nonspatial cognitive deficits in patients with bilateral vestibulopathy. Because different causes of bilateral vestibulopathy are associated with hearing loss, a further objective is to evaluate whether these cognitive deficits are due to the vestibular loss, to the hearing loss, or to both.
Design: We performed a systematic review according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines. (1) Data sources: MEDLINE and the Cochrane Library. (2) Study selection: Cross-sectional studies investigating cognitive performances in human patients with bilateral vestibulopathy confirmed by quantitative vestibular testing. (3) Data extraction: Independent extraction of articles by three authors using predefined data fields, including patient- and control characteristics and cognitive outcomes.
Results: Ten studies reporting on 126 patients with bilateral vestibulopathy matched the inclusion criteria. Cognitive domains evaluated in patients with bilateral vestibulopathy included visuospatial abilities, memory, language, attention, and executive function. In only three studies, hearing performance of the included patients was briefly described. Nearly all studies demonstrated a significant impairment of spatial cognition in patients with bilateral vestibulopathy. In the few papers investigating nonspatial cognition, worse outcome was demonstrated in patients with bilateral vestibular loss performing cognitive tasks assessing attentional performance, memory, and executive function.
Conclusions: Strong evidence exists that patients with bilateral vestibulopathy suffer from impaired spatial cognition. Recent studies even suggest impairment in cognitive domains other than spatial cognition. However, in all previous studies, conclusions on the link between cognitive performance and vestibular loss were drawn without taking hearing loss into consideration as a possible cause of the cognitive impairment.
Benefits of Cochlear Implantation for Single-Sided Deafness: Data From the House Clinic-University of Southern California-University of California, Los Angeles Clinical Trial
01-07-2019 – Galvin, John J. III; Fu, Qian-Jie; Wilkinson, Eric P.; Mills, Dawna; Hagan, Suzannah C.; Lupo, J. Eric; Padilla, Monica; Shannon, Robert V.
Objectives: Cochlear implants (CIs) have been shown to benefit patients with single-sided deafness (SSD) in terms of tinnitus reduction, localization, speech understanding, and quality of life (QoL). While previous studies have shown cochlear implantation may benefit SSD patients, it is unclear which point of comparison is most relevant: baseline performance before implantation versus performance with the normal-hearing (NH) ear after implantation. In this study, CI outcomes were assessed in SSD patients before and up to 6 mo postactivation. Benefits of cochlear implantation were assessed relative to binaural performance before implantation or relative to performance with the NH ear alone after implantation.
Design: Here, we report data for 10 patients who completed a longitudinal, prospective, Food and Drug Administration–approved study of cochlear implantation for SSD patients. All subjects had severe to profound unilateral hearing loss in one ear and normal hearing in the other ear. All patients were implanted with the MED-EL CONCERTO Flex 28 device. Speech understanding in quiet and in noise, localization, and tinnitus severity (with the CI on or off) were measured before implantation (baseline) and at 1, 3, and 6 mo postactivation of the CI processor. Performance was measured with both ears (binaural), the CI ear alone, and the NH ear alone (the CI ear was plugged and muffed). Tinnitus severity, dizziness severity, and QoL were measured using questionnaires administered before implantation and 6 mo postactivation.
Results: Significant CI benefits were observed for tinnitus severity, localization, speech understanding, and QoL. The degree and time course of CI benefit depended on the outcome measure and the reference point. Relative to binaural baseline performance, significant and immediate (1 mo postactivation) CI benefits were observed for tinnitus severity and speech performance in noise, but localization did not significantly improve until 6 mo postactivation; questionnaire data showed significant improvement in QoL 6 mo postactivation. Relative to NH-only performance after implantation, significant and immediate benefits were observed for tinnitus severity and localization; binaural speech understanding in noise did not significantly improve during the 6-mo study period, due to variability in NH-only performance. There were no correlations between behavioral and questionnaire data, except between tinnitus visual analog scale scores at 6 mo postactivation and Tinnitus Functional Index scores at 6 mo postactivation.
Conclusions: The present behavioral and subjective data suggest that SSD patients greatly benefit from cochlear implantation. However, to fully understand the degree and time course of CI benefit, the outcome measure and point of comparison should be considered. From a clinical perspective, binaural baseline performance is a relevant point of comparison. The lack of correlation between behavioral and questionnaire data suggests that they represent independent measures of CI benefit for SSD patients.
Noise Exposure May Diminish the Musician Advantage for Perceiving Speech in Noise
01-07-2019 – Skoe, Erika; Camera, Sarah; Tufts, Jennifer
Objective: Although numerous studies have shown that musicians have better speech perception in noise (SPIN) compared to nonmusicians, other studies have not replicated the “musician advantage for SPIN.” One factor that has not been adequately addressed in previous studies is how musicians’ SPIN is affected by routine exposure to high levels of sound. We hypothesized that such exposure diminishes the musician advantage for SPIN.
Design: Environmental sound levels were measured continuously for 1 week via body-worn noise dosimeters in 56 college students with diverse musical backgrounds and clinically normal pure-tone audiometric averages. SPIN was measured using the Quick Speech in Noise Test (QuickSIN). Multiple linear regression modeling was used to examine how music practice (years of playing a musical instrument) and routine noise exposure predict QuickSIN scores.
Results: Noise exposure and music practice were both significant predictors of QuickSIN scores, but they had opposing influences, with more years of music practice predicting better QuickSIN scores and greater routine noise exposure predicting worse QuickSIN scores. Moreover, mediation analysis suggests that noise exposure suppresses the relationship between music practice and QuickSIN scores.
Conclusions: Our findings suggest a beneficial relationship between music practice and SPIN that is suppressed by noise exposure.
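The suppression pattern described above, where the practice–SPIN relationship emerges more clearly once noise exposure is controlled for, can be illustrated with a small regression sketch. All variables and coefficients below are invented for illustration (they are not the study's data); lower QuickSIN SNR loss means better performance.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 56  # same sample size as the study, but synthetic values

# Illustrative generative story: music practice improves SPIN directly
# (lower QuickSIN SNR loss), but also raises noise exposure, which
# worsens SPIN -- a classic suppression structure.
years_music = rng.uniform(0, 15, n)
noise_dose = 70 + 1.2 * years_music + rng.normal(0, 3, n)   # dB-like dose
quicksin = 5 - 0.25 * years_music + 0.15 * noise_dose + rng.normal(0, 1, n)

def ols(predictors, y):
    """Ordinary least squares with an intercept; returns coefficients."""
    X = np.column_stack([np.ones(len(y)), *predictors])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

# Total effect of music practice alone (suppressed, near zero):
b_total = ols([years_music], quicksin)[1]
# Direct effect once noise exposure enters the model (clearly negative):
b_direct = ols([years_music, noise_dose], quicksin)[1]

print(f"practice-only slope: {b_total:+.2f}")
print(f"practice slope controlling for noise: {b_direct:+.2f}")
```

With these made-up coefficients, the practice-only slope is much weaker than the slope after controlling for noise exposure, mirroring the suppression the authors report.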
Factors Associated With Successful Setup of a Self-Fitting Hearing Aid and the Need for Personalized Support
01-07-2019 – Convery, Elizabeth; Keidser, Gitte; Hickson, Louise; Meyer, Carly
Objectives: Self-fitting hearing aids have the potential to increase the accessibility of hearing health care. The aims of this study were to (1) identify factors that are associated with the ability to successfully set up a pair of commercially available self-fitting hearing aids; (2) identify factors that are associated with the need for knowledgeable, personalized support in performing the self-fitting procedure; and (3) evaluate performance of the individual steps in the self-fitting procedure.
Design: Sixty adults with hearing loss between the ages of 51 and 85 took part in the study. Half of the participants were current users of bilateral hearing aids; the other half had no previous hearing aid experience. At the first appointment, participants underwent assessments of health locus of control, hearing aid self-efficacy, cognitive status, problem-solving skills, demographic characteristics, and hearing thresholds. At the second appointment, participants followed a set of computer-based instructions accompanied by video clips to self-fit the hearing aids. The self-fitting procedure required participants to customize the physical fit of the hearing aids, insert the hearing aids into the ear, perform self-directed in situ audiometry, and adjust the resultant settings according to their preference. Participants had access to support with the self-fitting procedure from a trained clinical assistant (CA) at all times.
Results: Forty-one (68%) of the participants achieved a successful self-fitting. Participants who self-fit successfully were significantly more likely than those who were unsuccessful to have had previous experience with hearing aids and to own a mobile device (when controlling for four potential covariates). Of the 41 successful self-fitters, 15 (37%) performed the procedure independently and 26 (63%) sought support from the CA. The successful self-fitters who sought CA support were more likely than those who self-fit independently to have a health locus of control that is externally oriented toward powerful others. Success rates on the individual steps in the self-fitting procedure were relatively high. No one step was more problematic than any other, nor was there a systematic tendency for particular participants to make more errors than others. Steps that required use of the hearing aids in conjunction with the self-fitting app on the participant’s mobile device had the highest rates of support use.
Conclusions: The findings of this study suggest that nonaudiologic factors should be considered when selecting suitable candidates for the self-fitting hearing aids evaluated in this study. Although computer-based instructions and video clips were shown to improve self-fitting skill acquisition relative to past studies in which printed instruction booklets were used, the majority of people are still likely to require access to support from trained personnel while carrying out the self-fitting procedure, especially when this requires the use of an app.
Efficacy and Effectiveness of Advanced Hearing Aid Directional and Noise Reduction Technologies for Older Adults With Mild to Moderate Hearing Loss
01-07-2019 – Wu, Yu-Hsiang; Stangl, Elizabeth; Chipara, Octav; Hasan, Syed Shabih; DeVries, Sean; Oleson, Jacob
Objectives: The purpose of the present study was to investigate the laboratory efficacy and real-world effectiveness of advanced directional microphones (DM) and digital noise reduction (NR) algorithms (i.e., premium DM/NR features) relative to basic-level DM/NR features of contemporary hearing aids (HAs). The study also examined the effect of premium HAs relative to basic HAs and the effect of DM/NR features relative to no features.
Design: Fifty-four older adults with mild-to-moderate hearing loss completed a single-blinded crossover trial. Two HA models, one a less-expensive, basic-level device (basic HA) and the other a more-expensive, advanced-level device (premium HA), were used. The DM/NR features of the basic HAs (i.e., basic features) were adaptive DMs and gain-reduction NR with fewer channels. In contrast, the DM/NR features of the premium HAs (i.e., premium features) included adaptive DMs and gain-reduction NR with more channels, bilateral beamformers, speech-seeking DMs, pinna-simulation directivity, reverberation reduction, impulse NR, wind NR, and spatial NR. The trial consisted of four conditions, which were factorial combinations of HA model (premium versus basic) and DM/NR feature status (on versus off). To blind participants regarding the HA technology, no technology details were disclosed and minimal training on how to use the features was provided. In each condition, participants wore bilateral HAs for 5 weeks. Outcomes regarding speech understanding, listening effort, sound quality, localization, and HA satisfaction were measured using laboratory tests, retrospective self-reports (i.e., standardized questionnaires), and in-situ self-reports (i.e., self-reports completed in the real world in real time). A smartphone-based ecological momentary assessment system was used to collect in-situ self-reports.
Results: Laboratory efficacy data generally supported the benefit of premium DM/NR features relative to basic DM/NR, premium HAs relative to basic HAs, and DM/NR features relative to no DM/NR in improving speech understanding and localization performance. Laboratory data also indicated that DM/NR features could improve listening effort and sound quality compared with no features for both basic- and premium-level HAs. For real-world effectiveness, in-situ self-reports first indicated that noisy or very noisy situations did not occur very often in participants’ daily lives (10.9% of the time). Although both retrospective and in-situ self-reports indicated that participants were more satisfied with HAs equipped with DM/NR features than without, there was no strong evidence to support the benefit of premium DM/NR features and premium HAs over basic DM/NR features and basic HAs, respectively.
Conclusions: Although premium DM/NR features and premium HAs outperformed their basic-level counterparts in well-controlled laboratory test conditions, the benefits were not observed in the real world. In contrast, the effect of DM/NR features relative to no features was robust both in the laboratory and in the real world. Therefore, the present study suggests that although both premium and basic DM/NR technologies evaluated in the study have the potential to improve HA outcomes, older adults with mild-to-moderate hearing loss are unlikely to perceive the additional benefits provided by the premium DM/NR features in their daily lives. Limitations concerning the study’s generalizability (e.g., participant’s lifestyle) are discussed.
A Laboratory Evaluation of Contextual Factors Affecting Ratings of Speech in Noise: Implications for Ecological Momentary Assessment
01-07-2019 – Jenstad, Lorienne M.; Gillen, Lise; Singh, Gurjit; DeLongis, Anita; Pang, Flora
Objectives: As hearing aid outcome measures move from retrospective to momentary assessments, it is important to understand how contextual factors influence subjective ratings. Under laboratory-controlled conditions, we examined whether subjective ratings changed as a function of acoustics, response timing, and task variables.
Design: Eighteen adults (age 21 to 85 years; M = 51.4) with sensorineural hearing loss were fitted with hearing aids. Sentences in noise were presented at 3 overall levels (50, 65, and 80 dB SPL) and 3 signal-to-noise ratios (0, +5, and +10 dB SNR). Listeners rated three sound quality dimensions (intelligibility, noisiness, and loudness) under four experimental conditions that manipulated timing and task focus.
Results: The quality ratings changed as the acoustics changed: intelligibility ratings increased with input level, and ratings on all three dimensions varied predictably with level and SNR. Timing of the ratings affected noisiness ratings under certain conditions, whereas the other quality ratings showed no effects of timing or of the secondary task.
Conclusions: The findings of this laboratory study provide evidence to support the conclusion that group-mean listener ratings of loudness, noisiness, and intelligibility change in predictable ways as level and SNR of the speech in noise stimulus are altered. They also provide weak evidence to support the conclusion that timing of the ratings (during or immediately after sound exposure) can affect noisiness ratings under certain conditions, but no evidence to support the conclusion that timing affects other quality ratings. There is also no evidence to support the conclusion that quality ratings are influenced by the presence of, or focus on, a secondary nonauditory task of the type used here.
Intracochlear Electrocochleography: Response Patterns During Cochlear Implantation and Hearing Preservation
01-07-2019 – Giardina, Christopher K.; Brown, Kevin D.; Adunka, Oliver F.; Buchman, Craig A.; Hutson, Kendall A.; Pillsbury, Harold C.; Fitzpatrick, Douglas C.
Objectives: Electrocochleography (ECochG) obtained through a cochlear implant (CI) is increasingly being tested as an intraoperative monitor during implantation, with the goal of reducing surgical trauma. Reducing trauma should aid in preserving residual hearing and improve speech perception overall. The purpose of this study was to characterize intracochlear ECochG responses throughout insertion in a range of array types and, when applicable, relate these measures to hearing preservation. The ECochG signal in cochlear implant subjects is complex, consisting of hair cell and neural generators with differing distributions depending on the etiology and history of hearing loss. Consequently, a focus was to observe and characterize response changes as an electrode advances.
Design: In 36 human subjects, responses to 90 dB nHL tone bursts were recorded first at the round window (RW) and then through the apical contact of the CI as the array advanced into the cochlea. The specific setup used a sterile clip in the surgical field, attached to the ground of the implant with a software-controlled short to the apical contact. The end of the clip was then connected to standard audiometric recording equipment. The stimuli were 500 Hz tone bursts at 90 dB nHL. Audiometry for cases with intended hearing preservation (12/36 subjects) was correlated with intraoperative recordings.
Results: Successful intracochlear recordings were obtained in 28 subjects. For the eight unsuccessful cases, the clip introduced excessive line noise, which saturated the amplifier. Among the successful subjects, the initial intracochlear response was a median 5.8 dB larger than the response at the RW. Throughout insertion, modiolar arrays showed median response drops after stylet removal, while in lateral wall arrays the maximal median response magnitude was typically at the deepest insertion depth. Four main patterns of response magnitude were seen: increases > 5 dB (12/28), steady responses within 5 dB (4/28), drops > 5 dB from the initial response at shallow insertion depths (7/28), and drops > 5 dB occurring at deeper depths (5/28). Hearing preservation outcomes were related to response magnitude, with correlations between intraoperative response measures and hearing threshold change of 0.57, and a maximum of 0.80 for the maximal response.
Conclusions: Monitoring the cochlea with intracochlear ECochG during cochlear implantation is feasible, and patterns of response vary by device type. Changes in magnitude alone did not account for hearing preservation rates, but considerations of phase, latency, and neural contribution can help to interpret the changes seen and improve sensitivity and specificity. The correlation between the absolute ECochG magnitude obtained either before or during insertion and the hearing threshold changes suggests that cochlear health, which varies by subject, plays an important role.
Electric-Acoustic Stimulation Outcomes in Children
01-07-2019 – Park, Lisa R.; Teagle, Holly F. B.; Gagnon, Erika; Woodard, Jennifer; Brown, Kevin D.
Objectives: This study investigates outcomes in children fit with electric-acoustic stimulation (EAS) and addresses three main questions: (1) Are outcomes with EAS superior to outcomes with conventional electric-only stimulation in children? (2) Do children with residual hearing benefit from EAS and conventional electric-only stimulation when compared with the preoperative hearing aid (HA) condition? (3) Can children with residual hearing derive benefit from EAS after several years of listening with conventional electric-only stimulation?
Design: Sixteen pediatric cochlear implant (CI) recipients between 4 and 16 years of age with an unaided low-frequency pure-tone average of 75 dB HL in the implanted ear were included in two study arms. Arm 1 included new recipients, and Arm 2 included children with at least 1 year of CI experience. Using a within-subject design, participants were evaluated unilaterally with the Consonant-Nucleus-Consonant (CNC) word list in quiet and the Baby Bio at a +5 dB SNR using an EAS program and a conventional full electric (FE) program. Arm 1 participants’ scores were also compared with preoperative scores.
Results: Speech perception outcomes were significantly higher with the EAS program than with the FE program. For new recipients, scores were significantly higher with EAS than preoperative HA scores for both the CNC and the Baby Bio in noise; however, after 6 months of device use, results in the FE condition were not significantly better than preoperative scores. Long-term FE users benefited from EAS over their FE programs based on CNC word scores.
Conclusions: Whether newly implanted or long-term CI users, children with residual hearing after CI surgery can benefit from EAS. Cochlear implantation with EAS fitting is a viable option for children with HAs who have residual hearing but have insufficient access to high-frequency sounds and poor speech perception.
Comparing the International Classification of Functioning, Disability, and Health Core Sets for Hearing Loss and Otorhinolaryngology/Audiology Intake Documentation at Mayo Clinic
01-07-2019 – Alfakir, Razan; van Leeuwen, Lisette M.; Pronk, Marieke; Kramer, Sophia E.; Zapala, David A.
Objectives: The International Classification of Functioning, Disability, and Health (ICF) Core Sets for Hearing Loss (CSHL) consists of short lists of categories from the entire ICF classification that are thought to be the most relevant for describing the functioning of persons with hearing loss. A comprehensive intake that covers all factors included in the ICF CSHL holds the promise of developing a tailored treatment plan that fully complements the patient’s needs. The Comprehensive CSHL contains 117 categories and serves as a guide for multiprofessional, comprehensive assessment. The Brief CSHL includes 27 of the 117 categories and represents the minimal spectrum of functioning of persons with HL for single-discipline encounters or clinical trials. The authors first sought to benchmark the extent to which Audiologist (AUD) and Otorhinolaryngologist (ORL) discipline-specific intake documentation, as well as Mayo Clinic’s multidisciplinary intake documentation, captures ICF CSHL categories.
Design: A retrospective study design was used, including 168 patient records from the Department of Otorhinolaryngology/Audiology of Mayo Clinic in Jacksonville, Florida. Anonymized intake documentation forms and reports were selected from patient records filed between January 2016 and May 2017. Data were extracted from the intake documentation forms and reports and linked to ICF categories using pre-established linking rules. “Overlap,” defined as the percentage of ICF CSHL categories represented in the intake documentation, was calculated across document types. In addition, extra non–ICF CSHL categories (ICF categories that are not part of the CSHL) and extra constructs (constructs that are not part of the ICF classification) found in the patient records were described.
Results: The total overlap of multidisciplinary intake documentation with ICF CSHL categories was 100% for the Brief CSHL and 50% for the Comprehensive CSHL. Brief CSHL overlap for discipline-specific documentation fell short at 70% for both AUD and ORL. Important extra non–ICF CSHL categories were identified and included “sleep function” and “motor-related functions and activities,” which mostly were reported in relation to tinnitus and vestibular disorders.
Conclusion: The multidisciplinary intake documentation of Mayo Clinic showed 100% overlap with the Brief CSHL, while important areas of nonoverlap were identified in AUD- and ORL-specific reports. The ICF CSHL provides a framework for describing each hearing-impaired individual’s unique capabilities and needs in ways currently not documented by audiological and otological evaluations, potentially setting the stage for more effective individualized patient care. Efforts to further validate the ICF CSHL may require the involvement of multidisciplinary institutions with commonly shared electronic health records to adequately capture the breadth of the ICF CSHL.
Factors Affecting Sound-Source Localization in Children With Simultaneous or Sequential Bilateral Cochlear Implants
01-07-2019 – Killan, Catherine; Scally, Andrew; Killan, Edward; Totten, Catherine; Raine, Christopher
Objectives: The study aimed to determine the effect of interimplant interval and onset of profound deafness on sound localization in children with bilateral cochlear implants, controlling for cochlear implant manufacturer, age, and time since second implant.
Design: The authors conducted a retrospective, observational study using routinely collected clinical data. Participants were 127 bilaterally implanted children aged 4 years or older, tested at least 12 mo post-second implant. Children used implants made by one of three manufacturers. Sixty-five children were simultaneously implanted, of whom 43% were congenitally, bilaterally profoundly deaf at 2 and 4 kHz and 57% had acquired or progressive hearing loss. Sixty-two were implanted sequentially (median interimplant interval = 58 mo, range 3–143 mo), of whom 77% had congenital and 23% acquired or progressive bilateral profound deafness at 2 and 4 kHz. Children participated in a sound-source localization test with stimuli presented in a random order from five loudspeakers at –60, –30, 0, +30, and +60 degrees azimuth. Stimuli were prerecorded female voices at randomly roved levels from 65 to 75 dB(A). Root mean square (RMS) errors were calculated. Localization data were analyzed via multivariable linear regression models, one applied to the whole group and the other to just the simultaneously implanted children.
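A minimal sketch of the RMS localization error used in the Design follows; the loudspeaker angles match the study, but the trial responses are invented for illustration:

```python
import math

SPEAKERS = [-60, -30, 0, 30, 60]  # degrees azimuth, as in the study

def rms_error(presented, responded):
    """Root mean square localization error in degrees."""
    if len(presented) != len(responded):
        raise ValueError("trial lists must be the same length")
    sq = [(p - r) ** 2 for p, r in zip(presented, responded)]
    return math.sqrt(sum(sq) / len(sq))

# Hypothetical trial data: each stimulus angle paired with the response.
presented = [-60, -30, 0, 30, 60, -60, 0, 60]
responded = [-60, 0, 0, 30, 30, -30, 0, 60]
print(f"RMS error: {rms_error(presented, responded):.1f} degrees")
```

Perfect accuracy yields 0 degrees; guessing among the five loudspeakers drives the error toward the chance-level values reported in the Results.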
Results: Mean RMS error was 25.4 degrees (SD = 12.5 degrees), with results ranging from perfect accuracy to chance level (0–62.7 degrees RMS error). Compared with simultaneous implantation, an interimplant interval was associated with worse localization by 1.7 degrees RMS error per year (p < 0.001). Compared with congenital deafness, each year with hearing thresholds better than 90 dB HL at 2 and 4 kHz bilaterally before implantation led to more accurate localization by 1.3 degrees RMS error (p < 0.005). Every year post-second implant led to better accuracy by 1.6 degrees RMS error (p < 0.05). Med-El was associated with more accurate localization than Cochlear by 5.8 degrees RMS error (p < 0.01) and with more accurate localization than Advanced Bionics by 9.2 degrees RMS error (p < 0.05).
Conclusions: Interimplant interval and congenital profound hearing loss both led to worse accuracy in sound-source localization for children using bilateral cochlear implants. Interimplant delay should therefore be minimized for children with bilateral profound hearing loss. Children presenting with acquired or progressive hearing loss can be expected to localize better via bilateral cochlear implants than their congenitally deaf peers.
Normalizing cVEMPs: Which Method Is the Most Effective?
01-07-2019 – van Tilburg, Mark J.; Herrmann, Barbara S.; Rauch, Steven D.; Noij, Kimberley; Guinan, John J. Jr
Objectives: To determine the most effective method for normalizing cervical vestibular evoked myogenic potentials (cVEMPs).
Design: cVEMP data from 20 subjects with normal hearing and vestibular function were normalized using 16 combinations of methods, each using one of the 4 modes of electromyogram (EMG) quantification described below. All methods used the peak to peak value of an averaged cVEMP waveform (VEMPpp) and obtained a normalized cVEMP by dividing VEMPpp by a measure of the EMG amplitude. EMG metrics were obtained from the EMG within short- and long-duration time windows. EMG amplitude was quantified by its root-mean-square (RMS) or average full-wave-rectified (RECT) value. The EMG amplitude was used by (a) dividing each individual trace by the EMG of this specific trace, (b) dividing VEMPpp by the average RMS or RECT of the individual trace EMG, (c) dividing the VEMPpp by an EMG metric obtained from the average cVEMP waveform, or (d) dividing the VEMPpp by an EMG metric obtained from an average cVEMP “noise” waveform. Normalization methods were compared by the normalized cVEMP coefficient of variation across subjects and by the area under the curve from a receiver-operating-characteristic analysis. A separate analysis of the effect of EMG-window duration was done.
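The normalization at the heart of the Design, dividing VEMPpp by an EMG amplitude metric, can be sketched as follows. This toy example implements method (b), dividing VEMPpp by the average RMS (or RECT) of individual-trace EMGs; the waveforms and window parameters are synthetic stand-ins, not the study's recordings:

```python
import numpy as np

def rms(x):
    """Root-mean-square amplitude of a trace."""
    return np.sqrt(np.mean(np.square(x)))

def rect(x):
    """Average full-wave-rectified amplitude of a trace."""
    return np.mean(np.abs(x))

def normalized_cvemp(avg_waveform, trace_emgs, metric=rms):
    """VEMPpp divided by the mean single-trace EMG metric (method (b))."""
    vemp_pp = np.max(avg_waveform) - np.min(avg_waveform)  # peak to peak
    emg = np.mean([metric(t) for t in trace_emgs])
    return vemp_pp / emg

rng = np.random.default_rng(1)
avg_waveform = np.sin(np.linspace(0, 2 * np.pi, 200))       # toy averaged cVEMP
trace_emgs = [rng.normal(0, 0.5, 200) for _ in range(100)]  # toy per-trace EMG
print(f"normalized cVEMP: {normalized_cvemp(avg_waveform, trace_emgs):.2f}")
```

Swapping `metric=rect` for `metric=rms` changes only the EMG quantification, mirroring the RMS-versus-RECT comparison in the study.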
Results: There were large disparities in the results from different normalization methods. The best methods used EMG metrics from individual-trace EMG measurements, not from part of the average cVEMP waveform. EMG quantification by RMS or RECT produced similar results. For most EMG quantifications, longer window durations were better in producing receiver-operating-characteristic with high areas under the curve. However, even short window durations worked well when the EMG metric was calculated from the average RMS or RECT of the individual-trace EMGs. Calculating the EMG from a long-duration window of a cVEMP “noise” average waveform was almost as good as the individual-trace-EMG methods.
Conclusions: The best cVEMP normalizations use EMG quantification from individual-trace EMGs. To have the normalized cVEMPs accurately reflect the vestibular activation, a good normalization method needs to be used.
Speech-in-Noise and Quality-of-Life Measures in School-Aged Children With Normal Hearing and With Unilateral Hearing Loss
01-07-2019 – Griffin, Amanda M.; Poissant, Sarah F.; Freyman, Richard L.
Objectives: (1) Measure sentence recognition in co-located and spatially separated target and masker configurations in school-aged children with unilateral hearing loss (UHL) and with normal hearing (NH). (2) Compare self-reported hearing-related quality-of-life (QoL) scores in school-aged children with UHL and NH.
Design: Listeners were school-aged children (6 to 12 yrs) with permanent UHL (n = 41) or NH (n = 35) and adults with NH (n = 23). Sentence reception thresholds (SRTs) were measured using Hearing In Noise Test–Children sentences in quiet and in the presence of 2-talker child babble or a speech-shaped noise masker in target/masker spatial configurations: 0/0, 0/−60, 0/+60, or 0/±60 degrees azimuth. Maskers were presented at a fixed level of 55 dBA, while the level of the target sentences varied adaptively to estimate the SRT. Hearing-related QoL was measured using the Hearing Environments and Reflection on Quality of Life (HEAR-QL-26) questionnaire for child subjects.
Results: As a group, subjects with unaided UHL had higher (poorer) SRTs than age-matched peers with NH in all listening conditions. Effects of age, masker type, and spatial configuration of target and masker signals were found. Spatial release from masking was significantly reduced in conditions where the masker was directed toward UHL subjects’ normal-hearing ear. Hearing-related QoL scores were significantly poorer in subjects with UHL compared to those with NH. Degree of UHL, as measured by four-frequency pure-tone average, was significantly correlated with SRTs only in the two conditions where the masker was directed toward subjects’ normal-hearing ear, although the unaided Speech Intelligibility Index at 65 dB SPL was significantly correlated with SRTs in four conditions, some of which directed the masker to the impaired ear or both ears. Neither pure-tone average nor unaided Speech Intelligibility Index was correlated with QoL scores.
Conclusions: As a group, school-aged children with UHL showed substantial reductions in masked speech perception and hearing-related QoL, irrespective of sex, laterality of hearing loss, and degree of hearing loss. While some children demonstrated normal or near-normal performance in certain listening conditions, a disproportionate number of thresholds fell in the poorest decile of the NH data. These findings add to the growing literature challenging the past assumption that one ear is “good enough.”
Early Sentence Recognition in Adult Cochlear Implant Users
01-07-2019 – James, Chris J.; Karoui, Chadlia; Laborde, Marie-Laurence; Lepage, Benoît; Molinier, Charles-Édouard; Tartayre, Marjorie; Escudé, Bernard; Deguine, Olivier; Marx, Mathieu; Fraysse, Bernard
Objective: Normal-hearing subjects listening to acoustic simulations of cochlear implants (CI) can obtain sentence recognition scores near 100% in quiet and in 10 dB signal-to-noise ratio (SNR) noise with acute exposure. However, average sentence recognition scores for real CI listeners are generally lower, even after months of experience, and there is a high degree of heterogeneity. Our aim was to identify the relative importance and strength of factors that prevent CI listeners from achieving early, 1-mo scores as high as those for normal-hearing-listener acoustic simulations.
Design: Sentence recognition scores (100 words/list, 65 dB SPL) using CI alone were collected for all adult unilateral CI listeners implanted in our center over a 5-yr period. Sentence recognition scores in quiet and in 10 dB SNR 8-talker babble, collected from 1 to 12 mo, were reduced to a single dependent variable, the “initial” score, via logarithmic regression. “Initial” scores equated to an improved estimate of 1-mo scores and integrated the time to rise above zero score for poorer-performing subjects. Demographic, device, and medical data were collected for 118 subjects who met standard CI candidacy criteria. Computed tomography of the electrode array, allowing determination of the insertion depth as an angle and of the presence or absence of scala dislocation, was available for 96 subjects. Predictive factors for initial scores were selected using stepwise multiple linear regression. The relative importance of predictive factors was estimated as partial r2 with a low-bias method, and statistical significance was tested with type II analysis of variance.
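The logarithmic-regression reduction described above can be sketched as follows. This is a minimal illustration with made-up monthly scores, not the authors' analysis code; note that evaluating the fit at 1 mo (where ln 1 = 0) makes the regression intercept the "initial" score.

```python
import math

def initial_score(months, scores):
    """Fit score = a + b*ln(months) by ordinary least squares; return (a, b).
    The intercept a is the fitted score at 1 month, since ln(1) = 0."""
    x = [math.log(t) for t in months]
    n = len(x)
    mx, my = sum(x) / n, sum(scores) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, scores))
         / sum((xi - mx) ** 2 for xi in x))
    a = my - b * mx
    return a, b

# Hypothetical sentence scores for one listener over the first year:
a, b = initial_score([1, 2, 3, 6, 12], [50, 58, 63, 70, 78])
```

Fitting on a log time axis also smooths over the irregular test intervals that are typical of clinical follow-up data.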
Results: The etiologies chronic otitis and autoimmune disease were associated with lower, widely variable sentence recognition scores in the long-term. More than 60% of CI listeners scored >50/100 in quiet at 1 mo. Congenital hearing loss was associated with significantly lower initial scores in quiet (r2 0.23, p 80/100 even at 1 day after activation. Insertion depths of 360° were estimated to produce frequency-place mismatches of about one octave upward shift.
Conclusions: Patient-related factors etiology and duration of deafness together explained ~40% of the variance in early sentence recognition scores, and electrode position factors ~20%. CI listeners with insertion depths of about one turn obtained the highest early sentence recognition scores in quiet and in noise, and these were comparable with those reported in the literature for normal-hearing subjects listening to 8 to 12 channel vocoder simulations. Differences between device brands were largely explained by differences in insertion depths. This indicates that physiological frequency-place mismatches of about one octave are rapidly accommodated by CI users for understanding sentences, between 1 day to 1 mo postactivation, and that channel efficiency may be significantly poorer for more deeply positioned electrode contacts.
Online Machine Learning Audiometry
01-07-2019 – Barbour, Dennis L.; Howard, Rebecca T.; Song, Xinyu D.; Metzger, Nikki; Sukesan, Kiron A.; DiLorenzo, James C.; Snyder, Braham R. D.; Chen, Jeff Y.; Degen, Eleanor A.; Buchbinder, Jenna M.; Heisey, Katherine L.
Objectives: A confluence of recent developments in cloud computing, real-time web audio, and machine learning psychometric function estimation has made wide dissemination of sophisticated turn-key audiometric assessments possible. The authors have combined these capabilities into an online (i.e., web-based) pure-tone audiogram estimator intended to empower researchers and clinicians with advanced hearing tests without the need for custom programming or special hardware. The objective of this study was to assess the accuracy and reliability of this new online machine learning audiogram method relative to a commonly used hearing threshold estimation technique also implemented online for the first time in the same platform.
Design: The authors performed air conduction pure-tone audiometry on 21 participants between the ages of 19 and 79 years (mean 41, SD 21) exhibiting a wide range of hearing abilities. For each ear, two repetitions of online machine learning audiogram estimation and two repetitions of online modified Hughson-Westlake ascending-descending audiogram estimation were acquired by an audiologist using the online software tools. The estimated hearing thresholds of these two techniques were compared at standard audiogram frequencies (i.e., 0.25, 0.5, 1, 2, 4, 8 kHz).
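A toy version of the psychometric function whose threshold and spread the machine learning method estimates may make the two parameters concrete. A logistic form is assumed here for illustration; the platform's actual model may differ, and all parameter values are hypothetical.

```python
import math

def p_detect(level_db, threshold_db, spread_db):
    """Probability of detecting a tone at a given level, for a logistic
    psychometric function with its 50% point at threshold_db and inverse
    slope ("spread") spread_db."""
    return 1.0 / (1.0 + math.exp(-(level_db - threshold_db) / spread_db))

# At the threshold the detection probability is exactly 0.5; levels above
# and below it move the probability toward 1 and 0 at a rate set by spread:
p_at_threshold = p_detect(30.0, threshold_db=30.0, spread_db=5.0)
p_above = p_detect(40.0, threshold_db=30.0, spread_db=5.0)
```

A conventional Hughson-Westlake track estimates only the threshold at each test frequency; fitting the full function is what lets the machine learning method also report spread continuously across frequency.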
Results: The two threshold estimation methods delivered very similar threshold estimates at standard audiogram frequencies. Specifically, the mean absolute difference between threshold estimates was 3.24 ± 5.15 dB. The mean absolute differences between repeated measurements of the online machine learning procedure and between repeated measurements of the Hughson-Westlake procedure were 2.85 ± 6.57 dB and 1.88 ± 3.56 dB, respectively. The machine learning method generated estimates of both threshold and spread (i.e., the inverse of psychometric slope) continuously across the entire frequency range tested from fewer samples on average than the modified Hughson-Westlake procedure required to estimate six discrete thresholds.
Conclusions: Online machine learning audiogram estimation in its current form provides all the information of conventional threshold audiometry with similar accuracy and reliability in less time. More importantly, however, this method provides additional audiogram details not provided by other methods. This standardized platform can be readily extended to bone conduction, masking, spectrotemporal modulation, speech perception, etc., unifying audiometric testing into a single comprehensive procedure efficient enough to become part of the standard audiologic workup.
Developmental Effects in Children’s Ability to Benefit From F0 Differences Between Target and Masker Speech
01-07-2019 – Flaherty, Mary M.; Buss, Emily; Leibold, Lori J.
Objectives: The objectives of this study were to (1) evaluate the extent to which school-age children benefit from fundamental frequency (F0) differences between target words and competing two-talker speech, and (2) assess whether this benefit changes with age. It was predicted that while children would be more susceptible to speech-in-speech masking compared to adults, they would benefit from differences in F0 between target and masker speech. A second experiment was conducted to evaluate the relationship between frequency discrimination thresholds and the ability to benefit from target/masker differences in F0.
Design: Listeners were children (5 to 15 years) and adults (20 to 36 years) with normal hearing. In the first experiment, speech reception thresholds (SRTs) for disyllabic words were measured in a continuous, 60-dB SPL two-talker speech masker. The same male talker produced both the target and masker speech (average F0 = 120 Hz). The level of the target words was adaptively varied to estimate the level associated with 71% correct identification. The procedure was a four-alternative forced-choice with a picture-pointing response. Target words either had the same mean F0 as the masker or had an F0 shifted up by 3, 6, or 9 semitones. To determine the benefit of target/masker F0 separation on word recognition, masking release was computed by subtracting thresholds in each shifted-F0 condition from the threshold in the unshifted-F0 condition. In the second experiment, frequency discrimination thresholds were collected for a subset of listeners to determine whether sensitivity to F0 differences would be predictive of SRTs. The standard was the syllable /ba/ with an F0 of 250 Hz; the target stimuli had a higher F0. Discrimination thresholds were measured using a three-alternative, three-interval forced choice procedure.
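The semitone shifts in the design follow the standard equal-tempered relation, where one semitone multiplies frequency by 2^(1/12). A quick sketch of the resulting target F0s:

```python
def shift_semitones(f0_hz, n_semitones):
    """Frequency after shifting f0 up by n semitones (equal temperament)."""
    return f0_hz * 2 ** (n_semitones / 12)

# Mean target F0s for the 3-, 6-, and 9-semitone conditions, starting from
# the 120 Hz masker F0 used in the study:
targets = [round(shift_semitones(120.0, n), 1) for n in (3, 6, 9)]
# 3 semitones ~ 142.7 Hz, 6 ~ 169.7 Hz, 9 ~ 201.8 Hz
```

Nine semitones is thus a shift of roughly two thirds of an octave, which helps put the youngest children's failure to benefit from even that separation in perspective.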
Results: Younger children (5 to 12 years) had significantly poorer SRTs than older children (13 to 15 years) and adults in the unshifted-F0 condition. The benefit of F0 separations generally increased with increasing child age and magnitude of target/masker F0 separation. For 5- to 7-year-olds, there was a small benefit of F0 separation in the 9-semitone condition only. For 8- to 12-year-olds, there was a benefit from both 6- and 9-semitone separations, but to a lesser degree than what was observed for older children (13 to 15 years) and adults, who showed a substantial benefit in the 6- and 9-semitone conditions. Examination of individual data found that children younger than 7 years of age did not benefit from any of the F0 separations tested. Results for the frequency discrimination task indicated that, while there was a trend for improved thresholds with increasing age, these thresholds were not predictive of the ability to use F0 differences in the speech-in-speech recognition task after controlling for age.
Conclusions: The overall pattern of results suggests that children’s ability to benefit from F0 differences in speech-in-speech recognition follows a prolonged developmental trajectory. Younger children are less able to capitalize on differences in F0 between target and masker speech. The extent to which individual children benefitted from target/masker F0 differences was not associated with their frequency discrimination thresholds.
A New Speech, Spatial, and Qualities of Hearing Scale Short-Form: Factor, Cluster, and Comparative Analyses
01-07-2019 – Moulin, Annie; Vergne, Judith; Gallego, Stéphane; Micheyl, Christophe
Objectives: The objective of this work was to build a 15-item short-form of the Speech Spatial and Qualities of Hearing Scale (SSQ) that maintains the three-factor structure of the full form, using a data-driven approach consistent with internationally recognized procedures for short-form building. This included the validation of the new short-form on an independent sample and an in-depth, comparative analysis of all existing, full and short SSQ forms.
Design: Data from a previous study involving 98 normal-hearing (NH) individuals and 196 people with hearing impairment (HI) who did not wear hearing aids, along with results from several other published SSQ studies, were used for developing the short-form. Data from a new and independent sample of 35 NH and 88 HI hearing aid wearers were used to validate the new short-form. Factor and hierarchical cluster analyses were used to check the factor structure and internal consistency of the new short-form. In addition, the new short-form was compared with all other SSQ forms, including the full SSQ, the German SSQ15, the SSQ12, and the SSQ5. Construct validity was further assessed by testing statistical relationships between scores and audiometric factors, including pure-tone threshold averages (PTAs) and left/right PTA asymmetry. Receiver-operating characteristic analyses were used to compare the ability of different SSQ forms to discriminate between NH and HI individuals (both HI non-hearing-aid wearers and HI hearing aid wearers).
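The receiver-operating characteristic comparison above reduces to a simple probabilistic statement: the area under the ROC curve is the probability that a randomly chosen member of one group outscores a randomly chosen member of the other (the Mann-Whitney formulation). A minimal sketch with hypothetical ratings, not data from the study:

```python
def auc(group_a, group_b):
    """Area under the ROC curve for separating two groups: the probability
    that a random value from group_a exceeds one from group_b, with ties
    counted as one half (Mann-Whitney formulation of the AUC)."""
    wins = sum(1.0 if a > b else 0.5 if a == b else 0.0
               for a in group_a for b in group_b)
    return wins / (len(group_a) * len(group_b))

# Hypothetical SSQ-style ratings (higher = less reported difficulty):
nh = [8.9, 9.1, 8.5, 9.4]
hi = [5.2, 6.8, 7.4, 8.6]
discrimination = auc(nh, hi)  # 1.0 = perfect separation, 0.5 = chance
```

A larger AUC for the new short-form is what the Results paragraph means by "greater discriminatory power."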
Results: Compared with all other SSQ forms, including the full SSQ, the new short-form showed negligible cross-loading across the three main subscales and greater discriminatory power between NH and HI subjects (as indicated by a larger area under the receiver-operating characteristic curve), as well as between the main subscales (especially Speech and Qualities). Moreover, the new, 5-item Spatial subscale showed increased sensitivity to left/right PTA asymmetry. Very good internal consistency and homogeneity and high correlations with the SSQ were obtained for all short-forms.
Conclusions: While maintaining the three-factor structure of the full SSQ, and exceeding the latter in terms of construct validity and sensitivity to audiometric variables, the new 15-item SSQ affords a substantial reduction in the number of items and, thus, in test time. Based on overall scores, Speech subscores, or Spatial subscores, but not Qualities subscores, the 15-item SSQ appears to be more sensitive to differences in self-evaluated hearing abilities between NH and HI subjects than the full SSQ.
Cochlear Reflectance and Otoacoustic Emission Predictions of Hearing Loss
01-07-2019 – Neely, Stephen T.; Fultz, Sara E.; Kopun, Judy G.; Lenzen, Natalie M.; Rasetshwane, Daniel M.
Objectives: Cochlear reflectance (CR) is the cochlear contribution to ear-canal reflectance. CR is a type of otoacoustic emission (OAE) that is calculated as a transfer function between forward pressure and reflected pressure. The purpose of this study was to compare wideband CR to distortion-product (DP) OAEs in two ways: (1) in a clinical-screening paradigm where the task is to determine whether an ear is normal or has hearing loss and (2) in the prediction of audiometric thresholds. The goal of the study was to assess the clinical utility of CR.
Design: Data were collected from 32 normal-hearing and 124 hearing-impaired participants. A wideband noise stimulus presented at 3 stimulus levels (30, 40, 50 dB sound pressure level) was used to elicit the CR. DPOAEs were elicited using primary tones spanning a wide frequency range (1 to 16 kHz). Predictions of auditory status (i.e., hearing-threshold category) and predictions of audiometric threshold were based on regression analysis. Test performance (identification of normal versus impaired hearing) was evaluated using clinical decision theory.
Results: When regressions were based only on physiological measurements near the audiometric frequency, the accuracy of CR predictions of auditory status and audiometric threshold was less than reported in previous studies using DPOAE measurements. CR predictions were improved when regressions were based on measurements obtained at many frequencies. CR predictions were further improved when regressions were performed on males and females separately.
Conclusions: Compared with CR measurements, DPOAE measurements have the advantages in a screening paradigm of better test performance and shorter test time. The full potential of CR measurements to predict audiometric thresholds may require further improvements in signal-processing methods to increase its signal to noise ratio. CR measurements have theoretical significance in revealing the number of cycles of delay at each frequency that is most sensitive to hearing loss.
How Do You Deal With Uncertainty? Cochlear Implant Users Differ in the Dynamics of Lexical Processing of Noncanonical Inputs
01-07-2019 – McMurray, Bob; Ellis, Tyler P.; Apfelbaum, Keith S.
Objectives: Work in normal-hearing (NH) adults suggests that spoken language processing involves coping with ambiguity. Even a clearly spoken word contains brief periods of ambiguity as it unfolds over time, and early portions will not be sufficient to uniquely identify the word. However, beyond this temporary ambiguity, NH listeners must also cope with the loss of information due to reduced forms, dialect, and other factors. A recent study suggests that NH listeners may adapt to increased ambiguity by changing the dynamics of how they commit to candidates at a lexical level. Cochlear implant (CI) users must also frequently deal with highly degraded input, in which there is less information available in the input to recover a target word. The authors asked here whether their frequent experience with this leads to lexical dynamics that are better suited for coping with uncertainty.
Design: Listeners heard words either correctly pronounced (dog) or mispronounced at onset (gog) or offset (dob). Listeners selected the corresponding picture from a screen containing pictures of the target and three unrelated items. While they did this, fixations to each object were tracked as a measure of the time course of identifying the target. The authors tested 44 postlingually deafened adult CI users in 2 groups (23 used standard electric only configurations, and 21 supplemented the CI with a hearing aid), along with 28 age-matched age-typical hearing (ATH) controls.
Results: All three groups recognized the target word accurately, though each showed a small decrement for mispronounced forms (larger in both types of CI users). Analysis of fixations showed a close time locking to the timing of the mispronunciation. Onset mispronunciations delayed initial fixations to the target, but fixations to the target showed partial recovery by the end of the trial. Offset mispronunciations showed no effect early, but suppressed looking later. This pattern was attested in all three groups, though both types of CI users were slower and did not commit fully to the target. When the authors quantified the degree of disruption (by the mispronounced forms), they found that both groups of CI users showed less disruption than ATH listeners during the first 900 msec of processing. Finally, an individual differences analysis showed that within the CI users, the dynamics of fixations predicted speech perception outcomes over and above accuracy in this task and that CI users with the more rapid fixation patterns of ATH listeners showed better outcomes.
Conclusions: Postlingually deafened CI users process speech incrementally (as do ATH listeners), though they commit more slowly and less strongly to a single item than do ATH listeners. This may allow them to cope more flexibly with mispronunciations.
Biomarkers of Systemic Inflammation and Risk of Incident Hearing Loss
01-07-2019 – Gupta, Shruti; Curhan, Sharon G.; Curhan, Gary C.
Background: Chronic inflammation may lead to cochlear damage, and the only longitudinal study that examined biomarkers of systemic inflammation and risk of hearing loss found an association with a single biomarker in individuals <60 years of age. The purpose of our study was to determine whether plasma inflammatory markers are associated with incident hearing loss in two large prospective cohorts, Nurses’ Health Studies (NHS) I and II.
Methods: We examined the independent associations between plasma levels of markers of systemic inflammation (C-reactive protein [CRP], interleukin-6 [IL-6], and soluble tumor necrosis factor receptor 2 [TNFR-2]) and self-reported hearing loss. The participants in NHS I (n = 6194 women) were 42 to 69 years of age at the start of the analysis in 1990, while the participants in NHS II (n = 2885 women) were 32 to 53 years in 1995. After excluding women with self-reported hearing loss before the time of blood-draw, incident cases of hearing loss were defined as those women who reported hearing loss on questionnaires administered in 2012 in NHS I and 2009 or 2013 in NHS II. The primary outcome was hearing loss that was reported as moderate or worse in severity, pooled across the NHS I and NHS II cohorts. We also examined the pooled multivariable-adjusted hazard ratios for mild or worse hearing loss. Cox proportional hazards regression was used to adjust for potential confounders.
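The person-years framework that underlies the Cox regression above can be illustrated with the crude incidence rate, computed here from the counts reported in the Results (this is an illustrative summary statistic, not the study's adjusted model):

```python
def incidence_rate(cases, person_years, per=1000):
    """Crude incidence rate, expressed per `per` person-years of follow-up."""
    return cases / person_years * per

# Counts reported in the abstract: 628 incident cases of moderate or worse
# hearing loss during 100,277 person-years of follow-up.
rate = incidence_rate(628, 100_277)  # roughly 6.3 cases per 1000 person-years
```

The Cox model then asks whether this rate differs across levels of each inflammatory marker after adjustment for confounders.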
Results: At baseline, women ranged from 42 to 69 years of age in NHS I and 32 to 53 years of age in NHS II. Among the NHS I and II women with measured plasma CRP, there were 628 incident cases of moderate or worse hearing loss during 100,277 person-years of follow-up. There was no significant association between the plasma levels of any of the three inflammatory markers and incident moderate or worse hearing loss (multivariable-adjusted pooled p trend for CRP = 0.33; p trend IL-6 = 0.54; p trend TNFR-2 = 0.70). There was also no significant relation between inflammatory marker levels and mild or worse hearing loss. While there was no significant effect modification by age for CRP or IL-6 in NHS I, there was a statistically significant higher risk of moderate or worse hearing loss (p interaction = 0.02) as well as mild or worse hearing loss (p interaction = 0.004) in women ≥60 years of age who had higher plasma TNFR-2 levels.
Conclusions: Overall, there was no significant association between plasma markers of inflammation and risk of hearing loss.
Evaluation of a New Algorithm to Optimize Audibility in Cochlear Implant Recipients
01-07-2019 – Holden, Laura K.; Firszt, Jill B.; Reeder, Ruth M.; Dwyer, Noël Y.; Stein, Amy L.; Litvak, Leo M.
Objectives: A positive relation between audibility and speech understanding has been established for cochlear implant (CI) recipients. Sound field thresholds of 20 dB HL across the frequency range provide CI users the opportunity to understand soft and very soft speech. However, programming the sound processor to attain good audibility can be time-consuming and difficult for some patients. To address these issues, Advanced Bionics (AB) developed the SoftVoice algorithm designed to remove system noise and thereby improve audibility of soft speech. The present study aimed to evaluate the efficacy of SoftVoice in optimizing AB CI recipients’ soft-speech perception.
Design: Two studies were conducted. Study 1 had two phases, 1A and 1B. Sixteen adult AB CI recipients participated in Study 1A. Acute testing was performed in the unilateral CI condition using a Harmony processor programmed with participants’ everyday-use program (Everyday) and that same program but with SoftVoice implemented. Speech recognition measures were administered at several presentation levels in quiet (35 to 60 dB SPL) and in noise (60 dB SPL). In Study 1B, 10 of the participants compared Everyday and SoftVoice at home to obtain feedback regarding the use of SoftVoice in various environments. During Study 2, soft-speech perception was acutely measured with Everyday and SoftVoice for 10 participants using the Naida CI Q70 processor. Results with the Harmony (Study 1A) and Naida processors were compared. Additionally, Study 2 evaluated programming options for setting electrode threshold levels (T-levels or Ts) to improve the usability of SoftVoice in daily life.
Results: Study 1A showed significantly higher scores with SoftVoice than Everyday at soft presentation levels (35, 40, 45, and 50 dB SPL) and no significant differences between programs at a conversational level (60 dB SPL) in quiet or in noise. After take-home experience with SoftVoice and Everyday (Study 1B), 5 of 10 participants reported preferring SoftVoice over Everyday; however, 6 reported bothersome environmental sound when listening with SoftVoice at home. Results of Study 2 indicated similar soft-speech perception between Harmony and Naida processors. Additionally, implementing SoftVoice with Ts at the manufacturer’s default setting of 10% of Ms reduced reports of bothersome environmental sound during take-home experience; however, soft-speech perception was best with SoftVoice when Ts were behaviorally set above 10% of Ms.
Conclusions: Results indicate that SoftVoice may be a potential tool for optimizing AB users’ audibility and, in turn, soft-speech perception. To achieve optimal performance at soft levels and comfortable use in daily environments, setting Ts must be considered with SoftVoice. Future research should examine program parameters that may benefit soft-speech perception when used in combination with SoftVoice (e.g., increased input dynamic range).
Medical Referral Patterns and Etiologies for Children With Mild-to-Severe Hearing Loss
01-07-2019 – Judge, Paul D.; Jorgensen, Erik; Lopez-Vazquez, Monica; Roush, Patricia; Page, Thomas A.; Moeller, Mary Pat; Tomblin, J. Bruce; Holte, Lenore; Buchman, Craig
Objectives: To (1) identify the etiologies and risk factors of the patient cohort and determine the degree to which they reflected the incidence for children with hearing loss and (2) quantify practice management patterns in three catchment areas of the United States with available centers of excellence in pediatric hearing loss.
Design: Medical information for 307 children with bilateral, mild-to-severe hearing loss was examined retrospectively. Children were participants in the Outcomes of Children with Hearing Loss (OCHL) study, a 5-year longitudinal study that recruited subjects at three different sites. Children aged 6 months to 7 years at time of OCHL enrollment were participants in this study. Children with cochlear implants, children with severe or profound hearing loss, and children with significant cognitive or motor delays were excluded from the OCHL study and, by extension, from this analysis. Medical information was gathered using medical records and participant intake forms, the latter reflecting a caregiver’s report. A comparison group included 134 children with normal hearing. A Chi-square test on two-way tables was used to assess for differences in referral patterns by site for the children who are hard of hearing (CHH). Linear regression was performed on gestational age and birth weight as continuous variables. Risk factors were assessed using t tests. The alpha value was set at p < 0.05.
Results: Neonatal intensive care unit stay, mechanical ventilation, oxygen requirement, aminoglycoside exposure, and family history were correlated with hearing loss. For this study cohort, congenital cytomegalovirus, strep positivity, bacterial meningitis, extracorporeal membrane oxygenation, and loop diuretic exposure were not associated with hearing loss. Less than 50% of children underwent imaging, although 34.2% of those scanned had abnormalities identified. No single imaging modality was preferred. Differences in referral rates were apparent for neurology, radiology, genetics, and ophthalmology.
Conclusions: The OCHL cohort reflects known etiologies of CHH. Despite available guidelines, centers of excellence, and high-yield rates for imaging, the medical workup for children with hearing loss remains inconsistently implemented and widely variable. There remains limited awareness as to what constitutes appropriate medical assessment for CHH.
Masking Release for Speech in Modulated Maskers: Electrophysiological and Behavioral Measures
01-07-2019 – Tanner, A. Michelle; Spitzer, Emily R.; Hyzy, JP; Grose, John H.
Objectives: The purpose of this study was to obtain an electrophysiological analog of masking release using speech-evoked cortical potentials in steady and modulated maskers and to relate this masking release to behavioral measures for the same stimuli. The hypothesis was that the evoked potentials can be tracked to a lower stimulus level in a modulated masker than in a steady masker and that the magnitude of this electrophysiological masking release is of the same order as that of the behavioral masking release for the same stimuli.
Design: Cortical potentials evoked by an 80-ms /ba/ stimulus were measured in two steady maskers (30 and 65 dB SPL), and in a masker that modulated between these two levels at a rate of 25 Hz. In each masker, a level series was undertaken to determine electrophysiological threshold. Behavioral detection thresholds were determined in the same maskers using an adaptive tracking procedure. Masking release was defined as the difference between signal thresholds measured in the steady 65-dB SPL masker and the modulated masker. A total of 23 normal-hearing adults participated.
Results: Electrophysiological thresholds were uniformly elevated relative to behavioral thresholds by about 6.5 dB. However, the magnitude of masking release was about 13.5 dB for both measurement domains.
Conclusions: Electrophysiological measures of masking release using speech-evoked cortical auditory evoked potentials correspond closely to behavioral estimates for the same stimuli. This suggests that objective measures based on electrophysiological techniques can be used to reliably gauge aspects of temporal processing ability.
Development of the Cochlear Implant Quality of Life Item Bank
01-07-2019 – McRackan, Theodore R.; Hand, Brittany N.; Velozo, Craig A.; Dubno, Judy R.; Cochlear Implant Quality of Life Development Consortium
Objectives: Functional outcomes following cochlear implantation have traditionally been focused on word and sentence recognition, which, although important, do not capture the varied communication and other experiences of adult cochlear implant (CI) users. Although the inadequacies of speech recognition to quantify CI user benefits are widely acknowledged, rarely have adult CI user outcomes been comprehensively assessed beyond these conventional measures. An important limitation in addressing this knowledge gap is that patient-reported outcome measures have not been developed and validated in adult CI patients using rigorous scientific methods. The purpose of the present study is to build on our previous work and create an item bank that can be used to develop new patient-reported outcome measures that assess CI quality of life (QOL) in the adult CI population.
Design: An online questionnaire was made available to 500 adult CI users who represented the adult CI population and were recruited through a consortium of 20 CI centers in the United States. The questionnaire included the 101-question CIQOL item pool and additional questions related to demographics, hearing and CI history, and speech recognition scores. In accordance with the Patient-Reported Outcomes Measurement Information System, responses were psychometrically analyzed using confirmatory factor analysis and item response theory.
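The item response theory analysis mentioned above models each item's behavior as a function of the respondent's latent trait. A common member of this model family is the two-parameter logistic (2PL) item, sketched below with hypothetical parameter values; the study's own model specification may differ.

```python
import math

def item_probability(theta, a, b):
    """Two-parameter logistic (2PL) IRT model: probability of endorsing an
    item, given latent trait theta, item discrimination a, and item
    difficulty b. At theta == b the probability is exactly 0.5."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# For a respondent of average ability (theta = 0), an easy item (b = -1)
# is endorsed with higher probability than a hard one (b = 2):
p_easy = item_probability(0.0, a=1.5, b=-1.0)
p_hard = item_probability(0.0, a=1.5, b=2.0)
```

Fitting such curves item by item is what allows misfitting or locally dependent items to be identified and removed, as described in the Results.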
Results: Of the 500 questionnaires sent, 371 (74.2%) were completed. Subjects represented the full range of age, durations of CI use, speech recognition abilities, and listening modalities of the adult CI population; subjects were implanted with each of the three CI manufacturers’ devices. The initial item pool consisted of the following domain constructs: communication, emotional, entertainment, environment, independence, listening effort, and social. Through psychometric analysis, after removing locally dependent and misfitting items, all of the domains were found to have sound psychometric properties, with the exception of the independence domain. This resulted in a final CIQOL item bank of 81 items in 6 domains with good psychometric properties.
Conclusions: Our findings reveal that hypothesis-driven quantitative analyses result in a psychometrically sound CIQOL item bank, organized into unique domains comprised of independent items which measure the full ability range of the adult CI population. The final item bank will now be used to develop new instruments that evaluate and differentiate adult CIQOL across the patient ability spectrum.
Effect of Audibility and Suprathreshold Deficits on Speech Recognition for Listeners With Unilateral Hearing Loss
01-07-2019 – Bost, Tim J. M.; Versfeld, Niek J.; Goverts, S. Theo
Objectives: We examined the influence of impaired processing (audibility and suprathreshold processes) on speech recognition in cases of sensorineural hearing loss. The influence of differences in central, or top-down, processing was reduced by comparing the performance of both ears in participants with a unilateral hearing loss (UHL). We examined the influence of reduced audibility and suprathreshold deficits on speech recognition in quiet and in noise.
Design: We measured speech recognition in quiet and in stationary speech-shaped noise with consonant–vowel–consonant words and digit triplets in groups of adults with UHL (n = 19), normal hearing (n = 15), and bilateral hearing loss (n = 9). By comparing the scores of the unaffected ear (UHL+) and the affected ear (UHL−) in the UHL group, we were able to isolate the influence of peripheral hearing loss from individual top-down factors such as cognition, linguistic skills, age, and sex.
Results: Audibility is a very strong predictor for speech recognition in quiet. Audibility has a less pronounced influence on speech recognition in noise. We found that, for the current sample of listeners, more speech information is required for UHL− than for UHL+ to achieve the same performance. For digit triplets at 80 dBA, the speech recognition threshold in noise (SRT) for UHL− is on average 5.2 dB signal to noise ratio (SNR) poorer than for UHL+. Analysis using the speech intelligibility index (SII) indicates that on average 2.1 dB SNR of this decrease can be attributed to suprathreshold deficits and 3.1 dB SNR to audibility. Furthermore, scores for speech recognition in quiet and in noise for UHL+ are comparable to those of normal-hearing listeners.
Conclusions: Our data showed that suprathreshold deficits in addition to audibility play a considerable role in speech recognition in noise even at intensities well above hearing threshold.
Residual Cochlear Function in Adults and Children Receiving Cochlear Implants: Correlations With Speech Perception Outcomes: Erratum
Journal Article, Published Erratum
No abstract available
Compensatory and Serial Processing Models for Relating Electrophysiology, Speech Understanding, and Cognition
01-07-2019 – Billings, Curtis J.; McMillan, Garnett P.; Dille, Marilyn F.; Konrad-Martin, Dawn
Objectives: The objective of this study was to develop a framework for investigating the roles of neural coding and cognition in speech perception.
Design: N1 and P3 auditory evoked potentials, QuickSIN speech understanding scores, and Digit Symbol Coding cognitive test results were used to test the accuracy of either a compensatory processing model or a serial processing model.
Results: The current dataset demonstrated that neither the compensatory nor the serial processing model was well supported. An additive processing model may best represent the relationships in these data.
Conclusions: With the outcome measures used in this study, it is apparent that an additive processing model, where exogenous neural coding and higher order cognition contribute independently, best describes the effects of neural coding and cognition on speech perception. Further testing with additional outcome measures and a larger number of subjects is needed to confirm and further clarify the relationships between these processing domains.
The Influence of Stimulus Repetition Rate on Tone-Evoked Post-Auricular Muscle Response (PAMR) Threshold
01-07-2019 – Zakaria, Mohd Normani; Abdullah, Rosninda; Nik Othman, Nik Adilah
Objectives: Post-auricular muscle response (PAMR) is a large myogenic potential that can be useful in estimating behavioral hearing thresholds when the recording protocol is optimal. The main aim of the present study was to determine the influence of stimulus repetition rate on PAMR threshold.
Design: In this repeated-measures study, 20 normally hearing adults aged between 18 and 30 years were recruited. Tone bursts (500, 1000, 2000, and 4000 Hz) were used to record PAMR thresholds at 3 different stimulus repetition rates (6.1/s, 11.1/s, and 17.1/s).
Results: Statistically higher PAMR thresholds were found for the faster stimulus rate (17.1/s) compared with the slower stimulus rate (6.1/s) (p < 0.05). For all stimulus rates and frequencies, significant correlations were found between PAMR and pure-tone audiometry thresholds (r = 0.62 to 0.82).
Conclusions: Even though the stimulus rate effect was significant at most of the tested frequencies, the differences in PAMR thresholds between the rates were small (<5 dB). Nevertheless, based on the correlation results, we suggest using an 11.1/s stimulus rate when recording PAMR thresholds.
Prevalence of and Risk Factors for Tinnitus and Tinnitus-Related Handicap in a College-Aged Population: Erratum
Journal Article, Published Erratum
No abstract available