Ear and Hearing 2020-09-01

Application of Big Data to Support Evidence-Based Public Health Policy Decision-Making for Hearing

Saunders, Gabrielle H.; Christensen, Jeppe H.; Gutenberg, Johanna; Pontoppidan, Niels H.; Smith, Andrew; Spanoudakis, George; Bamiou, Doris-Eva

Published 01-09-2020


Ideally, public health policies are formulated from scientific data; however, policy-specific data are often unavailable. Big data can generate ecologically valid, high-quality scientific evidence, and therefore has the potential to change how public health policies are formulated. Here, we discuss the use of big data for developing evidence-based hearing health policies, using data collected and analyzed with a research prototype of a data repository known as EVOTION (EVidence-based management of hearing impairments: public health pOlicy-making based on fusing big data analytics and simulaTION) to illustrate our points. Data in the repository consist of audiometric clinical data, prospective real-world data collected from hearing aids and an app, and responses to questionnaires collected for research purposes. To date, we have used the platform and a synthetic dataset to model the estimated risk of noise-induced hearing loss and have shown novel evidence of ways in which external factors influence hearing aid usage patterns. We contend that this research prototype data repository illustrates the value of using big data for policy-making by providing high-quality evidence that could be used to formulate and evaluate the impact of hearing health care policies.


Prelinguistic Vocal Development in Children With Cochlear Implants: A Systematic Review

McDaniel, Jena; Gifford, René H.

Published 01-09-2020


Objectives: This systematic review is designed to (a) describe measures used to quantify vocal development in pediatric cochlear implant (CI) users, (b) synthesize the evidence on prelinguistic vocal development in young children before and after cochlear implantation, and (c) analyze the application of the current evidence for evaluating change in vocal development before and after cochlear implantation in young children. Investigations of prelinguistic vocal development after cochlear implantation are only beginning to uncover the expected course of prelinguistic vocal development in children with CIs and the factors that influence that course, which varies substantially across pediatric CI users. A deeper understanding of prelinguistic vocal development will improve professionals’ abilities to determine whether a child with a CI is exhibiting sufficient progress soon after implantation and to adjust intervention as needed.

Design: We systematically searched the PubMed, ProQuest, and CINAHL databases for primary reports of children who received a CI before 5 years 0 months of age that included at least one measure of nonword, nonvegetative vocalizations. We also completed supplementary searches.

Results: Of the 1916 identified records, 59 met inclusion criteria. These records comprised 1125 total participants drawn from 36 unique samples. Records included a median of 8 participants and rarely included children with disabilities other than hearing loss. Nearly all of the records met criteria for level 3 quality of evidence on a scale of 1 (highest) to 4 (lowest). Records utilized a wide variety of vocalization measures but often incorporated features related to canonical babbling. The limited evidence from pediatric CI candidates before implantation suggests that they are likely to exhibit deficits in canonical syllable production, a critical vocal development skill, and in phonetic inventory size. Following cochlear implantation, multiple studies report similar patterns of growth but faster rates of canonical syllable production in children with CIs than in peers with comparable durations of robust hearing. However, caution is warranted because these vocal development skills still emerge at older chronological ages in children with CIs than in chronological-age peers with typical hearing.

Conclusions: Despite including a relatively large number of records, the evidence in this review regarding changes in vocal development before and after cochlear implantation in young children remains limited. A deeper understanding is needed of when prelinguistic skills are expected to develop, the factors that explain deviation from that course, and the long-term impacts of variations in prelinguistic vocal development. The diverse and dynamic nature of the relatively small population of pediatric CI users, as well as relatively new vocal development measures, presents challenges for documenting and predicting vocal development in pediatric CI users before and after cochlear implantation. Synthesizing results across multiple institutions and completing rigorous studies with theoretically motivated, falsifiable research questions will address a number of challenges for understanding prelinguistic vocal development in children with CIs and its relations with other current and future skills. Clinical implications include the need to measure prelinguistic vocalizations regularly and systematically to inform intervention planning.


The Impact of Family Environment on Language Development of Children With Cochlear Implants: A Systematic Review and Meta-Analysis

Holzinger, Daniel; Dall, Magdalena; Sanduvete-Chaves, Susana; Saldaña, David; Chacón-Moscoso, Salvador; Fellinger, Johannes

Published 01-09-2020


Objectives: The authors conducted a systematic review of the literature and meta-analyses to assess the influence of family environment on language development in children with cochlear implants.

Design: The PubMed, Excerpta Medica Database (EMBASE), Education Resources Information Center (ERIC), Cumulative Index to Nursing and Allied Health Literature (CINAHL), Healthcare Literature Information Network, PubPsych, and Social SciSearch databases were searched. The search strategy included terms describing family environment, child characteristics, and language development. Studies were included that (a) related distal family variables (such as parental income level, parental education, family size, and parental stress) and/or more proximal variables that directly affect the child (such as family engagement and participation in intervention, parenting style, and, more specifically, the quantity and quality of parental linguistic input) to child language outcomes; (b) included children implanted before the age of 5 years; (c) measured child language before the age of 21 years with standardized instruments; (d) were published between 1995 and February 2018; and (e) were published as peer-reviewed articles. The methodological quality was assessed with an adaptation of a previously validated checklist. Meta-analyses were conducted assuming a random-effects model.

Results: A total of 22 study populations reported in 27 publications were included. Methodological quality was highly variable. Ten studies had a longitudinal design. Three meta-analyses of the correlations between family variables and child language development could be performed. A strong effect of the quality and quantity of parental linguistic input in the first 4½ years postimplantation on the child’s language was found, r = 0.564, p ≤ 0.001, 95% confidence interval (CI) = 0.449 to 0.660, accounting for 31.7% of the variance in child language outcomes. Results demonstrated high homogeneity, Q(3) = 1.823, p = 0.61, I² = 0. Higher-level facilitative language techniques, such as parental expansions of the child’s utterances or the use of open-ended questions, predicted child language skills. Risk of publication bias was not detected. The results on the impact of family involvement/participation in intervention on child language development were more heterogeneous. The meta-analysis included mainly cross-sectional studies and identified low to moderate benefits, r = 0.380, p ≤ 0.052, 95% CI = −0.004 to 0.667, which fell just short of significance. Socioeconomic status, mainly operationalized as parental level of education, showed a positive correlation with child language development in most studies. The meta-analysis confirmed an overall low and nonsignificant average correlation, r = 0.117, p = 0.262, 95% CI = −0.087 to 0.312. A limitation of the study was the lack of some potentially relevant variables, such as multilingualism or family screen time.

Conclusions: These data support the hypothesis that parental linguistic input during the first years after cochlear implantation strongly predicts later child language outcomes. Effects of parental involvement in intervention and of parental education are comparatively weaker and more heterogeneous. These findings underscore the need for early-intervention programs for children with cochlear implants that focus on supporting parents to increase their children’s exposure to high-quality conversation.
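The headline variance figure follows directly from the correlation: the proportion of variance explained is the square of r. A quick arithmetic sketch of that relationship (our own check, not the authors' analysis code; the small mismatch with the reported 31.7% presumably reflects an unrounded r):

```python
# Variance explained by a correlation is the coefficient of determination, r^2.
r = 0.564                    # reported correlation: parental linguistic input vs. child language
variance_explained = r ** 2  # ~0.318, i.e., ~31.8% (article reports 31.7%)
print(f"{variance_explained:.1%}")
```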


Meta-Analysis on the Identification of Linguistic and Emotional Prosody in Cochlear Implant Users and Vocoder Simulations

Everhardt, Marita K.; Sarampalis, Anastasios; Coler, Matt; Baskent, Deniz; Lowie, Wander

Published 01-09-2020


Objectives: This study quantitatively assesses how cochlear implants (CIs) and vocoder simulations of CIs influence the identification of linguistic and emotional prosody in nontonal languages. By means of meta-analysis, it was explored how accurately CI users and normal-hearing (NH) listeners of vocoder simulations (henceforth: simulation listeners) identify prosody compared with NH listeners of unprocessed speech (henceforth: NH listeners), whether this effect of electric hearing differs between CI users and simulation listeners, and whether the effect of electric hearing is influenced by the type of prosody that listeners identify or by the availability of specific cues in the speech signal.

Design: Records were found by searching the PubMed Central, Web of Science, Scopus, ScienceDirect, and PsycINFO databases (January 2018) using the search terms “cochlear implant prosody” and “vocoder prosody.” Records (published in English) were included that reported results of experimental studies comparing CI users’ and/or simulation listeners’ identification of linguistic and/or emotional prosody in nontonal languages to that of NH listeners (all ages included). Studies that met the inclusion criteria were subjected to a multilevel random-effects meta-analysis.

Results: Sixty-four studies reported in 28 records were included in the meta-analysis. The analysis indicated that CI users and simulation listeners were less accurate in identifying linguistic and emotional prosody than NH listeners, that the identification of emotional prosody was more strongly compromised by the electric hearing speech signal than linguistic prosody was, and that the poor transmission of fundamental frequency (f0) through the electric hearing speech signal was the main cause of compromised prosody identification in CI users and simulation listeners. Moreover, results indicated that the accuracy with which CI users and simulation listeners identified linguistic and emotional prosody was comparable, suggesting that vocoder simulations with carefully selected parameters can provide a good estimate of how prosody may be identified by CI users.

Conclusions: The meta-analysis revealed a robust negative effect of electric hearing, where CIs and vocoder simulations had a similar negative influence on the identification of linguistic and emotional prosody, which seemed mainly due to inadequate transmission of f0 cues through the degraded electric hearing speech signal of CIs and vocoder simulations.


Effectiveness and Safety of Advanced Audiology-Led Triage in Pediatric Otolaryngology Services

Pokorny, Michelle A.; Wilson, Wayne J.; Whitfield, Bernard C. S.; Thorne, Peter R.

Published 01-09-2020


Objectives: Expansion of the scopes of practice of allied health practitioners has the potential to improve the efficiency and cost-effectiveness of healthcare, given the identified shortages in medical personnel. Despite numerous examples in other allied health disciplines, this has yet to be applied to pediatric audiology. This study aimed to investigate the effectiveness and safety of using audiologists with advanced training to independently triage children referred to otolaryngology (ORL) services, and to compare the subsequent use of specialist resources and postoperative grommet care with those of a standard medical ORL service.

Design: One hundred twenty children consecutively referred to a large ORL outpatient service in Queensland, Australia, for middle ear and hearing concerns were prospectively allocated to either the ORL service or the Advanced Audiology-led service. Demographic and clinical data were extracted from electronic medical records and compared between the two services. Clinical incidents and adverse events were recorded for the Advanced Audiology-led service.

Results: Approximately half of all children referred to ORL for middle ear or hearing concerns were discharged without requiring any treatment, with the remaining half offered surgical treatment. The Advanced Audiology-led model increased the proportion of children assessed by ORL who proceeded to surgery from 57% to 82% compared with the standard medical ORL model. Children followed up by the audiologists after grommet insertion were more likely to be discharged independently, and at the first postoperative review appointment, compared with the standard medical ORL service. There were no reports of adverse events or long-term bilateral hearing loss after discharge by the Advanced Audiology-led service.

Conclusions: These findings indicate that an Advanced Audiology-led service provides a safe and effective triaging model for the independent management of children not requiring treatment and of children requiring routine postoperative grommet review, and improves the effective use of specialist resources compared with the standard medical ORL service.


Effect of Cochlear Implantation on Vestibular Evoked Myogenic Potentials and Wideband Acoustic Immittance

Merchant, Gabrielle R.; Schulz, Kyli M.; Patterson, Jessie N.; Fitzpatrick, Denis; Janky, Kristen L.

Published 01-09-2020


Objectives: The objective of this study was to determine whether absent air conduction stimuli vestibular evoked myogenic potential (VEMP) responses in ears after cochlear implantation can result from alterations in peripheral auditory mechanics rather than from vestibular loss. Peripheral mechanical changes were investigated by comparing the response rates of air and bone conduction VEMPs and by measuring and evaluating wideband acoustic immittance (WAI) responses in ears with cochlear implants and in normal-hearing control ears. The hypothesis was that the presence of a cochlear implant can lead to an air-bone gap, causing absent air conduction stimuli VEMP responses but present bone conduction vibration VEMP responses (indicating normal vestibular function), with changes in WAI as compared with ears with normal hearing. Further hypotheses were that subsets of ears with cochlear implants would (a) have present VEMP responses to both stimuli, indicating normal vestibular function, and either normal or near-normal WAI, or (b) have absent VEMP responses to both stimuli, regardless of WAI, due to true vestibular loss.

Design: Twenty-seven ears with cochlear implants (age range 7 to 31 years) and 10 ears with normal hearing (age range 7 to 31 years) were included in the study. All ears completed otoscopy, audiometric testing, 226 Hz tympanometry, WAI measures (absorbance), air conduction stimuli cervical and ocular VEMP testing through insert earphones, and bone conduction vibration cervical and ocular VEMP testing with a mini-shaker. VEMP responses to air and bone conduction stimuli, as well as absorbance responses, were compared between ears with normal hearing and ears with cochlear implants.

Results: All ears with normal hearing demonstrated 100% present VEMP response rates for both stimuli. Ears with cochlear implants had higher response rates to bone conduction vibration than to air conduction stimuli for both cervical and ocular VEMPs; however, this difference was significant only for ocular VEMPs. Ears with cochlear implants demonstrated reduced low-frequency absorbance (500 to 1200 Hz) compared with ears with normal hearing. To further analyze absorbance, ears with cochlear implants were placed into subgroups based on their cervical and ocular VEMP response patterns: (1) present air conduction stimuli response, present bone conduction vibration response; (2) absent air conduction stimuli response, present bone conduction vibration response; and (3) absent air conduction stimuli response, absent bone conduction vibration response. For both cervical and ocular VEMPs, the group with absent air conduction stimuli responses and present bone conduction vibration responses demonstrated the largest decrease in low-frequency absorbance compared with the ears with normal hearing.

Conclusions: Bone conduction VEMP response rates were higher than air conduction VEMP response rates in ears with cochlear implants. Ears with cochlear implants also demonstrated changes in low-frequency absorbance consistent with a stiffer system. This effect was largest for ears that had absent air conduction but present bone conduction VEMPs. These findings suggest that this group, in particular, has a mechanical change that could lead to an air-bone gap, thus abolishing the air conduction VEMP response due to an alteration in mechanics and not a true vestibular loss. Clinical considerations include using bone conduction vibration VEMPs and WAI for preoperative and postoperative testing in patients undergoing cochlear implantation.


Application of Rasch Analysis to the Evaluation of the Measurement Properties of the Hearing Handicap Inventory for the Elderly

Heffernan, Eithne; Weinstein, Barbara E.; Ferguson, Melanie A.

Published 01-09-2020


Objectives: The aim of this research was to evaluate the measurement properties of the Hearing Handicap Inventory for the Elderly (HHIE). The HHIE is one of the most widely used patient-reported outcome measures in audiology. It was originally developed in the United States in the 1980s as a measure of the social and emotional impact of hearing loss in older adults. It contains 25 items that are accompanied by a 3-point response scale. To date, the measurement properties of the HHIE have primarily been assessed via traditional psychometric analysis techniques (e.g., Cronbach’s alpha and principal components analysis). However, traditional techniques are now known to have several limitations in comparison to more modern approaches. Therefore, this research used a modern psychometric analysis technique, namely Rasch analysis, to evaluate the HHIE.

Design: Rasch analysis was performed on HHIE data collected from 380 adults with hearing loss. The participants were principally recruited from the participant database of the National Institute for Health Research Nottingham Biomedical Research Centre in the United Kingdom. Additional participants were recruited from two UK audiology clinics and the online forum of a UK hearing loss charity. Rasch analysis was used to assess the measurement properties of the HHIE (i.e., fit to the Rasch model, unidimensionality, targeting, and person separation reliability) and of its individual items (i.e., response dependency, fit, differential item functioning, and threshold ordering).

Results: The HHIE was found to have several strong measurement properties. Specifically, it was well targeted and had high person separation reliability. However, it displayed poor fit to the Rasch model and was not unidimensional. The majority of the items were free of response dependency (i.e., redundancy) and were suited to the 3-point response scale. However, two items were found to be better suited to a dichotomous response scale. Furthermore, nine items were identified as candidates for removal from the questionnaire, as they exhibited poor fit and/or differential item functioning (i.e., item bias) associated with gender. The measurement properties of the HHIE could be improved by removing these items and adjusting the scores of the two items that require a dichotomous response scale. These amendments resulted in a 16-item version of the HHIE that had good fit to the Rasch model and was unidimensional.

Conclusions: It is vital to ensure that high-quality outcome measures are used in audiology research and practice. This study evaluated one of the foremost outcome measures in the field: the HHIE. The results demonstrated that the HHIE has several strong measurement properties. Amending the HHIE, such as by removing items exhibiting poor fit, could further enhance its quality. A unique aspect of this study was the application of Rasch analysis to the evaluation of the HHIE. It is recommended that future studies use modern techniques to develop and identify high-quality, hearing-specific outcome measures.
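For readers unfamiliar with the model underlying this analysis: in the dichotomous Rasch model, the probability of endorsing an item depends only on the difference between the person's trait level and the item's difficulty, both expressed on a common logit scale. A minimal illustrative sketch (the rating-scale variant used for 3-point items adds threshold parameters, omitted here):

```python
import math

def rasch_probability(theta: float, b: float) -> float:
    """Dichotomous Rasch model: probability that a person with trait level
    `theta` (e.g., degree of hearing handicap) endorses an item with
    difficulty `b`; both parameters are on the same logit scale."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# When trait level equals item difficulty, endorsement probability is 50%:
print(rasch_probability(theta=0.0, b=0.0))  # 0.5
# A more severe handicap makes endorsement more likely:
print(rasch_probability(theta=2.0, b=0.0))  # ~0.88
```

Fit statistics and differential item functioning, as assessed in the study, quantify how far observed item responses depart from this idealized pattern.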


Postoperative Intracochlear Electrocochleography in Pediatric Cochlear Implant Recipients: Association to Audiometric Thresholds and Auditory Performance

Attias, Joseph; Ulanovski, David; Hilly, Ohad; Greenstein, Tally; Sokolov, Merav; HabibAllah, Suhail; Mormer, Hen; Raveh, Eyal

Published 01-09-2020


Objectives: The aim of this study was to compare intracochlear-recorded cochlear microphonic (CM) responses with behavioral audiometry thresholds in young children with congenital hearing loss, 2 to 5 years after cochlear implantation early in life. In addition, differences in speech and auditory outcomes were assessed between children with and without residual hearing.

Design: The study was conducted at a tertiary, university-affiliated, pediatric medical center. CM responses were recorded by an intracochlear electrocochleography technique from 102 implanted ears of 60 children, and those responses were correlated with behavioral audiometry thresholds at frequencies of 0.125 to 2 kHz. All children had received an Advanced Bionics cochlear implant with High Focus J1 or MidScala electrodes, along with extensive auditory rehabilitation before and after implantation, including the use of conventional hearing aids. Speech, Spatial and Qualities of Hearing scale scores, Category of Auditory Performance scale scores, and educational-setting information were obtained for each participant. These cochlear implantation (CI) outcomes were compared between children with and without residual CM responses.

Results: Two distinctive CM response patterns were found among the implanted children. Of all ears diagnosed with cochlear hearing loss (n = 88), clear CM responses were obtained in only 29 ears; in all other ears, no CM responses were obtained at maximum output levels. The CM responses were highly correlated with behavioral audiometric thresholds in the 0.125 to 2 kHz range, with coefficients ranging from 0.7 to 0.83. Of all ears diagnosed with auditory neuropathy spectrum disorder (n = 14), eight ears had residual hearing and recordable CM responses postimplantation; the other six ears showed no recordable CM responses at maximum output levels for all tested frequencies. The ears with recordable responses showed CM thresholds that were ostensibly better than the behavioral audiometry thresholds, but these correlated poorly with the thresholds at the tested frequencies. Children with residual hearing showed significantly better auditory outcomes with CI than those without residual hearing.

Conclusions: In children with congenital cochlear hearing loss, objective intracochlear CM responses can reliably predict the residual audiometric threshold. However, in children with auditory neuropathy spectrum disorder, CM thresholds did not match the behavioral audiometric responses. Postoperatively, children with recordable CM responses, indicating preserved residual hearing, demonstrated better CI outcomes.


Tracking Cognitive Spare Capacity During Speech Perception With EEG/ERP: Effects of Cognitive Load and Sentence Predictability

Hunter, Cynthia R.

Published 01-09-2020


Objectives: Listening to speech in adverse listening conditions is effortful. Objective assessment of cognitive spare capacity during listening can serve as an index of the effort needed to understand speech. Cognitive spare capacity is influenced both by signal-driven demands posed by listening conditions and by top-down demands intrinsic to spoken language processing, such as memory use and semantic processing. Previous research indicates that electrophysiological responses, particularly alpha oscillatory power, may index listening effort. However, it is not known how these indices respond to memory and semantic processing demands during spoken language processing in adverse listening conditions. The aim of the present study was twofold: first, to assess the impact of memory demands on electrophysiological responses during recognition of degraded, spoken sentences, and second, to examine whether predictable sentence contexts increase or decrease cognitive spare capacity during listening.

Design: Cognitive demand was varied in a memory load task in which young adult participants (n = 20) viewed either low-load (one digit) or high-load (seven digits) sequences of digits, then listened to noise-vocoded spoken sentences that were either predictable or unpredictable, and then reported the final word of the sentence and the digits. Alpha oscillations in the frequency domain and event-related potentials in the time domain of the electrophysiological data were analyzed, as was behavioral accuracy for both words and digits.

Results: Measured during sentence processing, event-related desynchronization of alpha power was greater (more negative) under high load than low load and was also greater for unpredictable than predictable sentences. A complementary pattern was observed for the P300/late positive complex (LPC) to sentence-final words, such that P300/LPC amplitude was reduced under high load compared with low load and for unpredictable compared with predictable sentences. Both words and digits were identified more quickly and accurately on trials in which spoken sentences were predictable.

Conclusions: Results indicate that during a sentence-recognition task, both cognitive load and sentence predictability modulate electrophysiological indices of cognitive spare capacity, namely alpha oscillatory power and P300/LPC amplitude. Both electrophysiological and behavioral results indicate that a predictive sentence context reduces cognitive demands during listening. Findings contribute to a growing literature on objective measures of cognitive demand during listening and identify predictable sentence context as a top-down factor that can support ease of listening.


Speech Understanding With Bimodal Stimulation Is Determined by Monaural Signal to Noise Ratios: No Binaural Cue Processing Involved

Dieudonné, Benjamin; Francart, Tom

Published 01-09-2020


Objectives: To investigate the mechanisms behind binaural and spatial effects in speech understanding for bimodal cochlear implant listeners, and in particular to test our hypothesis that their speech understanding can be characterized by means of monaural signal to noise ratios, rather than by complex binaural cue processing such as binaural unmasking.

Design: We applied a semantic framework to characterize binaural and spatial effects in speech understanding in an extensive selection of the literature on bimodal listeners. In addition, we performed two experiments in which we measured speech understanding in different masker types (1) using head-related transfer functions, and (2) while adapting the broadband signal to noise ratios in both ears independently. We simulated bimodal hearing with a vocoder in one ear (the cochlear implant side) and a low-pass filter in the other ear (the hearing aid side). By design, the cochlear implant side was the main contributor to speech understanding in our simulation.

Results: We found that spatial release from masking can be explained as a simple trade-off between a monaural change in signal to noise ratio at the cochlear implant side (quantified as the head shadow effect) and an opposite change in signal to noise ratio at the hearing aid side (quantified as a change in bimodal benefit). In simulated bimodal listeners, we found that for every 1 dB increase in signal to noise ratio at the hearing aid side, the bimodal benefit improved by approximately 0.4 dB.

Conclusions: Although complex binaural cue processing is often implicated when discussing speech intelligibility in adverse listening conditions, for bimodal listeners performance can be explained simply on the basis of monaural signal to noise ratios.
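Because the reported trade-off is linear, the net effect of a spatial configuration can be sketched as a weighted sum of the two monaural SNR changes. A toy illustration (the function and the example numbers are ours, built only on the ~0.4 dB-per-dB slope reported above):

```python
BENEFIT_SLOPE = 0.4  # dB of bimodal benefit per 1 dB of SNR at the hearing aid side

def net_snr_effect(delta_snr_ci_db: float, delta_snr_ha_db: float) -> float:
    """Approximate net change in effective SNR (dB) for a simulated bimodal
    listener when the cochlear implant side changes by `delta_snr_ci_db`
    and the hearing aid side changes by `delta_snr_ha_db`."""
    return delta_snr_ci_db + BENEFIT_SLOPE * delta_snr_ha_db

# Head shadow example: moving the noise source might raise CI-side SNR by
# 3 dB while lowering HA-side SNR by 3 dB -- a monaural trade-off rather
# than binaural unmasking:
print(net_snr_effect(3.0, -3.0))  # 1.8 dB net improvement
```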


Listening Difficulties of Children With Cochlear Implants in Mainstream Secondary Education

Krijger, Stefanie; Coene, Martine; Govaerts, Paul J.; Dhooge, Ingeborg

Published 01-09-2020


Objectives: Previous research has shown that children with cochlear implants (CIs) encounter more communication difficulties than their normal-hearing (NH) peers in kindergarten and elementary schools. Yet, little is known about the potential listening difficulties that children with CIs may experience during secondary education. The aim of this study was to investigate the listening difficulties of children with a CI in mainstream secondary education and to compare these results with the difficulties reported by their NH peers and the difficulties observed by their teachers.

Design: The Dutch version of the Listening Inventory for Education Revised (LIFE-R) was administered to 19 children (mean age = 13 years 9 months; SD = 9 months) who received a CI early in life, to their NH classmates (n = 239), and to their teachers (n = 18). All participants were enrolled in mainstream secondary education in Flanders (first to fourth grades). The Listening Inventory for Secondary Education consists of 15 typical listening situations as experienced by students (LIFEstudent) during class activities (LIFEclass) and during social activities at school (LIFEsocial). The teachers completed a separate version of the Listening Inventory for Secondary Education (LIFEteacher) and the Screening Instrument for Targeting Educational Risk.

Results: Participants with CIs reported significantly more listening difficulties than their NH peers. A regression model estimated that 75% of the participants with CIs were at risk of experiencing listening difficulties. The chances of experiencing listening difficulties were significantly higher for participants with CIs in 7 out of 15 listening situations. The 3 listening situations with the highest chance of resulting in listening difficulties were (1) listening during group work, (2) listening to multimedia, and (3) listening in large-sized classrooms. Results of the teachers’ questionnaires (LIFEteacher and Screening Instrument for Targeting Educational Risk) did not show a similar significant difference in listening difficulties between participants with a CI and their NH peers. According to teachers, NH participants even obtained significantly lower scores for staying on task and for participation in class than participants with a CI.

Conclusions: Although children with a CI seemingly fit in well in mainstream schools, they still experience significantly more listening difficulties than their NH peers. Low signal to noise ratios (SNRs), distortions of the speech signal (multimedia, reverberation), distance, lack of visual support, and directivity effects of the microphones were identified as difficulties for children with a CI in the classroom. As teachers may not always notice these listening difficulties, a list of practical recommendations is provided in this study to raise awareness among teachers and to minimize the difficulties.

Pubmed PDF Web

Cortical fNIRS Responses Can Be Better Explained by Loudness Percept than Sound Intensity

Weder, Stefan; Shoushtarian, Mehrnaz; Olivares, Virginia; Zhou, Xin; Innes-Brown, Hamish; McKay, Colette

Published 01-09-2020


Objectives: Functional near-infrared spectroscopy (fNIRS) is a brain imaging technique particularly suitable for hearing studies. However, the nature of fNIRS responses to auditory stimuli presented at different stimulus intensities is not well understood. In this study, we investigated whether fNIRS response amplitude was better predicted by stimulus properties (intensity) or individually perceived attributes (loudness).Design: Twenty-two young adults were included in this experimental study. Four different stimulus intensities of a broadband noise were used as stimuli. First, loudness estimates for each stimulus intensity were measured for each participant. Then, the 4 stimulation intensities were presented in counterbalanced order while recording hemoglobin saturation changes from cortical auditory brain areas. The fNIRS response was analyzed in a general linear model design, using 3 different regressors: a non-modulated, an intensity-modulated, and a loudness-modulated regressor.Results: Higher intensity stimuli resulted in higher amplitude fNIRS responses. The relationship between stimulus intensity and fNIRS response amplitude was better explained by a regressor modulated by individually measured loudness estimates than by a regressor modulated by stimulus intensity alone.Conclusions: Brain activation in response to different stimulus intensities is more reliant upon individual loudness sensation than physical stimulus properties. Therefore, in measurements using different auditory stimulus intensities or subjective hearing parameters, loudness estimates should be examined when interpreting results.
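The amplitude-modulated GLM design described above can be caricatured in a few lines. The sketch below is not the study's code; the trial onsets, intensities, and loudness values are invented. It builds an intensity-modulated and a loudness-modulated regressor (event sticks scaled by each attribute, convolved with a simple hemodynamic response), simulates a channel that actually tracks loudness, and checks which regressor fits better:

```python
import numpy as np
from math import factorial

def hrf(t):
    # Gamma-shaped hemodynamic response function (a common textbook approximation)
    return t**6 * np.exp(-t) / factorial(6)

n = 300                                          # 300 samples at ~1 sample/s
onsets = [30, 90, 150, 210]                      # one trial per stimulus intensity (hypothetical)
intensity = np.array([55.0, 65.0, 75.0, 85.0])   # dB SPL (hypothetical values)
loudness = np.array([10.0, 25.0, 55.0, 90.0])    # per-listener loudness estimates (hypothetical)

def regressor(amplitudes):
    # Event sticks scaled by the chosen attribute, convolved with the HRF
    stick = np.zeros(n)
    for on, a in zip(onsets, amplitudes):
        stick[on] = a
    r = np.convolve(stick, hrf(np.arange(30.0)))[:n]
    return r / np.max(np.abs(r))

X_int = regressor(intensity)
X_loud = regressor(loudness)

# Simulate a channel whose response actually tracks loudness
rng = np.random.default_rng(0)
y = 2.0 * X_loud + rng.normal(0.0, 0.05, n)

def sse(X):
    # Residual sum of squares of a one-regressor GLM with an intercept
    A = np.column_stack([X, np.ones(n)])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return float(np.sum((y - A @ beta) ** 2))

print(sse(X_loud) < sse(X_int))   # loudness-modulated regressor fits this channel better
```

In the study the comparison runs the other way as well, across all channels and participants; this only illustrates why distinct modulation patterns are separable at all.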

Pubmed PDF Web

Detection of Extracochlear Electrodes in Cochlear Implants with Electric Field Imaging/Transimpedance Measurements: A Human Cadaver Study

de Rijk, Simone R.; Tam, Yu C.; Carlyon, Robert P.; Bance, Manohar L.

Published 01-09-2020


Objectives: Extracochlear electrodes in cochlear implants (CI), defined as individual electrodes on the electrode array located outside of the cochlea, are not a rare phenomenon. The presence of extracochlear electrodes frequently goes unnoticed and could result in their being assigned stimulation frequencies that either are not delivered to the cochlea or stimulate neural populations that overlap with those of intracochlear electrodes, potentially reducing performance. The current gold standard for detection of extracochlear electrodes is computed tomography (CT), which is time-intensive, costly, and involves radiation. It is hypothesized that a collection of Stimulation-Current-Induced Non-Stimulating Electrode Voltage recordings (SCINSEVs), commonly referred to as “transimpedance measurements (TIMs)” or electric field imaging (EFI), could be utilized to detect extracochlear electrodes even when contact impedances are low. An automated analysis tool is introduced for detection and quantification of extracochlear electrodes.Design: Eight fresh-frozen human cadaveric heads were implanted with the Advanced Bionics HiRes90K with a HiFocus 1J lateral-wall electrode. The cochlea was flushed with 1.0% saline through the lateral semicircular canal. Contact impedances and SCINSEVs were recorded for complete insertion and for 1 to 5 extracochlear electrodes. Measured conditions included: air in the middle ear (to simulate electrodes situated in the middle ear), 1.0% saline in the middle ear (to simulate intraoperative conditions with saline or blood in the middle ear), and soft tissue (temporal muscle) wrapped around the extracochlear electrodes (to simulate postoperative soft-tissue encapsulation of the electrodes).
Intraoperative SCINSEVs from patients were collected for clinical purposes during slow insertion of the electrode array, as well as from a patient with known extracochlear electrodes postoperatively.Results: Full insertion of the cochlear implant in the fresh-frozen human cadaveric heads with a flushed cochlea resulted in contact impedances in the range of 6.06 ± 2.99 kΩ (mean ± 2SD). Contact impedances were high when the extracochlear electrodes were located in air, but remained similar to intracochlear contact impedances when in saline or soft tissue. SCINSEVs showed a change in shape for the extracochlear electrodes in air, saline, and soft tissue. The automated analysis tool showed a specificity and sensitivity of 100% for detection of two or more extracochlear electrodes in saline and soft tissue. The quantification of two or more extracochlear electrodes was correct for 84% and 81% of the saline and soft tissue measurements, respectively.Conclusions: Our analysis of SCINSEVs (specifically the EFIs from this manufacturer) shows good potential as a detection tool for extracochlear electrodes, even when contact impedances remain similar to intracochlear values. SCINSEVs could potentially replace CT in the initial screening for extracochlear electrodes. Detecting migration of the electrode array during the final stages of surgery could potentially prevent re-insertion surgery for some CI users. The automated detection tool could assist in detection and quantification of two or more extracochlear electrodes.
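The "change in shape" the automated tool exploits can be caricatured with a toy heuristic that is entirely hypothetical (the paper's actual algorithm and thresholds are not reproduced here): intracochlear transimpedance falls off with distance from the stimulating electrode, while electrodes pooled together in fluid outside the cochlea record nearly identical voltages, flattening the basal tail of the profile:

```python
import math

def count_flat_basal(profile, rel_tol=0.05):
    """Count trailing (basal) electrodes whose recorded voltage differs from
    the neighbouring electrode by less than rel_tol (hypothetical threshold).
    Returns the number of suspected extracochlear electrodes."""
    flat = 0
    for i in range(len(profile) - 1, 0, -1):
        if abs(profile[i] - profile[i - 1]) / profile[i - 1] < rel_tol:
            flat += 1
        else:
            break
    return flat + 1 if flat else 0   # n equal contacts give n-1 flat pairs

# Synthetic profile for apical stimulation: exponential decay over 14
# intracochlear contacts, then 2 extracochlear contacts at a common voltage.
profile = [math.exp(-0.2 * d) for d in range(14)] + [0.02, 0.02]
clean = [math.exp(-0.2 * d) for d in range(16)]   # fully intracochlear control

print(count_flat_basal(profile), count_flat_basal(clean))
```

A fully inserted (decaying) profile yields zero flagged contacts, while the flattened tail flags both simulated extracochlear electrodes; the published tool works on the full set of recordings rather than a single profile.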

Pubmed PDF Web

Birth Weight and Adult-Onset Hearing Loss

Gupta, Shruti; Wang, Molin; Hong, Biling; Curhan, Sharon G.; Curhan, Gary C.

Published 01-09-2020


Objectives: Among low-birth-weight infants, exposure to stress or undernutrition in utero may adversely affect cochlear development. As cochlear reserve declines, the risk of hearing loss may increase with age. While low birth weight is associated with a higher risk of neonatal hearing loss, our objective was to examine whether birth weight was associated with adult-onset, self-reported hearing loss in the Nurses’ Health Studies (NHS) I and II (n = 113,130).Design: We used Cox proportional hazards regression to prospectively examine whether birth weight, as well as gestational age at birth, is associated with adult-onset hearing loss. Participants reported their birth weight in 1992 in NHS I and 1991 in NHS II. Mothers of NHS II participants reported gestational age at birth in a substudy (n = 28,590). The primary outcome was adult-onset, self-reported moderate or greater hearing loss, based on questionnaires administered in 2012/2016 in NHS I and 2009/2013 in NHS II.Results: Our results suggested a higher risk of hearing loss among those with birth weight <5.5 lbs compared with birth weight 7 to <8.5 lbs (pooled multivariable-adjusted hazard ratio 1.14, 95% confidence interval = 1.04–1.23; p trend = 0.01). Additionally, participants with gestational age at birth ≥42 weeks had a higher risk of hearing loss, compared with gestational age 38 to <42 weeks (multivariable-adjusted hazard ratio 1.33, 95% confidence interval = 1.06–1.65).Conclusions: Birth weight <5.5 lbs was independently associated with higher risk of self-reported, adult-onset hearing loss. In addition, gestational age at birth ≥42 weeks was also associated with higher risk.
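As a quick arithmetic aside using only numbers quoted in the abstract: a reported hazard ratio and its 95% confidence interval determine the underlying log-hazard coefficient and its standard error, because HR = exp(β) and the interval is exp(β ± 1.96·SE) on the log scale:

```python
import math

# Pooled HR for birth weight <5.5 lbs, from the abstract
hr, lo, hi = 1.14, 1.04, 1.23

beta = math.log(hr)                            # Cox regression coefficient
se = (math.log(hi) - math.log(lo)) / (2 * 1.96)  # CI width on the log scale
z = beta / se                                  # Wald statistic

print(round(beta, 3), round(se, 3), round(z, 2))
```

The Wald statistic comfortably exceeds 1.96, consistent with the significant trend reported above.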

Pubmed PDF Web

Biopsychosocial Classification of Hearing Health Seeking in Adults Aged Over 50 Years in England

Sawyer, Chelsea S.; Armitage, Christopher J.; Munro, Kevin J.; Singh, Gurjit; Dawes, Piers D.

Published 01-09-2020


Objectives: Approximately 10 to 35% of people with a hearing impairment own a hearing aid. The present study aims to identify barriers to obtaining a hearing aid and inform future interventions by examining the biopsychosocial characteristics of adults aged 50+ according to 7 categories: (i) Did not report hearing difficulties, (ii) Reported hearing difficulties, (iii) Told a healthcare professional about experiencing hearing difficulties, (iv) Referred for a hearing assessment, (v) Offered a hearing aid, (vi) Accepted a hearing aid, and (vii) Reported using a hearing aid regularly.Design: The research was conducted using the English Longitudinal Study of Aging wave 7 with data obtained from 9666 adults living in England from June 2014 to May 2015. Cross-sectional data were obtained from a subset of 2845 participants aged 50 to 89 years of age with a probable hearing impairment measured by hearing screening (indicating a hearing threshold of >20 dB HL at 1 kHz or >35 dB HL at 3 kHz in the better ear). Classification according to hearing health-seeking category was via participants’ self-report. Participants in each category were compared with people in all subsequent categories to examine the associations between each category and biopsychosocial correlates (sex, age, ethnicity, educational level, wealth, audiometric hearing level, self-reported health status, cognitive performance, attitudes to aging, living alone, and engagement in social activities) using multiple logistic regression.Results: The proportions of individuals (N = 2845) in categories i to vii were 40.0% (n = 1139), 14.0% (n = 396), 4.5% (n = 129), 4.0% (n = 114), 1.2% (n = 34), 7.7% (n = 220), and 28.6% (n = 813), respectively. Severity of hearing impairment was the only factor predictive of all the categories of hearing health-seeking that could be modeled. 
Other correlates predictive of at least one category of hearing health-seeking included sex, age, self-reported health, participation in social activities, and cognitive function.Conclusions: For the first time, it was shown that 40.0% of people with an audiometrically identified probable hearing impairment did not report hearing difficulties. Each of the five categories of hearing health-seeking that could be modeled had different drivers and, consequently, interventions should likely vary depending on the category of hearing health-seeking.

Pubmed PDF Web

Spectral-Temporal Trade-Off in Vocoded Sentence Recognition: Effects of Age, Hearing Thresholds, and Working Memory

Shader, Maureen J.; Yancey, Calli M.; Gordon-Salant, Sandra; Goupell, Matthew J.

Published 01-09-2020


Objectives: Cochlear implant (CI) signal processing degrades the spectral components of speech. This requires CI users to rely primarily on temporal cues, specifically, amplitude modulations within the temporal envelope, to recognize speech. Auditory temporal processing ability for envelope modulations worsens with advancing age, which may put older CI users at a disadvantage compared with younger users. To evaluate how potential age-related limitations for processing temporal envelope modulations impact spectrally degraded sentence recognition, noise-vocoded sentences were presented to younger and older normal-hearing listeners in quiet. Envelope modulation rates were varied from 10 to 500 Hz by adjusting the low-pass filter cutoff frequency (LPF). The goal of this study was to evaluate whether age impacts recognition of noise-vocoded speech and whether this age-related limitation exists for a specific range of envelope modulation rates.Design: Noise-vocoded sentence recognition in quiet was measured as a function of number of spectral channels (4, 6, 8, and 12 channels) and LPF (10, 20, 50, 75, 150, 375, and 500 Hz) in 15 younger normal-hearing listeners and 15 older near-normal-hearing listeners. Hearing thresholds and working memory were assessed to determine the extent to which these factors were related to recognition of noise-vocoded sentences.Results: Younger listeners achieved significantly higher sentence recognition scores than older listeners overall. Performance improved in both groups as the number of spectral channels and LPF increased. As the number of spectral channels increased, the differences in sentence recognition scores between groups decreased. A spectral-temporal trade-off was observed in both groups in which performance in the 8- and 12-channel conditions plateaued with lower-frequency amplitude modulations compared with the 4- and 6-channel conditions.
There was no interaction between age group and LPF, suggesting that both groups obtained similar improvements in performance with increasing LPF. The lack of an interaction between age and LPF may be due to the nature of the task of recognizing sentences in quiet. Audiometric thresholds were the only significant predictor of vocoded sentence recognition. Although performance on the working memory task declined with advancing age, working memory scores did not predict sentence recognition.Conclusions: Younger listeners outperformed older listeners for recognizing noise-vocoded sentences in quiet. The negative impact of age was reduced when ample spectral information was available. Age-related limitations for recognizing vocoded sentences were not affected by the temporal envelope modulation rate of the signal, but instead, appear to be related to a generalized task limitation or to reduced audibility of the signal.
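A noise vocoder of the general kind used in this study can be sketched in a few lines. This is a simplified FFT band-split with a moving-average envelope smoother standing in for the study's actual filters; the channel edges, sampling rate, and test signal are illustrative only:

```python
import numpy as np

def noise_vocode(x, fs, n_channels=4, env_lpf_hz=50.0):
    """Replace each band's fine structure with noise, keeping a smoothed envelope."""
    n = len(x)
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    edges = np.logspace(np.log10(100.0), np.log10(0.45 * fs), n_channels + 1)
    rng = np.random.default_rng(1)
    win = max(1, int(fs / env_lpf_hz))       # moving-average smoother ~ LPF cutoff
    kernel = np.ones(win) / win
    X = np.fft.rfft(x)
    out = np.zeros(n)
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (freqs >= lo) & (freqs < hi)
        band = np.fft.irfft(X * mask, n)                      # analysis band
        env = np.convolve(np.abs(band), kernel, mode="same")  # smoothed envelope
        carrier = np.fft.irfft(np.fft.rfft(rng.standard_normal(n)) * mask, n)
        out += env * carrier                                  # envelope on noise carrier
    return out

fs = 16000
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 300 * t) * (1.0 + np.sin(2 * np.pi * 4 * t))  # 4 Hz-modulated tone
y = noise_vocode(tone, fs, n_channels=4, env_lpf_hz=50.0)
print(y.shape)
```

Raising `env_lpf_hz` passes faster envelope modulations through, and raising `n_channels` restores spectral detail, which is exactly the trade-off manipulated in the experiment above.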

Pubmed PDF Web

Recognition of Accented Speech by Cochlear-Implant Listeners: Benefit of Audiovisual Cues

Waddington, Emily; Jaekel, Brittany N.; Tinnemore, Anna R.; Gordon-Salant, Sandra; Goupell, Matthew J.

Published 01-09-2020


Objectives: When auditory and visual speech information are presented together, listeners obtain an audiovisual (AV) benefit or a speech understanding improvement compared with auditory-only (AO) or visual-only (VO) presentations. Cochlear-implant (CI) listeners, who receive degraded speech input and therefore understand speech using primarily temporal information, seem to readily use visual cues and can achieve a larger AV benefit than normal-hearing (NH) listeners. It is unclear, however, if the AV benefit remains relatively large for CI listeners when trying to understand foreign-accented speech when compared with unaccented speech. Accented speech can introduce changes to temporal auditory cues and visual cues, which could decrease the usefulness of AV information. Furthermore, we sought to determine if the AV benefit was relatively larger in CI compared with NH listeners for both unaccented and accented speech.Design: AV benefit was investigated for unaccented and Spanish-accented speech by presenting English sentences in AO, VO, and AV conditions to 15 CI and 15 age- and performance-matched NH listeners. Performance matching between NH and CI listeners was achieved by varying the number of channels of a noise vocoder for the NH listeners. Because of the differences in age and hearing history of the CI listeners, the effects of listener-related variables on speech understanding performance and AV benefit were also examined.Results: AV benefit was observed for both unaccented and accented conditions and for both CI and NH listeners. The two groups showed similar performance for the AO and AV conditions, and the normalized AV benefit was relatively smaller for the accented than the unaccented conditions. In the CI listeners, older age was associated with significantly poorer performance with the accented speaker compared with the unaccented speaker. 
The negative impact of age was somewhat reduced by a significant improvement in performance with access to AV information.Conclusions: When auditory speech information is degraded by CI sound processing, visual cues can be used to improve speech understanding, even in the presence of a Spanish accent. The AV benefit of the CI listeners closely matched that of the NH listeners presented with vocoded speech, which was unexpected given that CI listeners appear to rely more on visual information to communicate. This result is perhaps due to the one-to-one age and performance matching of the listeners. While aging decreased CI listener performance with the accented speaker, access to visual cues boosted performance and could partially overcome the age-related speech understanding deficits for the older CI listeners.

Pubmed PDF Web

Loudness Perception and Dynamic Range Depending on Interphase Gaps of Biphasic Pulses in Cochlear Implants

Pieper, Sabrina H.; Brill, Stefan; Bahmer, Andreas

Published 01-09-2020


Objectives: The human auditory nerve can be electrically stimulated by cochlear implants (CIs) with pulse trains consisting of biphasic pulses with small interphase gaps (IPGs). In experiments with implanted animals, lower electrically evoked compound action potential (ECAP) thresholds were found for increasing IPGs (2.1, 10, 20, 30 μs). ECAP thresholds may correlate with loudness thresholds. Therefore, in this study, the IPG effect on loudness and dynamic range was investigated in nine CI subjects.Design: A loudness-matching procedure was designed with three different IPGs (2.1, 10, 30 μs) at three different pulse rates (200, 600, 1000 pps). An adaptive loudness-balancing test was performed at the 50% stimulus amplitude level of the dynamic range and most comfortable loudness level (MCL).Results: Increasing the IPG or increasing the pulse rate led to a significant decrease in stimulus amplitude for 50% level and MCL in the adaptive test. Because the stimulus amplitudes for 50% level and MCL decreased in a different manner, the calculated upper dynamic range between MCL and 50% level significantly decreased for increasing IPG between 0.24 and 0.38 dB. This decrease in the upper dynamic range was observed for all pulse rates.Conclusions: It is possible to reduce the stimulus amplitude level for the same loudness impression using larger IPGs in CIs; however, larger IPGs decrease the dynamic range. These findings could help during the fitting process of CIs to find the balance between saving battery and a proper dynamic range.

Pubmed PDF Web

Effectiveness of Phantom Stimulation in Shifting the Pitch Percept in Cochlear Implant Users

de Jong, Monique A. M.; Briaire, Jeroen J.; Biesheuvel, Jan Dirk; Snel-Bongers, Jorien; Böhringer, Stefan; Timp, Guy R. F. M.; Frijns, Johan H. M.

Published 01-09-2020


Objectives: Phantom electrode stimulation was developed for cochlear implant (CI) systems to provide a lower pitch percept by stimulating more apical regions of the cochlea, without inserting the electrode array deeper into the cochlea. Phantom stimulation involves simultaneously stimulating a primary and a compensating electrode with opposite polarity, thereby shifting the electrical field toward the apex and eliciting a lower pitch percept. The current study compared the effect sizes (in shifts of place of excitation) of multiple phantom configurations by matching the perceived pitch with phantom stimulation to that perceived with monopolar stimulation. Additionally, the effects of electrode location, type of electrode array, and stimulus level on the perceived pitch were investigated.Design: Fifteen adult Advanced Bionics CI users participated in this study, which included four experiments to eventually measure the shifts in place of excitation with five different phantom configurations. The proportions of current delivered to the compensating electrode, expressed as σ, were 0.5, 0.6, 0.7, and 0.8 for the symmetrical biphasic pulses (SBC0.5, SBC0.6, SBC0.7, and SBC0.8) and 0.75 for the pseudomonophasic pulse shape (PSA0.75). A pitch discrimination experiment was first completed to determine which basal and apical electrode contacts should be used for the subsequent experiments. An extensive loudness balancing experiment followed where both the threshold level (T-level) and most comfortable level (M-level) were determined to enable testing at multiple levels of the dynamic range. A pitch matching experiment was then performed to estimate the shift in place of excitation at the chosen electrode contacts. These rough shifts were then used in the subsequent experiment, where the shifts in place of excitation were determined more accurately.Results: Reliable data were obtained from 20 electrode contacts.
The average shifts were 0.39, 0.53, 0.64, 0.76, and 0.53 electrode contacts toward the apex for SBC0.5, SBC0.6, SBC0.7, SBC0.8, and PSA0.75, respectively. When only the best configurations per electrode contact were included, the average shift in place of excitation was 0.92 electrode contacts (range: 0.25 to 2.0). While PSA0.75 produced shifts equal to those of the SBC configurations at the apex, it did not result in a significant shift at the base. The shift in place of excitation was significantly larger at the apex and with lateral wall electrode contacts. The stimulus level did not affect the shift.Conclusions: Phantom stimulation results in significant shifts in place of excitation, especially at the apical part of the electrode array. The phantom configuration that leads to the largest shift in place of excitation differs between subjects. Therefore, the settings of the phantom electrode should be individualized so that the phantom stimulation is optimized for each CI user. The real added value to the sound quality needs to be established in a take-home trial.
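The apical field shift produced by increasing σ can be visualized with a toy one-dimensional spread model. This is purely illustrative (exponential current spread, primary electrode at x = 0, compensating electrode of opposite polarity one contact toward the base; none of these parameters come from the study): the centroid of the positive part of the field moves apically (negative x) as the compensating fraction grows.

```python
import math

def centroid(sigma, lam=1.0, step=0.01):
    """Centroid of the positive part of a toy field: primary electrode at x=0,
    compensating electrode (fraction sigma, opposite polarity) at x=+1 (basal)."""
    xs = [i * step for i in range(-500, 501)]
    f = [math.exp(-abs(x) / lam) - sigma * math.exp(-abs(x - 1.0) / lam)
         for x in xs]
    pos = [(x, v) for x, v in zip(xs, f) if v > 0]   # excitatory region only
    total = sum(v for _, v in pos)
    return sum(x * v for x, v in pos) / total

shifts = [centroid(s) for s in (0.0, 0.5, 0.8)]
print([round(c, 2) for c in shifts])   # increasingly negative (apical) with sigma
```

With σ = 0 the field is symmetric and the centroid sits on the primary electrode; the compensating current removes excitation on the basal side, pulling the centroid apically, in line with the lower pitch percept described above.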

Pubmed PDF Web

Prediction of Individual Cochlear Implant Recipient Speech Perception With the Output Signal to Noise Ratio Metric

Watkins, Greg D.; Swanson, Brett A.; Suaning, Gregg J.

Published 01-09-2020


Objectives: A cochlear implant (CI) implements a variety of sound processing algorithms that seek to improve speech intelligibility. Typically, only a small number of parameter combinations are evaluated with recipients, but the optimal configuration may differ for individuals. The present study evaluates a novel methodology which uses the output signal to noise ratio (OSNR) to predict complete psychometric functions that relate speech recognition to signal to noise ratio for individual CI recipients.Design: Speech scores from sentence-in-noise tests in a “reference” condition were mapped to OSNR and a psychometric function was fitted. The reference variability was defined as the root mean square error between the reference scores and the fitted curve. To predict individual scores in a different condition, OSNRs in that condition were calculated and the corresponding scores were read from the reference psychometric function. In a retrospective experiment, scores were predicted for each condition and subject in three existing data sets of sentence scores. The prediction error was defined as the root mean square error between observed and predicted scores. In data set 1, sentences were mixed with 20-talker babble or speech weighted noise and presented at 65 dB sound pressure level (SPL). An adaptive test procedure was used. Sound processing was advanced combinatorial encoding (ACE, Cochlear Limited) and ACE with ideal binary mask processing, with five different threshold settings. In data set 2, sentences were mixed with speech weighted noise, street-side city noise or cocktail party noise and presented at 65 dB SPL. An adaptive test procedure was used. Sound processing was ACE and ACE with two different noise reduction schemes. In data set 3, sentences were mixed with four-talker babble at two input SNRs and presented at levels of 55–89 dB SPL.
Sound processing utilised three different automatic gain control configurations.Results: For data set 1, the median of individual prediction errors across all subjects, noise types and conditions, was 12% points, slightly better than the reference variability. The OSNR prediction method was inaccurate for the specific condition with a gain threshold of +10 dB. For data set 2, the median of individual prediction errors was 17% points and the reference variability was 11% points. For data set 3, the median prediction error was 9% points and the reference variability was 7% points. A Monte Carlo simulation found that the OSNR prediction method, which used reference scores and OSNR to predict individual scores in other conditions, was significantly more accurate (p < 0.01) than simply using reference scores as predictors.Conclusions: The results supported the hypothesis that the OSNR prediction method could accurately predict individual recipient scores for a range of algorithms and noise types, for all but one condition. The medians of the individual prediction errors for each data set were accurate within 6% points of the reference variability and compared favourably with prediction methodologies in other recent studies. Overall, the novel OSNR-based prediction method shows promise as a tool to assist researchers and clinicians in the development or fitting of CI sound processors.
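The core of the method — fit a psychometric function to reference-condition scores as a function of OSNR, then read predictions off that curve at another condition's OSNRs — can be sketched as follows. The scores and OSNR values are hypothetical, and a plain logistic curve fitted by coarse grid search stands in for the paper's fitting procedure:

```python
import math

def psychometric(osnr_db, midpoint, slope):
    # Percent correct as a logistic function of output SNR (dB)
    return 100.0 / (1.0 + math.exp(-slope * (osnr_db - midpoint)))

def fit_reference(osnrs, scores):
    # Coarse grid search for the best-fitting midpoint and slope
    best = None
    for m10 in range(-50, 51):              # midpoints -5.0 .. 5.0 dB
        for s10 in range(1, 31):            # slopes 0.1 .. 3.0 per dB
            m, s = m10 / 10.0, s10 / 10.0
            err = sum((psychometric(o, m, s) - y) ** 2
                      for o, y in zip(osnrs, scores))
            if best is None or err < best[0]:
                best = (err, m, s)
    return best[1], best[2]

ref_osnr = [-4.0, -2.0, 0.0, 2.0, 4.0]      # OSNRs in the reference condition (hypothetical)
ref_score = [12.0, 30.0, 52.0, 71.0, 88.0]  # reference-condition scores, % (hypothetical)
m, s = fit_reference(ref_osnr, ref_score)

new_osnr = [-1.0, 1.5, 3.0]                 # OSNRs computed for another algorithm (hypothetical)
predicted = [psychometric(o, m, s) for o in new_osnr]
print(round(m, 1), round(s, 1), [round(p, 1) for p in predicted])
```

The prediction error in the study is then simply the root mean square difference between such predicted scores and the scores the recipient actually obtained in the new condition.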

Pubmed PDF Web

Effects of Directional Microphone and Noise Reduction on Subcortical and Cortical Auditory-Evoked Potentials in Older Listeners With Hearing Loss

Slugocki, Christopher; Kuk, Francis; Korhonen, Petri

Published 01-09-2020


Objectives: Understanding how signal processing influences neural activity in the brain with hearing loss is relevant to the design and evaluation of features intended to alleviate speech-in-noise deficits faced by many hearing aid wearers. Here, we examine whether hearing aid processing schemes that are designed to improve speech-in-noise intelligibility (i.e., directional microphone and noise reduction) also improve electrophysiological indices of speech processing in older listeners with hearing loss.Design: The study followed a double-blind within-subjects design. A sample of 19 older adults (8 females; mean age = 73.6 years, range = 56–86 years; 17 experienced hearing aid users) with a moderate to severe sensorineural hearing impairment participated in the experiment. Auditory-evoked potentials associated with processing in cortex (P1-N1-P2) and subcortex (frequency-following response) were measured over the course of two 2-hour visits. Listeners were presented with sequences of the consonant-vowel syllable /da/ in continuous speech-shaped noise at signal to noise ratios (SNRs) of 0, +5, and +10 dB. Speech and noise stimuli were pre-recorded using a Knowles Electronics Manikin for Acoustic Research (KEMAR) head and torso simulator outfitted with hearing aids programmed for each listener’s loss. The study aid programs were set according to 4 conditions: (1) omnidirectional microphone, (2) omnidirectional microphone with noise reduction, (3) directional microphone, and (4) directional microphone with noise reduction. For each hearing aid condition, speech was presented from a loudspeaker located at 1 m directly in front of KEMAR (i.e., 0° in the azimuth) at 75 dB SPL and noise was presented from a matching loudspeaker located at 1 m directly behind KEMAR (i.e., 180° in the azimuth). Recorded stimulus sequences were normalized for speech level across conditions and presented to listeners over electromagnetically shielded ER-2 ear-insert transducers. 
Presentation levels were calibrated to match the output of listeners’ study aids.Results: Cortical components from listeners with hearing loss were enhanced with improving SNR and with use of a directional microphone and noise reduction. On the other hand, subcortical components did not show sensitivity to SNR or microphone mode but did show enhanced encoding of temporal fine structure of speech for conditions where noise reduction was enabled.Conclusions: These results suggest that auditory-evoked potentials may be useful in evaluating the benefit of different noise-mitigating hearing aid features.

Pubmed PDF Web

Long-Term Language Development in Children With Early Simultaneous Bilateral Cochlear Implants

Wie, Ona Bø; Torkildsen, Janne von Koss; Schauber, Stefan; Busch, Tobias; Litovsky, Ruth

Published 01-09-2020


Objectives: This longitudinal study followed the language development of children who received the combination of early (5 to 18 months) and simultaneous bilateral cochlear implants (CIs) throughout the first 6 years after implantation. It examined the trajectories of their language development and identified factors associated with language outcomes.Design: Participants were 21 Norwegian children who received bilateral CIs between the ages of 5 and 18 months and 21 children with normal hearing (NH) who were matched to the children with CIs on age, sex, and maternal education. The language skills of these two groups were compared at 10 time points (3, 6, 9, 12, 18, 24, 36, 48, 60, and 72 months after implantation) using parent reports and standardized measures of general language skills, vocabulary, and grammar. In addition, assessments were made of the effects of age at CI activation, speech recognition abilities, and mothers’ education on language outcomes 6 years after implantation.Results: During the first 4 years after implantation, the gap in general expressive and receptive language abilities between children with CIs and children with NH gradually closed. While significant differences between children with CIs and children with NH were observed at the initial five to six assessments (3 to 36 months after implantation), at 4 years after implantation there were no longer any significant group differences in general language skills and most children with CIs achieved scores within 1 SD of the tests’ normative means. From 2 to 3 years after implantation onward, expressive vocabulary and receptive grammar skills of children with CIs were similar to those of the reference group. However, from 4 years after implantation until the end of the observation period, 6 years after implantation, expressive grammar skills of children with CIs were lower than those of children with NH.
In addition, a gap in receptive vocabulary appeared and grew increasingly larger from 4 to 6 years postimplantation. At the final assessment, the children with CIs had an average receptive vocabulary score around 1 SD below the normative mean. Regression analysis indicated that the children’s language outcomes at 6 years after implantation were related to their speech recognition skills, age at CI activation, and maternal education.Conclusions: In the first 4 years after implantation, the language performance of children with CIs became increasingly similar to that of their NH peers. However, between 4 and 6 years after implantation, there were indications of challenges with certain aspects of language, specifically receptive vocabulary and expressive grammar. Because these challenges first appeared after the 4-year assessment, the findings underline the importance of long-term language intervention to increase the chances of a continued language development comparable to that of NH peers. They also indicate that there is a need for comprehensive longitudinal studies of the language development of children with CIs beyond 4 years after implantation.

Pubmed PDF Web

The Effect of Pulse Polarity on Neural Response of the Electrically Stimulated Cochlear Nerve in Children With Cochlear Nerve Deficiency and Children With Normal-Sized Cochlear Nerves

Xu, Lei; Skidmore, Jeffrey; Luo, Jianfen; Chao, Xiuhua; Wang, Ruijie; Wang, Haibo; He, Shuman

Published 01-09-2020


Objective: This study aimed to (1) investigate the effect of pulse polarity on the neural response of the electrically stimulated cochlear nerve in children with cochlear nerve deficiency (CND) and children with normal-sized cochlear nerves and (2) compare the size of the pulse polarity effect between these two subject groups.

Design: The experimental and control groups included 31 children with CND and 31 children with normal-sized cochlear nerves, respectively. For each study participant, evoked compound action potential (eCAP) input/output (I/O) functions for anodic-leading and cathodic-leading biphasic stimuli were measured at three electrode locations across the electrode array. The dependent variables of interest included the eCAP amplitude measured at the maximum comfortable level of the anodic stimulus, the lowest level that could evoke an eCAP (i.e., the eCAP threshold), the slope of the eCAP I/O function estimated by linear regression, the negative-peak (i.e., N1) latency of the eCAP, and the size of the pulse polarity effect on these eCAP measurements. Generalized linear mixed effect models were used to compare the eCAP amplitude, the eCAP threshold, the slope of the eCAP I/O function, and the N1 latency evoked by the anodic-leading stimulus with those measured for the cathodic-leading stimulus in both groups, and to compare the size of the pulse polarity effect on the eCAP between the two study groups. The one-tailed Spearman correlation test was used to assess the potential correlation between pulse phase duration and the difference in N1 latency measured for the two pulse polarities.

Results: Compared with children who had normal-sized cochlear nerves, children with CND had reduced eCAP amplitudes, elevated eCAP thresholds, flatter eCAP I/O functions, and prolonged N1 latencies. The anodic-leading stimulus led to higher eCAP amplitudes, lower eCAP thresholds, and shorter N1 latencies than the cathodic-leading stimulus in both study groups. Steeper eCAP I/O functions were recorded for the anodic-leading stimulus than for the cathodic-leading stimulus in children with CND, but not in children with normal-sized cochlear nerves. Group differences in the size of the pulse polarity effect on the eCAP amplitude, the eCAP threshold, or the N1 latency were not statistically significant.

Conclusions: Like the normal-sized cochlear nerve, the hypoplastic cochlear nerve is more sensitive to the anodic-leading than to the cathodic-leading stimulus. The results do not provide sufficient evidence that the pulse polarity effect can serve as an indicator of local neural health.
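The slope estimation described in the Design can be sketched in a few lines. This is a minimal illustration of fitting an eCAP I/O function by ordinary least squares; the stimulation levels and eCAP amplitudes below are hypothetical, not the study's data.

```python
def io_slope(levels, amplitudes):
    """Least-squares slope of eCAP amplitude (uV) vs. stimulus level (CU)."""
    n = len(levels)
    mx = sum(levels) / n
    my = sum(amplitudes) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(levels, amplitudes))
    sxx = sum((x - mx) ** 2 for x in levels)
    return sxy / sxx

# Hypothetical I/O functions for anodic- vs. cathodic-leading pulses:
levels = [170, 180, 190, 200, 210]      # stimulus level, current units
anodic = [55, 120, 190, 250, 320]       # eCAP amplitude, uV
cathodic = [30, 75, 125, 170, 220]      # eCAP amplitude, uV

polarity_effect = io_slope(levels, anodic) - io_slope(levels, cathodic)
print(f"anodic slope: {io_slope(levels, anodic):.2f} uV/CU")
print(f"slope difference (polarity effect): {polarity_effect:.2f} uV/CU")
```

On these made-up values the anodic-leading I/O function comes out steeper, mirroring the direction of the effect reported for children with CND.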


Comparison of Pure-Tone Thresholds and Cochlear Microphonics Thresholds in Pediatric Cochlear Implant Patients

Coulthurst, Sarah; Nachman, Alison J.; Murray, Mike T.; Koka, Kanthaiah; Saoji, Aniket A.

Published 01-09-2020


Objectives: In adult cochlear implant patients, conventional audiometry is used to measure postoperative residual hearing, which requires active listening and patient feedback. In pediatric cochlear implant patients, however, such audiological measurements are both challenging and time consuming. Intracochlear electrocochleography (ECOG) offers an objective and time-efficient method to measure frequency-specific cochlear microphonic/difference (CM/DIF) thresholds that closely approximate auditory thresholds in adult cochlear implant patients. The correlation between CM/DIF and behavioral thresholds has not been established in pediatric cochlear implant patients. In the present study, CM/DIF thresholds were compared with audiometric thresholds in pediatric cochlear implant patients with postoperative residual hearing.

Design: Thirteen (11 unilateral and 2 bilateral) pediatric cochlear implant patients (mean age = 9.2 ± 5.1 years) participated in this study. Audiometric thresholds were estimated using conventional, conditioned play, or visual reinforcement audiometry. A warble-tone stimulus was used to measure audiometric thresholds at 125, 250, 500, 1000, and 2000 Hz. ECOG waveforms were elicited using 50-msec acoustic tone bursts. The most apical intracochlear electrode served as the recording electrode, with an extracochlear ground electrode. The ECOG waveforms were analyzed to determine CM/DIF thresholds, which were compared with the patients' audiometric thresholds.

Results: The results show a significant correlation (r = 0.77, p < 0.01) between audiometric and CM/DIF thresholds over a frequency range of 125 to 2000 Hz in pediatric cochlear implant patients. Frequency-specific comparisons revealed correlations of 0.82, 0.74, 0.69, 0.41, and 0.32 between the audiometric and CM/DIF thresholds measured at 125, 250, 500, 1000, and 2000 Hz, respectively. An average difference of 0.4 dB (±14 dB) was measured between the audiometric and CM/DIF thresholds.

Conclusions: Intracochlear ECOG can be used to measure CM/DIF thresholds in pediatric cochlear implant patients with residual hearing in the implanted ear. The CM/DIF thresholds are similar to the audiometric thresholds at lower test frequencies and offer an objective method to monitor residual hearing in difficult-to-test pediatric cochlear implant patients.
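The core of the threshold comparison, a Pearson correlation plus a mean (signed) difference, can be illustrated with a short sketch. The threshold values below are hypothetical, not the study's data.

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

def mean_difference(x, y):
    """Average signed difference (dB); positive means x is higher than y."""
    return sum(a - b for a, b in zip(x, y)) / len(x)

# Hypothetical audiometric vs. CM/DIF thresholds (dB HL) at five frequencies:
audiometric = [45, 50, 60, 75, 85]
cm_dif = [50, 48, 65, 70, 90]

print(round(pearson_r(audiometric, cm_dif), 2))
print(mean_difference(audiometric, cm_dif))
```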


The Merits of Bilateral Application of Bone-Conduction Devices in Children With Bilateral Conductive Hearing Loss

den Besten, Chrisje A.; Vogt, Katharina; Bosman, Arjan J.; Snik, Ad F. M.; Hol, Myrthe K. S.; Agterberg, Martijn J. H.

Published 01-09-2020


Objectives: This study aims to characterize lateralization and localization of sounds in children with bilateral conductive hearing loss (BCHL) when listening with either one or two percutaneous bone conduction devices (BCDs).

Design: Sound lateralization was measured with the minimum audible angle test, in which children were asked to indicate from which of two visible speakers a sound originated. Sound localization was measured with a test in which stimuli were presented from speakers that were not visible to the children: 150-msec broadband noise bursts were presented, with sound level roved over a 20-dB range. Because the speakers were not visible, the localization response was not affected by any visual cue, providing a clear distinction between lateralization and localization of sounds. Ten children with congenital BCHL and one child with acquired BCHL participated.

Results: Both lateralization and sound localization were better with bilateral BCDs than in the unilaterally aided conditions. In the bilateral BCD condition, lateralization was close to normal in nearly all the children. The localization test demonstrated lateralization rather than true sound localization behavior when listening with bilateral BCDs. Furthermore, in the unilaterally aided condition, stimuli presented at different sound levels were mainly perceived at the same location.

Conclusions: This study demonstrates that, in contrast to listening with two BCDs, children had difficulty lateralizing and localizing sounds when listening with just one BCD (i.e., one BCD turned off). Because both lateralization and localization behavior were tested, it could be demonstrated that these children are better able to lateralize than to localize sounds when listening with bilateral BCDs. The present study provides insight into the (suboptimal) sound localization capabilities of children with congenital BCHL in the unilaterally and bilaterally aided conditions. Despite the suboptimal localization results, this study underlines the merits of bilateral application of BCDs in such children.


The Hearing Intervention for the Aging and Cognitive Health Evaluation in Elders Randomized Control Trial: Manualization and Feasibility Study

Sanchez, Victoria A.; Arnold, Michelle L.; Reed, Nicholas S.; Oree, Preyanca H.; Matthews, Courtney R.; Clock Eddins, Ann; Lin, Frank R.; Chisolm, Theresa H.

Published 01-09-2020


Objectives: This work describes the development of a manualized best-practice hearing intervention for older adults participating in the Aging and Cognitive Health Evaluation in Elders (ACHIEVE) randomized controlled clinical trial. Manualization of interventions for clinical trials is critical for assuring intervention fidelity and quality, especially in large multisite studies. The multisite ACHIEVE randomized controlled trial is designed to assess the efficacy of a hearing intervention on rates of cognitive decline in older adults. We describe the development of the manualized hearing intervention through an iterative process that included addressing implementation questions via a feasibility study (ACHIEVE-Feasibility).

Design: Following published recommendations for manualized intervention development, an iterative process was used to define the ACHIEVE-Hearing Intervention elements and create an initial manual. The intervention was then delivered within the ACHIEVE-Feasibility study using a one-group pre-post design appropriate for assessing questions related to implementation. Participants were recruited from the Tampa, Florida area between May 2015 and April 2016. Inclusion criteria were cognitively healthy adults aged 70 to 89 years with symmetrical mild-to-moderately severe sensorineural hearing loss. The ACHIEVE-Feasibility study sought to assess the implementation of the manualized hearing intervention by (1) confirming that expected outcomes were achieved, including improvements in aided speech-in-noise performance and in disease-specific self-report measures; (2) determining whether participants would comply with the intervention, including session attendance and use of hearing aids; and (3) determining whether the intervention sessions could be delivered within a reasonable timeframe.

Results: The initial manualized intervention incorporating the identified best-practice elements was evaluated for feasibility among 21 eligible participants and 9 communication partners. Expected post-intervention outcomes were obtained: speech-in-noise performance improved significantly under the aided condition, and self-report measures showed a significant reduction in self-perceived hearing handicap. Compliance was excellent, with 20 of the 21 participants (95.2%) completing all intervention sessions and 19 (90.4%) returning for the 6-month post-intervention visit. Furthermore, self-reported hearing aid use was >8 hr/day, and average daily hearing aid use from datalogging was 7.8 hr. The intervention was delivered in a reasonable timeframe, with visits ranging from 27 to 85 min. Through an iterative process, the intervention elements were refined and the accompanying manual revised based on the ACHIEVE-Feasibility study activities, results, and informal feedback from clinicians and participants.

Conclusion: The processes described here for developing a manualized intervention provide guidance for future researchers who aim to examine the efficacy of approaches to treating hearing loss in a clinical trial. The manualized ACHIEVE-Hearing Intervention provides a patient-centered yet standardized, step-by-step process for comprehensive audiological assessment, goal setting, and treatment through the use of hearing aids, other hearing assistive technologies, counseling, and education aimed at supporting self-management of hearing loss. The ACHIEVE-Hearing Intervention is feasible in terms of implementation, with verified expected outcomes, good compliance, and delivery within a reasonable timeframe. These processes assure intervention fidelity and quality for the ACHIEVE randomized controlled trial (ClinicalTrials.gov identifier: NCT03243422).


Frequency-to-Place Mismatch: Characterizing Variability and the Influence on Speech Perception Outcomes in Cochlear Implant Recipients

Canfarotta, Michael W.; Dillon, Margaret T.; Buss, Emily; Pillsbury, Harold C.; Brown, Kevin D.; O’Connell, Brendan P.

Published 01-09-2020


Objectives: The spatial position of a cochlear implant (CI) electrode array affects the spectral cues provided to the recipient. Differences in cochlear size and array length lead to substantial variability in angular insertion depth (AID) across and within array types. For CI-alone users, the variability in AID results in varying degrees of frequency-to-place mismatch between the default electric frequency filters and the cochlear place of stimulation. For electric-acoustic stimulation (EAS) users, default electric frequency filters also vary as a function of residual acoustic hearing in the implanted ear. The present study aimed to (1) investigate variability in AID associated with lateral wall arrays, (2) determine the subsequent frequency-to-place mismatch for CI-alone and EAS users mapped with default frequency filters, and (3) examine the relationship between early speech perception for CI-alone users and two aspects of electrode position: frequency-to-place mismatch and the angular separation between neighboring contacts, a metric associated with spectral selectivity at the periphery.

Design: One hundred one adult CI recipients (111 ears) with MED-EL Flex24 (24 mm), Flex28 (28 mm), and FlexSOFT/Standard (31.5 mm) arrays underwent postoperative computed tomography to determine AID. A subsequent comparison was made between AID, predicted spiral ganglion place frequencies, and the default frequency filters for CI-alone (n = 84) and EAS users (n = 27). For CI-alone users with complete insertions who listened with maps fit with the default frequency filters (n = 48), frequency-to-place mismatch was quantified at 1500 Hz, and the angular separation between neighboring contacts was determined for electrodes in the 1 to 2 kHz region. Multiple linear regression was used to examine how frequency-to-place mismatch and angular separation of contacts influence consonant-nucleus-consonant (CNC) scores through 6 months postactivation.

Results: For CI recipients with complete insertions (n = 106, 95.5%), the AID (mean ± standard deviation) of the most apical contact was 428° ± 34.3° for Flex24 (n = 11), 558° ± 65.4° for Flex28 (n = 48), and 636° ± 42.9° for FlexSOFT/Standard (n = 47) arrays. For CI-alone users, default frequency filters aligned closely with the spiral ganglion map for deeply inserted lateral wall arrays. For EAS users, default frequency filters produced a range of mismatches; absolute deviations of ≤6 semitones occurred in only 37% of cases. Participants with shallow insertions and minimal or no residual hearing experienced the greatest mismatch. For CI-alone users, both smaller frequency-to-place mismatch and greater angular separation between contacts were associated with better CNC scores during the initial 6 months of device use.

Conclusions: There is significant variability in frequency-to-place mismatch among CI-alone and EAS users with default frequency filters, even between individuals implanted with the same array. When using default frequency filters, mismatch can be minimized with longer lateral wall arrays and with insertion depths that meet the edge frequency associated with residual hearing for CI-alone and EAS users, respectively. Smaller degrees of frequency-to-place mismatch and decreased peripheral masking due to more widely spaced contacts may independently support better speech perception with longer lateral wall arrays in CI-alone users.
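Frequency-to-place mismatch in semitones is a simple log-ratio of the filter's center frequency and the place's characteristic frequency. The study derived place frequencies from a spiral ganglion map; the sketch below substitutes Greenwood's (1990) organ-of-Corti function as a commonly used stand-in, and the example frequencies are hypothetical.

```python
import math

def greenwood_cf(x):
    """Greenwood (1990) human place-frequency map: characteristic frequency
    (Hz) at fractional distance x from the apex (0 = apex, 1 = base).
    Shown as a stand-in; the study itself used a spiral ganglion map."""
    return 165.4 * (10 ** (2.1 * x) - 0.88)

def mismatch_semitones(filter_cf, place_cf):
    """Signed frequency-to-place mismatch in semitones (positive when the
    electric filter is tuned above the cochlear place frequency)."""
    return 12 * math.log2(filter_cf / place_cf)

# Example: a default filter centered at 1500 Hz stimulating a place whose
# characteristic frequency is 1200 Hz -> a mismatch of about 3.9 semitones.
print(round(mismatch_semitones(1500, 1200), 1))
```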


Effects of Spectral Resolution and Frequency Mismatch on Speech Understanding and Spatial Release From Masking in Simulated Bilateral Cochlear Implants

Xu, Kevin; Willis, Shelby; Gopen, Quinton; Fu, Qian-Jie

Published 01-09-2020


Objectives: Due to interaural frequency mismatch, bilateral cochlear-implant (CI) users may be less able to take advantage of the binaural cues that normal-hearing (NH) listeners use for spatial hearing, such as interaural time differences and interaural level differences. As such, bilateral CI users have difficulty segregating competing speech even when the target and competing talkers are spatially separated. The goal of this study was to evaluate the effects of spectral resolution, tonotopic mismatch (the mismatch between the acoustic center frequency assigned to a CI electrode within an implanted ear and the expected spiral ganglion characteristic frequency), and interaural mismatch (differences in the degree of tonotopic mismatch between the ears) on speech understanding and spatial release from masking (SRM) in the presence of competing talkers in NH subjects listening to bilateral vocoder simulations.

Design: During testing, both target and masker speech were presented as five-word sentences that had the same syntax but were not necessarily meaningful. The sentences were composed of five categories in fixed order (Name, Verb, Number, Color, and Clothes), each with 10 items, such that multiple sentences could be generated by randomly selecting a word from each category. Speech reception thresholds (SRTs) for the target sentence presented in competing speech maskers were measured. The target speech was delivered to both ears, and the two speech maskers were delivered to (1) both ears (diotic masker) or (2) different ears (dichotic masker: one delivered to the left ear and the other to the right ear). Stimuli included unprocessed speech and four 16-channel sine-vocoder simulations with different interaural mismatch (0, 1, and 2 mm). SRM was calculated as the difference in SRT between the diotic and dichotic listening conditions.

Results: With unprocessed speech, SRTs were 0.3 and −18.0 dB for the diotic and dichotic maskers, respectively. For the spectrally degraded speech with mild tonotopic mismatch and no interaural mismatch, SRTs were 5.6 and −2.0 dB for the diotic and dichotic maskers, respectively. When the tonotopic mismatch increased in both ears, SRTs worsened to 8.9 and 2.4 dB for the diotic and dichotic maskers, respectively. When the two ears had different tonotopic mismatch (i.e., there was interaural mismatch), the drop in SRT was much larger for the dichotic than for the diotic masker. The largest SRM was observed with unprocessed speech (18.3 dB). With the CI simulations, SRM was significantly reduced to 7.6 dB even with mild tonotopic mismatch and no interaural mismatch; SRM was further reduced with increasing interaural mismatch.

Conclusions: The results demonstrate that frequency resolution, tonotopic mismatch, and interaural mismatch have differential effects on speech understanding and SRM in simulations of bilateral CIs. Minimizing interaural mismatch may be critical to optimize binaural benefits and improve CI performance for competing speech, a typical listening environment. SRM (the difference in SRTs between diotic and dichotic maskers) may be a useful clinical tool to assess interaural frequency mismatch in bilateral CI users and to evaluate the benefits of optimization methods that minimize interaural mismatch.
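The SRM values in the Results follow directly from the reported SRTs; a minimal sketch of the calculation, using the numbers given in the abstract:

```python
def spatial_release_from_masking(srt_diotic, srt_dichotic):
    """SRM (dB) = SRT with co-located (diotic) maskers minus SRT with
    spatially separated (dichotic) maskers; larger values mean a greater
    benefit from spatial separation."""
    return srt_diotic - srt_dichotic

# SRTs (dB) as reported in the abstract:
srm_unprocessed = spatial_release_from_masking(0.3, -18.0)  # 18.3 dB
srm_vocoded = spatial_release_from_masking(5.6, -2.0)       # 7.6 dB
print(round(srm_unprocessed, 1), round(srm_vocoded, 1))
```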


Perception of Child-Directed Versus Adult-Directed Emotional Speech in Pediatric Cochlear Implant Users

Barrett, Karen Chan; Chatterjee, Monita; Caldwell, Meredith T.; Deroche, Mickael L. D.; Jiradejvong, Patpong; Kulkarni, Aditya M.; Limb, Charles J.

Published 01-09-2020


Objectives: Cochlear implants (CIs) are remarkable in allowing individuals with severe to profound hearing loss to perceive speech. Despite these gains in speech understanding, however, CI users often struggle to perceive elements such as vocal emotion and prosody, because CIs are unable to transmit the spectro-temporal detail needed to decode affective cues. This issue becomes particularly important for children with CIs, yet little is known about their emotional development. In a previous study, pediatric CI users showed deficits in voice emotion recognition with child-directed stimuli featuring exaggerated prosody. However, the large intersubject variability and differential developmental trajectories known in this population prompted us to question the extent to which exaggerated prosody facilitates performance in this task. Thus, the authors revisited the question with both adult-directed and child-directed stimuli.

Design: Vocal emotion recognition was measured using both child-directed (CDS) and adult-directed (ADS) speech conditions. Pediatric CI users aged 7 to 19 years, with no cognitive or visual impairments, who communicated orally with English as their primary language participated in the experiment (n = 27). Stimuli comprised 12 sentences selected from the HINT database. The sentences were spoken by male and female talkers in a CDS or ADS manner, in each of five target emotions (happy, sad, neutral, scared, and angry). The chosen sentences were semantically emotion-neutral. Percent-correct emotion recognition scores were analyzed for each participant in each condition (CDS vs. ADS). Children also completed cognitive tests of nonverbal IQ and receptive vocabulary, while parents completed questionnaires on CI and hearing history. It was predicted that the reduced prosodic variation in the ADS condition would result in lower vocal emotion recognition scores than in the CDS condition. Moreover, it was hypothesized that cognitive factors, perceptual sensitivity to complex pitch changes, and elements of each child's hearing history might predict performance on vocal emotion recognition.

Results: Consistent with our hypothesis, pediatric CI users scored higher on CDS than on ADS speech stimuli, suggesting that speaking with exaggerated prosody, akin to "motherese," may be a viable way to convey emotional content. Significant talker effects were also observed, with higher scores for the female talker in both conditions. Multiple regression analysis showed that nonverbal IQ was a significant predictor of CDS emotion recognition scores, while years of CI use was a significant predictor of ADS scores. Confusion matrix analyses revealed that results depended on the specific emotions: for the CDS condition's female talker, participants had high sensitivity (d' scores) to happy and low sensitivity to neutral sentences, while for the ADS condition, low sensitivity was found for scared sentences.

Conclusions: In general, participants showed better vocal emotion recognition in the CDS condition, which had more variability in pitch and intensity, and thus more exaggerated prosody, than the ADS condition. The results suggest that pediatric CI users struggle with vocal emotion perception in general, and particularly with adult-directed speech. The authors believe these results have broad implications for understanding how CI users perceive emotions, both from an auditory communication standpoint and from a socio-developmental perspective.
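The d' (sensitivity) values behind the confusion-matrix analysis come from the standard signal-detection formula, z(hit rate) minus z(false-alarm rate). A minimal sketch with hypothetical hit and false-alarm rates (not the study's data):

```python
from statistics import NormalDist

def d_prime(hit_rate, false_alarm_rate):
    """Sensitivity index d' = z(hit rate) - z(false-alarm rate).
    Rates must lie strictly between 0 and 1."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(false_alarm_rate)

# Hypothetical rates for one emotion category:
print(round(d_prime(0.85, 0.15), 2))  # symmetric case -> about 2.07
```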


Postural Control While Listening in Younger and Middle-Aged Adults

Helfer, Karen S.; Freyman, Richard L.; van Emmerik, Richard; Banks, Jacob

Published 01-09-2020


Objectives: The motivation for this research is to determine whether a listening-while-balancing task would be sensitive enough to quantify listening effort in middle age. The premise behind this exploratory work is that a decrease in postural control would be demonstrated in challenging acoustic conditions, more so in middle-aged than in younger adults.

Design: A dual-task paradigm was employed, with speech understanding as one task and postural control as the other. For the speech perception task, participants listened to and repeated back sentences in the presence of other sentences or steady-state noise. Targets and maskers were presented in both spatially coincident and spatially separated conditions. The postural control task required participants to stand on a force platform either in normal stance (with feet approximately shoulder-width apart) or in tandem stance (with one foot behind the other). Participants also rated their subjective listening effort at the end of each block of trials.

Results: Postural control was poorer for both groups of participants when the listening task was completed at a more adverse (vs. less adverse) signal-to-noise ratio. When participants were standing normally, postural control in dual-task conditions was negatively associated with degree of high-frequency hearing loss, with individuals who had higher pure-tone thresholds exhibiting poorer balance. Correlation analyses also indicated that reduced speech recognition ability was associated with poorer postural control in both single- and dual-task conditions. Middle-aged participants exhibited larger dual-task costs when the masker was speech than when it was noise. Individuals who reported expending greater effort on the listening task exhibited larger dual-task costs when in normal stance.

Conclusions: Listening under challenging acoustic conditions can have a negative impact on postural control, more so in middle-aged than in younger adults. One explanation for this finding is that the increased effort required to listen successfully in adverse environments leaves fewer resources for maintaining balance, particularly as people age. These results provide preliminary support for using this type of ecologically valid dual-task paradigm to quantify the costs associated with understanding speech in adverse acoustic environments.
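The abstract does not give the exact dual-task cost formula the authors used; one common proportional formulation, shown here purely as an illustration with hypothetical scores, is the relative drop in performance from single-task to dual-task conditions:

```python
def dual_task_cost_pct(single_score, dual_score):
    """Proportional dual-task cost (%): drop in secondary-task performance
    when paired with another task, relative to performing it alone.
    A common formulation, not necessarily the study's exact metric."""
    return 100.0 * (single_score - dual_score) / single_score

# Hypothetical postural-stability scores (higher = better stability):
print(dual_task_cost_pct(10.0, 8.5))
```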


Relevance of Artifact Removal and Number of Stimuli for Video Head Impulse Test Examination

Trinidad-Ruiz, Gabriel; Rey-Martinez, Jorge; Matiño-Soler, Eusebi; Batuecas-Caletrio, Angel; Martin-Sanz, Eduardo; Perez-Fernandez, Nicolas

Published 01-09-2020


Objective: To evaluate the effect of artifacts on the impulse and response recordings obtained with the video head impulse test (VHIT) and to determine how many stimuli are necessary to obtain acceptably efficient measurements.

Methods: One hundred fifty patients were examined using VHIT, and their recordings were searched for artifacts. We compared several variations of the dataset: the first used only samples without artifacts, the second used all samples (with and without artifacts), and the rest used only samples with each type of artifact. We calculated the relative efficiency (RE) of evaluating an increasingly large number of samples (3 to 19 per side) compared with the complete sample (20 impulses per side).

Results: Overshoot was associated with significantly higher speed (p = 0.005), longer duration (p < 0.001), and lower amplitude of the impulses (p = 0.002), with consequently higher saccade latency (p = 0.035) and lower saccade amplitude (p = 0.025). Loss of track was associated with lower gain (p = 0.035). Blink was associated with a higher number of saccades (p < 0.001), and wrong way was associated with lower saccade latency (p = 0.012). The coefficient of quartile deviation escalated as the number of artifacts of any type rose, indicating an increment in variability. Overshoot increased the probability of an impulse lying in the outlier range for gain and peak speed; blink did so for the number of saccades, and wrong way for saccade amplitude and speed. RE reached a tolerable level of 1.1 at 7 to 10 impulses for all measurements except the PR score.

Conclusions: Our results suggest the necessity of removing artifacts after collecting VHIT samples to improve the accuracy and precision of results. Ten impulses are sufficient to achieve acceptable RE for all measurements except the PR score.
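The coefficient of quartile deviation used to track variability is straightforward to compute, (Q3 − Q1) / (Q3 + Q1); a minimal sketch with hypothetical VOR gain values (not the study's data):

```python
from statistics import quantiles

def coeff_quartile_deviation(values):
    """Coefficient of quartile deviation, (Q3 - Q1) / (Q3 + Q1):
    a robust, scale-free measure of spread."""
    q1, _, q3 = quantiles(values, n=4, method="inclusive")
    return (q3 - q1) / (q3 + q1)

# Hypothetical VOR gain values from one test session:
gains = [0.92, 0.95, 0.98, 1.00, 1.02, 1.05, 1.10]
print(round(coeff_quartile_deviation(gains), 3))
```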


“Aural Patching” After Bilateral Cochlear Implantation Is Challenging for Children With Prior Long-Term Unilateral Implant Experience

Abbasalipour, Parvaneh; Papsin, Blake C.; Gordon, Karen A.

Published 01-09-2020


Objectives: To assess the use of "aural patching" as a strategy to reduce the known persistence of aural preference in children who receive bilateral cochlear implants (CIs) with long inter-implant delays, by removing the first device for set periods to increase stimulation of the second implanted side.

Design: Children/adolescents who received a second CI at 12.8 ± 3.5 years of age, after 9.4 ± 2.9 years of unilateral CI use, were asked to remove their first CI for regular periods daily (aural patching). Their compliance was monitored, and asymmetries in speech perception were measured at the end of the study period.

Results: Partial adherence to aural patching over the first few months of bilateral hearing use declined markedly with time. As expected, the group demonstrated asymmetries in speech perception that were not significantly affected by the limited aural patching.

Conclusions: The aural patching protocol was a challenge to maintain for most children and families studied, reflecting both the expected aural preference for the first implanted ear and the difficulty of reversing it.


Age Affects Speech Understanding and Multitask Costs

Devesse, Annelies; Wouters, Jan; van Wieringen, Astrid

Published 01-09-2020


Objectives: We examined the effect of age on speech understanding and multitask costs in the ecologically relevant "Audiovisual True-to-Life Assessment of Auditory Rehabilitation" (AVATAR) paradigm.

Design: Twenty-nine normal-hearing middle-aged adults completed AVATAR, which combines an auditory-visual speech-in-noise task with three secondary tasks on auditory localization or visual short-term memory in different dual-, triple-, and quadruple-task combinations. Performance decrements on the secondary tasks were taken to reflect the cognitive resources allocated during listening. Self-reported hearing difficulties were assessed via questionnaire. Results were compared with scores of 35 young normal-hearing adults.

Results: Middle-aged adults performed consistently worse than young adults on speech understanding and, in the triple- and quadruple-task combinations only, on secondary task performance. Furthermore, the middle-aged adults reported higher levels of daily listening concentration and more difficulties with speech understanding.

Conclusions: This study demonstrated the adverse effect of age on speech-in-noise understanding and on the amount of cognitive resources allocated during challenging listening situations as realized in AVATAR.


Copyright © KNO-T, 2020 | R/Abma