Ear and Hearing 2022-07-01

Hearing Assessment and Rehabilitation for People Living With Dementia

Dawes, Piers; Littlejohn, Jenna; Bott, Anthea; Brennan, Siobhan; Burrow, Simon; Hopper, Tammy; Scanlan, Emma

Publication date 01-07-2022


Hearing impairment commonly co-occurs with dementia. Audiologists, therefore, need to be prepared to address the specific needs of people living with dementia (PwD). PwD have needs in terms of dementia-friendly clinical settings, assessments, and rehabilitation strategies tailored to support individual requirements that depend on social context, personality, background, and health-related factors, as well as audiometric hearing loss and experience with hearing assistance. Audiologists typically receive limited specialist training in assisting PwD and professional guidance for audiologists is scarce. The aim of this review was to outline best practice recommendations for the assessment and rehabilitation of hearing impairment for PwD with reference to the current evidence base. These recommendations, written by audiology, psychology, speech-language, and dementia nursing professionals, also highlight areas of research need. The review is aimed at hearing care professionals and includes practical recommendations for adapting audiological procedures and processes for the needs of PwD.

Evaluation of the I-PLAN Intervention to Promote Hearing Aid Use in New Adult Users: a Randomized Controlled Trial

Ismail, Afzarini H.; Armitage, Christopher J.; Munro, Kevin J.; Marsden, Antonia; Dawes, Piers D.

Publication date 01-07-2022


Objective: Provision of information is already part of standard care and may not be sufficient to promote hearing aid use. The I-PLAN is a behavior change intervention that is designed to promote hearing aid use in adults. It consists of a prompt, an action plan, and provision of information. The objective was to test the effectiveness of the I-PLAN prompt and plan components in promoting hearing aid use and benefit.
Hypotheses were: there would be greater hearing aid use, benefit, self-regulation, and hearing aid use habit among participants who received the prompt or plan component, compared with no prompt or no plan component, and the effect would be the greatest in participants who received both prompt and plan; and self-regulation and habit would mediate the effect of prompt and/or plan components on hearing aid use and benefit.
Design: A 2 × 2 factorial randomized controlled trial design. Two hundred forty new adult patients (60 in each group) were randomized to: information (info) only; info + prompt; info + plan; or info + prompt + plan. All participants received treatment as usual in addition to I-PLAN components, which were provided in a sealed envelope at the end of the hearing aid fitting consultation. Participants in the prompt group were instructed to use their hearing aid box as a physical prompt to remind them to use the device. Participants in the plan group were instructed to write an action plan to encourage them to turn their intentions into action. Participants, audiologists, and researchers were blinded to group allocation. The primary outcome was self-reported proportion of time hearing aids were used in situations where they had listening difficulties. Secondary outcomes were hearing aid use derived from data logging, self-reported hearing aid benefit, self-reported self-regulation, and habit. Outcomes were measured at 6 weeks post-fitting.
Results: Contrary to predictions, participants who received the prompt component reported using their hearing aids less than participants without the prompt (p = 0.03; d = 0.24). The mean proportion of time hearing aids were used was 73.4% in the prompt group compared with 79.9% in the no-prompt group. Participants who received the plan component reported using their hearing aids more frequently than those who did not receive the plan (mean 81.0% vs 71.8% of the time; p = 0.01; d = 0.34). Receiving both prompt and plan components did not change the self-reported proportion of time hearing aids were used, but data-logged use was significantly reduced. The prompt reduced self-regulation of hearing aid use compared with no prompt (p = 0.04; d = 0.28), while the plan promoted stronger hearing aid use habits than no plan (p = 0.02; d = 0.30).
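The group differences above are reported as Cohen's d effect sizes. As a rough illustration of how such a d is computed from two group means (the abstract reports means but not standard deviations or exact per-analysis group sizes, so the SDs and ns below are hypothetical):

```python
import math

def cohens_d(mean_a, mean_b, sd_a, sd_b, n_a, n_b):
    """Cohen's d: mean difference divided by the pooled standard deviation."""
    pooled_var = ((n_a - 1) * sd_a ** 2 + (n_b - 1) * sd_b ** 2) / (n_a + n_b - 2)
    return (mean_a - mean_b) / math.sqrt(pooled_var)

# Illustrative numbers only: plan vs no-plan means from the abstract,
# with hypothetical SDs (27%) and group sizes (120 per arm).
d = cohens_d(81.0, 71.8, 27.0, 27.0, 120, 120)
```

With an assumed SD near 27 percentage points, the 9.2-point mean difference yields a d in the small-to-medium range, consistent with the magnitudes reported.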
Conclusions: Audiologists should consider using action plans to promote hearing aid use. Despite the decrease in hearing aid use when using the hearing aid box as a physical prompt, hearing aid use was still high (≈70% of the time). The hearing aid box may have slightly reduced hearing aid use by undermining self-regulation. Participants may have delegated responsibility for hearing aid use to the prompt. Subsequent studies should evaluate different prompts and test the long-term benefit of the plan on hearing aid use via habit formation.

The Interrelationship of Tinnitus and Hearing Loss Secondary to Age, Noise Exposure, and Traumatic Brain Injury

Clifford, Royce Ellen; Ryan, Allen F.; on behalf of the VA Million Veteran Program

Publication date 01-07-2022


Objective: Tinnitus has been the No. 1 disability at the Veterans Administration for the last 15 years, yet its interaction with hearing loss secondary to etiologies such as age, noise trauma, and traumatic brain injuries remains poorly characterized. Our objective was to analyze hearing loss and tinnitus, including audiogram data, of the Million Veteran Program within the context of military exposures in an aging population.
Design: Health records, questionnaires, audiograms, and military data were aggregated for 758,005 Veteran participants in the Million Veteran Program 2011 to 2020, with relative risks (RR) calculated for ancestries, sex, hearing loss and military exposures such as combat, blast, and military era served. A multivariate model with significant demographic measures and exposures was then analyzed. Next, audiogram data stratified by sex were compared for those with and without tinnitus by two methods: first, mean thresholds at standard frequencies were compared to thresholds adjusted per ISO 7029:2000E age and sex formulae. Second, levels for those ≤40 years of age were compared with those 41 and older. Finally, a proportional hazards model was examined to ascertain the timing between the onset of tinnitus and hearing loss, calculated separately for electronic health record diagnoses (ICD) and self-report.
Results: Tinnitus was either self-reported, diagnosed, or both in 37.5% (95% CI, 37.4 to 37.6), mean age 61.5 (95% CI, 61.4 to 61.5), range 18 to 112 years. Those with hearing loss were 4.15 times (95% CI, 4.12 to 4.15) as likely to have tinnitus. Americans of African descent were less likely to manifest tinnitus (RR 0.61, 95% CI, 0.60 to 0.61), as were women (RR 0.65, 95% CI, 0.64 to 0.65). A multivariate model indicated a higher RR of 1.73 for traumatic brain injury (95% CI, 1.71 to 1.73) and daily combat noise exposure (1.17, 95% CI, 1.14 to 1.17) than age (0.998, 95% CI, 0.997 to 0.998). Subjects ≤40 years of age had small but significantly elevated hearing thresholds through all standard frequencies compared to Veterans without tinnitus, and the effect of tinnitus on hearing thresholds diminished with age. In the hazard model, those >40 with new onset of tinnitus were at risk for hearing loss sooner and with greater incidence than those who were younger. The rate of hearing loss following tinnitus approached 100%. In contrast, only approximately 50% of those who initially self-reported hearing loss were at risk for later tinnitus; in the ICD comparison, however, those with an ICD diagnosis of hearing loss were more likely to sustain a subsequent ICD diagnosis of tinnitus.
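The relative risks (RR) quoted above are ratios of event proportions in exposed versus unexposed groups. A minimal sketch of that computation, with a Wald-type confidence interval on the log scale; the counts below are entirely hypothetical, not the study's data:

```python
import math

def relative_risk(exposed_events, exposed_total,
                  control_events, control_total, z=1.96):
    """Relative risk with an approximate 95% CI (Wald interval on log RR)."""
    p_exposed = exposed_events / exposed_total
    p_control = control_events / control_total
    rr = p_exposed / p_control
    # Standard error of log(RR) from the four cell counts.
    se_log = math.sqrt(1 / exposed_events - 1 / exposed_total
                       + 1 / control_events - 1 / control_total)
    lo = math.exp(math.log(rr) - z * se_log)
    hi = math.exp(math.log(rr) + z * se_log)
    return rr, lo, hi

# Hypothetical counts: 800/1000 with tinnitus among those with hearing loss,
# 400/2000 among those without.
rr, lo, hi = relative_risk(800, 1000, 400, 2000)
```

The very narrow CIs in the abstract (e.g., 4.12 to 4.15) reflect the enormous sample size: the SE of log(RR) shrinks with the cell counts.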
Conclusions: Evidence suggests that the occurrence of tinnitus in the military is more closely related to environmental exposures than to aging. The finding that tinnitus affects hearing frequencies across the audiogram spectrum suggests an acoustic injury independent of tonotopicity. Particularly for males >40, tinnitus may be a harbinger of audiologic damage predictive of later hearing loss.

Age and Hearing Ability Influence Selective Attention During Childhood

Ward, Kristina M.; Grieco-Calub, Tina M.

Publication date 01-07-2022


Objectives: The purpose of the present study was to determine whether age and hearing ability influence selective attention during childhood. Specifically, we hypothesized that immaturity and disrupted auditory experience impede selective attention during childhood.
Design: Seventy-seven school-age children (5 to 12 years of age) participated in this study: 61 children with normal hearing and 16 children with bilateral hearing loss who use hearing aids and/or cochlear implants. Children performed selective attention-based behavioral change detection tasks comprising target and distractor streams in the auditory and visual modalities. In the auditory modality, children were presented with two streams of single-syllable words spoken by a male and female talker. In the visual modality, children were presented with two streams of grayscale images. In each task, children were instructed to selectively attend to the target stream, inhibit attention to the distractor stream, and press a key as quickly as possible when they detected a frequency (auditory modality) or color (visual modality) deviant stimulus in the target, but not distractor, stream. Performance on the auditory and visual change detection tasks was quantified by response sensitivity, which reflects children’s ability to selectively attend to deviants in the target stream and inhibit attention to those in the distractor stream. Children also completed a standardized measure of attention and inhibitory control.
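Response sensitivity in change detection tasks is commonly quantified with the signal-detection index d′ (z-transformed hit rate minus z-transformed false-alarm rate). The abstract does not state the exact formula used, so the sketch below is one common variant, with a log-linear correction for rates of exactly 0 or 1 and hypothetical trial counts:

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity index d' = z(hit rate) - z(false-alarm rate).
    Rates are smoothed (log-linear correction) so z() stays finite."""
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(fa_rate)

# Hypothetical trial counts for one child: 20 target deviants, 20 distractor deviants.
sensitivity = d_prime(hits=18, misses=2, false_alarms=4, correct_rejections=16)
```

A child who responds equally often to target and distractor deviants gets d′ near zero; selectively responding to target deviants drives d′ upward.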
Results: Younger children and children with hearing loss demonstrated lower response sensitivity, and therefore poorer selective attention, than older children and children with normal hearing, respectively. The effect of hearing ability on selective attention was observed across the auditory and visual modalities, although the extent of this group difference was greater in the auditory modality than the visual modality due to differences in children’s response patterns. Additionally, children’s performance on a standardized measure of attention and inhibitory control related to their performance during the auditory and visual change detection tasks.
Conclusions: Overall, the findings from the present study suggest that age and hearing ability influence children’s ability to selectively attend to a target stream in both the auditory and visual modalities. The observed differences in response patterns across modalities, however, reveal a complex interplay between hearing ability, task modality, and selective attention during childhood. While the effect of age on selective attention is expected to reflect the immaturity of cognitive and linguistic processes, the effect of hearing ability may reflect altered development of selective attention due to disrupted auditory experience early in life and/or a differential allocation of attentional resources to meet task demands.

Reverberation Degrades Pitch Perception but Not Mandarin Tone and Vowel Recognition of Cochlear Implant Users

Xu, Lei; Luo, Jianfen; Xie, Dianzhao; Chao, Xiuhua; Wang, Ruijie; Zahorik, Pavel; Luo, Xin

Publication date 01-07-2022


Objectives: The primary goal of this study was to investigate the effects of reverberation on Mandarin tone and vowel recognition of cochlear implant (CI) users and normal-hearing (NH) listeners. To understand the performance of Mandarin tone recognition, this study also measured participants’ pitch perception and the availability of temporal envelope cues in reverberation.
Design: Fifteen CI users and nine NH listeners, all Mandarin speakers, were asked to recognize Mandarin single vowels produced in four lexical tones and rank harmonic complex tones in pitch with different reverberation times (RTs) from 0 to 1 second. Virtual acoustic techniques were used to simulate rooms with different degrees of reverberation. Vowel duration and correlation between amplitude envelope and fundamental frequency (F0) contour were analyzed for different tones as a function of the RT.
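The amplitude-envelope versus F0-contour analysis described above amounts to correlating two frame-by-frame sequences extracted from the same vowel token. A minimal sketch using the Pearson correlation; the toy frames below are illustrative, not the study's measurements:

```python
def pearson_r(xs, ys):
    """Pearson correlation between two equal-length sequences."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Toy frames for a falling tone: the amplitude envelope tracks the F0 contour.
f0 = [220, 210, 195, 180, 160]        # Hz per analysis frame (hypothetical)
env = [0.9, 0.85, 0.7, 0.6, 0.4]      # normalized amplitude (hypothetical)
r = pearson_r(env, f0)
```

A high positive r means the envelope carries usable F0 information; late reverberant energy that flattens or reshapes the envelope drives this correlation down, which is the degradation the abstract reports for all tones except the falling Tone 4.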
Results: Vowel durations of different tones significantly increased with longer RTs. Amplitude-F0 correlation remained similar for the falling Tone 4 but greatly decreased for the other tones in reverberation. NH listeners had robust pitch-ranking, tone recognition, and vowel recognition performance as the RT increased. Reverberation significantly degraded CI users’ pitch-ranking thresholds but did not significantly affect the overall scores of tone and vowel recognition with CIs. Detailed analyses of tone confusion matrices showed that CI users reduced the flat Tone-1 responses but increased the falling Tone-4 responses in reverberation, possibly due to the falling amplitude envelope of late reflections after the original vowel segment. CI users’ tone recognition scores were not correlated with their pitch-ranking thresholds.
Conclusions: NH listeners can reliably recognize Mandarin tones in reverberation using salient pitch cues from spectral and temporal fine structures. However, CI users have poorer pitch perception using F0-related amplitude modulations that are reduced in reverberation. Reverberation distorts speech amplitude envelopes, which affect the distribution of tone responses but not the accuracy of tone recognition with CIs. Recognition of vowels with stationary formant trajectories is not affected by reverberation for both NH listeners and CI users, regardless of the available spectral resolution. Future studies should test how the relatively stable vowel and tone recognition may contribute to sentence recognition in reverberation of Mandarin-speaking CI users.

Development and Evaluation of a Language-Independent Test of Auditory Discrimination for Referrals for Cochlear Implant Candidacy Assessment

Ching, Teresa Y.C.; Dillon, Harvey; Hou, Sanna; Seeto, Mark; Sodan, Ana; Chong-White, Nicky

Publication date 01-07-2022


Objectives: The purpose of this study was to (1) develop a Language-independent Test of Auditory Discrimination (LIT-AD) between speech sounds so that people with hearing loss who derive limited speech perception benefits from hearing aids (HAs) may be identified for consideration of cochlear implantation and (2) examine the relationship between the scores for the new discrimination test and those of a standard sentence test for adults wearing either HAs or cochlear implants (CIs).
Design: The test measures the ability of the listener to correctly discriminate pairs of nonsense syllables, presented as sequential triplets in an odd-one-out format, implemented as a game-based software tool for self-administration using a tablet computer. Stage 1 included first a review of phonemic inventories in the 40 most common languages in the world to select the consonants and vowels. Second, discrimination testing of 50 users of CIs at several signal to noise ratios (SNRs) was carried out to generate psychometric functions. These were used to calculate the corrections in SNR for each consonant-pair and vowel combination required to equalize difficulty across items. Third, all items were individually equalized in difficulty and the overall difficulty set. Stage 2 involved the validation of the LIT-AD in English-speaking listeners by comparing discrimination scores with performance in a standard sentence test. Forty-one users of HAs and 40 users of CIs were assessed. Correlation analyses were conducted to examine test–retest reliability and the relationship between performance in the two tests. Multiple regression analyses were used to examine the relationship between demographic characteristics and performance in the LIT-AD. The scores of the CI users were used to estimate the probability of superior performance with CIs for a non-CI user having a given LIT-AD score and duration of hearing loss.
Results: The LIT-AD comprises 81 pairs of vowel–consonant–vowel syllables that were equalized in difficulty to discriminate. The test can be self-administered on a tablet computer, and it takes about 10 min to complete. The software automatically scores the responses and gives an overall score and a list of confusable items as output. There was good test–retest reliability. On average, higher LIT-AD discrimination scores were associated with better sentence perception for users of HAs (r = −0.54, p < 0.001) and users of CIs (r = −0.73, p < 0.001). The probability of superior performance with CIs for a certain LIT-AD score was estimated, after allowing for the effect of duration of hearing loss.
Conclusions: The LIT-AD could increase access to CIs by screening for those who obtain limited benefits from HAs to facilitate timely referrals for CI candidacy evaluation. The test results can be used to provide patients and professionals with practical information about the probability of potential benefits for speech perception from cochlear implantation. The test will need to be evaluated for speakers of languages other than English to facilitate adoption in different countries.

Predictive Sentence Context Reduces Listening Effort in Older Adults With and Without Hearing Loss and With High and Low Working Memory Capacity

Hunter, Cynthia R.; Humes, Larry E.

Publication date 01-07-2022


Objectives: Listening effort is needed to understand speech that is degraded by hearing loss, a noisy environment, or both. This in turn reduces cognitive spare capacity, the amount of cognitive resources available for allocation to concurrent tasks. Predictive sentence context enables older listeners to perceive speech more accurately, but how does contextual information affect older adults’ listening effort? The current study examines the impacts of sentence context and cognitive (memory) load on sequential dual-task behavioral performance in older adults. To assess whether effects of context and memory load differ as a function of older listeners’ hearing status, baseline working memory capacity, or both, effects were compared across separate groups of participants with and without hearing loss and with high and low working memory capacity.
Design: Participants were older adults (age 60–84 years; n = 63) who passed a screen for cognitive impairment. A median split classified participants into groups with high and low working memory capacity. On each trial, participants listened to spoken sentences in noise and reported sentence-final words that were either predictable or unpredictable based on sentence context, and also recalled short (low-load) or long (high-load) sequences of digits that were presented visually before each spoken sentence. Speech intelligibility was quantified as word identification accuracy, and measures of listening effort included digit recall accuracy, and response time to words and digits. Correlations of context benefit in each dependent measure with working memory and vocabulary were also examined.
Results: Across all participant groups, accuracy and response time for both word identification and digit recall were facilitated by predictive context, indicating that in addition to an improvement in intelligibility, listening effort was also reduced when sentence-final words were predictable. Effects of predictability on all listening effort measures were observed whether or not trials with an incorrect word identification response were excluded, indicating that the effects of predictability on listening effort did not depend on speech intelligibility. In addition, although cognitive load did not affect word identification accuracy, response time for word identification and digit recall, as well as accuracy for digit recall, were impaired under the high-load condition, indicating that cognitive load reduced the amount of cognitive resources available for speech processing. Context benefit in speech intelligibility was positively correlated with vocabulary. However, context benefit was not related to working memory capacity.
Conclusions: Predictive sentence context reduces listening effort in cognitively healthy older adults resulting in greater cognitive spare capacity available for other mental tasks, irrespective of the presence or absence of hearing loss and baseline working memory capacity.

Parameter-Specific Morphing Reveals Contributions of Timbre to the Perception of Vocal Emotions in Cochlear Implant Users

von Eiff, Celina I.; Skuk, Verena G.; Zäske, Romi; Nussbaum, Christine; Frühholz, Sascha; Feuer, Ute; Guntinas-Lichius, Orlando; Schweinberger, Stefan R.

Publication date 01-07-2022


Objectives: Research on cochlear implants (CIs) has focused on speech comprehension, with little research on perception of vocal emotions. We compared emotion perception in CI users and normal-hearing (NH) individuals, using parameter-specific voice morphing.
Design: Twenty-five CI users and 25 NH individuals (matched for age and gender) performed fearful-angry discriminations on bisyllabic pseudoword stimuli from morph continua across all acoustic parameters (Full), or across selected parameters (F0, Timbre, or Time information), with other parameters set to a noninformative intermediate level.
Results: Unsurprisingly, CI users as a group showed lower performance in vocal emotion perception overall. Importantly, while NH individuals used timbre and fundamental frequency (F0) information to equivalent degrees, CI users were far more efficient in using timbre (compared to F0) information for this task. Thus, under the conditions of this task, CIs were inefficient in conveying emotion based on F0 alone. There was enormous variability between CI users, with low performers responding close to guessing level. Echoing previous research, we found that better vocal emotion perception was associated with better quality of life ratings.
Conclusions: Some CI users can utilize timbre cues remarkably well when perceiving vocal emotions.

Apical Reference Stimulation: A Possible Solution to Facial Nerve Stimulation

van der Westhuizen, Jacques; Hanekom, Tania; Hanekom, Johan J.

Publication date 01-07-2022


Objectives: Postimplantation facial nerve stimulation is a common side effect of intracochlear electrical stimulation. Facial nerve stimulation occurs when electric current intended to stimulate the auditory nerve spreads beyond the cochlea and excites the nearby facial nerve, causing involuntary facial muscle contractions. Facial nerve stimulation can often be resolved through adjustments in speech processor fitting but, in some instances, these measures exhibit limited benefit or may have a detrimental effect on speech perception. In this study, the apical reference stimulation mode was investigated as a potential intervention for facial nerve stimulation. Apical reference stimulation is a bipolar stimulation strategy in which the most apical electrode is used as the reference electrode for stimulation on all the other intracochlear electrodes.
Design: A person-specific model of the human cochlea, facial nerve and electrode array, coupled with a neural model, was used to predict excitation of auditory and facial nerve fibers. These predictions were used to evaluate the effectiveness in reducing facial nerve stimulation using apical reference stimulation. Predictions were confirmed in psychoacoustic tests by determining auditory comfort and threshold levels for the apical reference stimulation mode while capturing electromyography data in two participants.
Results: Models predicted a favorable outcome for apical reference stimulation, as facial nerve fiber thresholds were higher and auditory thresholds were lower, in direct comparison to conventional monopolar stimulation. Psychophysical tests also illustrated decreased auditory thresholds and increased dynamic range during apical reference stimulation. Furthermore, apical reference stimulation resulted in lower electromyography energy levels, compared to conventional monopolar stimulation, which suggests a reduction in facial nerve stimulation. Subjective feedback corroborated that apical reference stimulation alleviated facial nerve stimulation.
Conclusion: Apical reference stimulation may be a viable strategy to alleviate facial nerve stimulation considering the improvements in dynamic range and auditory thresholds, complemented with a reduction in facial nerve stimulation symptoms.

Hearing Features and Cochlear Implantation Outcomes in Patients With Pathogenic MYO15A Variants: a Multicenter Observational Study

Chen, Pey-Yu; Tsai, Cheng-Yu; Wu, Jiunn-Liang; Li, Yi-Lu; Wu, Che-Ming; Chen, Kuang-Chao; Hwang, Chung-Feng; Wu, Hung-Pin; Lin, Hung-Ching; Cheng, Yen-Fu; Lo, Ming-Yu; Liu, Tien-Chen; Yang, Ting-Hua; Chen, Pei-Lung; Hsu, Chuan-Jen; Wu, Chen-Chi

Publication date 01-07-2022


Objectives: Recessive variants in the MYO15A gene constitute an important cause of sensorineural hearing impairment (SNHI). However, the clinical features of MYO15A-related SNHI have not been systematically investigated. This study aimed to delineate the hearing features and outcomes in patients with pathogenic MYO15A variants.
Design: This study recruited 40 patients with biallelic MYO15A variants from 31 unrelated families. The patients were grouped based on the presence of N-terminal domain variants (N variants). The longitudinal audiological data and, for those undergoing cochlear implantation, the auditory and speech performance with cochlear implants were ascertained and compared between patients with different genotypes.
Results: At the first audiometric examination, 32 patients (80.0%) presented with severe to profound SNHI. Patients with at least one allele of the N variant exhibited significantly better hearing levels than those with biallelic non-N variants (78.2 ± 23.9 dB HL and 94.7 ± 22.8 dB HL, respectively) (p = 0.033). Progressive SNHI was observed in 82.4% of patients with non-profound SNHI, in whom the average progression rate of hearing loss was 6.3 ± 4.8 dB HL/year irrespective of the genotypes. Most of the 25 patients who underwent cochlear implantation exhibited favorable auditory and speech performances post-implantation.
Conclusions: The hearing features of patients with biallelic pathogenic MYO15A variants are characterized by severe to profound SNHI, rapid hearing progression, and favorable outcomes with cochlear implants. Periodic auditory monitoring is warranted for these patients to enable early intervention.

Threshold Equalizing Noise Test Reveals Suprathreshold Loss of Hearing Function, Even in the “Normal” Audiogram Range

Stone, Michael A.; Perugia, Emanuele; Bakay, Warren; Lough, Melanie; Whiston, Helen; Plack, Christopher J.

Publication date 01-07-2022


Objectives: The threshold equalizing noise (TEN(HL)) is a clinically administered test to detect cochlear “dead regions” (i.e., regions of loss of inner hair cell (IHC) connectivity), using a “pass/fail” criterion based on the degree of elevation of a masked threshold in a tone-detection task. With sensorineural hearing loss, some elevation of the masked threshold is commonly observed but usually insufficient to create a “fail” diagnosis. The experiment reported here investigated whether the gray area between pass and fail contained information that correlated with factors such as age or cumulative high-level noise exposure (>100 dBA sound pressure levels), possibly indicative of damage to cochlear structures other than the more commonly implicated outer hair cells.
Design: One hundred and twelve participants (71 female) who underwent audiometric screening for a sensorineural hearing loss, classified as either normal or mild, were recruited. Their age range was 32 to 74 years. They were administered the TEN test at four frequencies, 0.75, 1, 3, and 4 kHz, and at two sensation levels, 12 and 24 dB above their pure-tone absolute threshold at each frequency. The test frequencies were chosen to lie either distinctly away from, or within, the 2 to 6 kHz region where noise-induced hearing loss is first clinically observed as a notch in the audiogram. Cumulative noise exposure was assessed by the Noise Exposure Structured Interview (NESI). Elements of the NESI also permitted participant stratification by music experience.
Results: Across all frequencies and testing levels, a strong positive correlation was observed between elevation of TEN threshold and absolute threshold. These correlations were little-changed even after noise exposure and music experience were factored out. The correlations were observed even within the range of “normal” hearing (absolute thresholds ≤15 dB HL).
Conclusions: Using a clinical test, sensorineural hearing deficits were observable even within the range of clinically “normal” hearing. Results from the TEN test residing between “pass” and “fail” are dominated by processes not related to IHCs. The TEN test for IHC-related function should therefore only be considered for its originally designed function, to generate a binary decision, either pass or fail.

The Resting State Central Auditory Network: a Potential Marker of HIV-Related Central Nervous System Alterations

Zhan, Yi; Yu, Qiurong; Cai, Dan-Chao; Ford, James C.; Shi, Xiudong; Fellows, Abigail M.; Clavier, Odile H.; Soli, Sigfrid D.; Fan, Mingxia; Lu, Hongzhou; Zhang, Zhiyong; Buckey, Jay C.; Shi, Yuxin

Publication date 01-07-2022


Objective: HIV positive (HIV+) individuals with otherwise normal hearing ability show central auditory processing deficits as evidenced by worse performance in speech-in-noise perception compared with HIV negative (HIV−) controls. HIV infection and treatment are also associated with lower neurocognitive screening test scores, suggesting underlying central nervous system damage. To determine how central auditory processing deficits in HIV+ individuals relate to brain alterations in the cortex involved with auditory processing, we compared auditory network (AN) functional connectivity between HIV+ adults with or without speech-in-noise perception difficulties and age-matched HIV− controls using resting-state fMRI.
Design: Based on the speech recognition threshold of the hearing-in-noise test, twenty-seven HIV+ individuals were divided into a group with speech-in-noise perception abnormalities (HIV+SPabnl, 38.2 ± 6.8 years; 11 males and 2 females) and one without (HIV+SPnl, 34.4 ± 8.8 years; 14 males). An HIV− group with normal speech-in-noise perception (HIV−, 31.3 ± 5.2 years; 9 males and 3 females) was also enrolled. All of these younger and middle-aged adults had normal peripheral hearing determined by audiometry. Participants were studied using resting-state fMRI. Independent component analysis was applied to identify the AN. Group differences in the AN were identified using statistical parametric mapping.
Results: Both HIV+ groups had increased functional connectivity (FC) in parts of the AN including the superior temporal gyrus, middle temporal gyrus, supramarginal gyrus, and Rolandic operculum compared to the HIV− group. Compared with the HIV+SPnl group, the HIV+SPabnl group showed greater FC in parts of the AN including the middle frontal and inferior frontal gyri.
Conclusions: The classical auditory areas in the temporal lobe are affected by HIV regardless of speech perception ability. Increased temporal FC in HIV+ individuals might reflect functional compensation to achieve normal primary auditory perception. Furthermore, increased frontal FC in the HIV+SPabnl group compared with the HIV+SPnl group suggest that speech-in-noise perception difficulties in HIV-infected adults also affect areas involved in higher-level cognition, providing imaging evidence consistent with the hypothesis that HIV-related neurocognitive deficits can include central auditory processing deficits.

Pubmed PDF Web

Neural Adaptation of the Electrically Stimulated Auditory Nerve Is Not Affected by Advanced Age in Postlingually Deafened, Middle-aged, and Elderly Adult Cochlear Implant Users

He, Shuman; Skidmore, Jeffrey; Conroy, Sara; Riggs, William J.; Carter, Brittney L.; Xie, Ruili

Publication date 01-07-2022


Objective: This study aimed to investigate the associations between advanced age and the amount and the speed of neural adaptation of the electrically stimulated auditory nerve (AN) in postlingually deafened adult cochlear implant (CI) users.
Design: Study participants included 26 postlingually deafened adult CI users, ranging in age between 28.7 and 84.0 years (mean: 63.8 years, SD: 14.4 years) at the time of testing. All study participants used a Cochlear Nucleus device with a full electrode array insertion in the test ear. The stimulus was a 100-ms pulse train with a pulse rate of 500, 900, 1800, or 2400 pulses per second (pps) per channel. The stimulus was presented at the maximum comfortable level measured at 2400 pps with a presentation rate of 2 Hz. Neural adaptation of the AN was evaluated using electrophysiological measures of the electrically evoked compound action potential (eCAP). The amount of neural adaptation was quantified by the adaptation index (AI) within three time windows: around 0 to 8 ms (window 1), 44 to 50 ms (window 2), and 94 to 100 ms (window 3). The speed of neural adaptation was quantified using a two-parameter power law estimation. In 23 participants, four electrodes across the electrode array were tested. In three participants, three electrodes were tested. Results measured at different electrode locations were averaged for each participant at each pulse rate to get an overall representation of neural adaptation properties of the AN across the cochlea. Linear-mixed models (LMMs) were used (1) to evaluate the effects of age at testing and pulse rate on the speed of neural adaptation and (2) to assess the effects of age at testing, pulse rate, and duration of stimulation (i.e., time window) on the amount of neural adaptation in these participants.
Results: There was substantial variability in both the amount and the speed of neural adaptation of the AN among study participants. The amount and the speed of neural adaptation increased at higher pulse rates. In addition, larger amounts of adaptation were observed for longer durations of stimulation. There was no significant effect of age on the speed or the amount of neural adaptation.
Conclusions: The amount and the speed of neural adaptation of the AN are affected by both the pulse rate and the duration of stimulation, with higher pulse rates and longer durations of stimulation leading to faster and greater neural adaptation. Advanced age does not affect neural adaptation of the AN in postlingually deafened, middle-aged and elderly adult CI users.
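As an illustrative sketch only (not the authors' analysis code), a two-parameter power law like the one used to quantify adaptation speed can be fit by linear regression in log-log space, and an adaptation index can be approximated as the mean normalized amplitude within a time window; the study's exact definitions may differ:

```python
import numpy as np

def power_law_fit(t, y):
    """Fit y = a * t**b by least squares in log-log coordinates."""
    b, log_a = np.polyfit(np.log(t), np.log(y), 1)
    return np.exp(log_a), b

def adaptation_index(t, y, window):
    """Mean eCAP amplitude within a time window (ms), normalized to the
    first amplitude (illustrative definition; the paper's may differ)."""
    mask = (t >= window[0]) & (t <= window[1])
    return float(np.mean(y[mask]) / y[0])

# Synthetic eCAP amplitudes declining as a power law over a 100-ms train
t = np.linspace(1, 100, 200)   # ms after train onset
y = 1.0 * t ** -0.3            # amplitude, arbitrary units
a, b = power_law_fit(t, y)
print(round(a, 3), round(b, 3))  # → 1.0 -0.3
```

A more negative exponent b corresponds to faster adaptation, and a smaller adaptation index in the later windows corresponds to a greater amount of adaptation.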

Pubmed PDF Web

Wideband Tympanometry Findings in School-aged Children: Effects of Age, Gender, Ear Laterality, and Ethnicity

Downing, Cerys; Kei, Joseph; Driscoll, Carlie; Choi, Robyn; Scott, Dion

Publication date 01-07-2022


Objectives: Wideband tympanometry (WBT) measures middle-ear function across a range of frequencies (250 to 8000 Hz) while the ear-canal pressure is varied from +200 to –300 daPa. WBT is a suitable test to evaluate middle-ear function in children, but there is a lack of age-, ear-, gender-, or ethnicity-specific data throughout the literature. The purpose of this study was to investigate the effects of age, ear laterality, gender, and ethnicity on the WBT data retrieved from children aged 4 to 13 years determined to have normal middle-ear function.
Design: Data were collected cross-sectionally from 924 children aged 4 to 13 years who passed a test battery consisting of 226-Hz tympanometry, ipsilateral acoustic stapedial reflexes, and pure-tone screening, and without significant history of middle-ear dysfunction. Participants were grouped according to their age: 4 to 6 years, 7 to 9 years, and 10 to 13 years. Wideband absorbance values were extracted at 0 daPa (WBA0) and at tympanometric peak pressure (WBATPP).
Results: The effects of age, frequency, and pressure (WBA0 versus WBATPP) were statistically significant. There were significant differences between WBA0 and WBATPP for all age groups such that WBA0 had lower absorbance at low frequencies (250 to 1600 Hz) and greater absorbance at mid to high frequencies (2500 to 8000 Hz). Statistically significant effects of age were present for WBA0 and WBATPP such that absorbance generally increased with age from 250 to 1250 Hz and decreased with age from 2000 to 5000 Hz. There were no significant main effects of gender, ear, or ethnicity.
Conclusions: Gender-, ear-, and ethnicity-specific clinical WBA0 and WBATPP norms are not required for diagnostic purposes; however, age-specific norms may be necessary. Age-related changes in middle-ear function were observed across WBA0 and WBATPP. The data presented in this study are a suitable clinical reference for evaluating the outer- and middle-ear function of school-aged children.

Pubmed PDF Web

Clinical Spectrum of Positional Vertigo in an Outpatient Setting

Chen, Chih-Chung; Wang, Chen-Yu; Chen, Po-Yueh; Chen, Mei-Chien; Lee, Ting-Yi; Lee, Hsun-Hua

Publication date 01-07-2022


Objectives: To explore the clinical spectrum of positional vertigo (PV) and to study the causes of PV with atypical positional nystagmus (PN) and PV without PN.
Design: We retrospectively analyzed the registry (2425 cases) in a university hospital. Patients who actively reported PV as their main dizziness pattern were included. Candidates were divided into three groups according to their PN: (1) benign paroxysmal PV (BPPV); (2) PV with atypical PN; and (3) PV without PN. The diagnoses and reported symptoms in each group were analyzed.
Results: PV was the most commonly (n = 518, 28.3%) reported pattern in the registry. The two most common diagnoses of PV were BPPV (n = 146, 29.2%) and vestibular migraine (VM; n = 137, 27.4%). Fifty-seven (11.4%) patients had PV with atypical PN; most of these cases were caused by VM. Moreover, 297 (59.4%) patients had PV without PN. The two main diagnoses in this group were VM and functional dizziness, although the cause remained uncertain in 23.9% of the cases of PV without PN. The odds ratio of VM was 3.95 in patients with PV who reported headaches.
Conclusions: PV is the most common self-reported dizziness pattern and is predominantly caused by BPPV and VM. VM is the most common cause of PV with atypical PN and PV without PN. Clinicians often erroneously assume the presence of PN in those with PV. Managing PV without PN can be challenging because of the uncertainty surrounding this phenomenon. Structured patient-oriented questionnaires assist clinicians in making timely diagnoses and adjusting treatment goals accordingly.

Pubmed PDF Web

The Impact of Synchronized Cochlear Implant Sampling and Stimulation on Free-Field Spatial Hearing Outcomes: Comparing the ciPDA Research Processor to Clinical Processors

Dennison, Stephen R.; Jones, Heath G.; Kan, Alan; Litovsky, Ruth Y.

Publication date 01-07-2022


Objectives: Bilateral cochlear implant (BiCI) listeners use independent processors in each ear. This independence and lack of shared hardware prevents control of the timing of sampling and stimulation across ears, which precludes the development of bilaterally-coordinated signal processing strategies. As a result, these devices potentially reduce access to binaural cues and introduce disruptive artifacts. For example, measurements from two clinical processors demonstrate that independently-running processors introduce interaural incoherence. These issues are typically avoided in the laboratory by using research processors with bilaterally-synchronized hardware. However, these research processors do not typically run in real time and are difficult to take into the real world due to their benchtop nature. Hence, it has been difficult to determine whether hardware synchronization alone, by reducing bilateral stimulation artifacts, can improve functional spatial hearing performance. The CI personal digital assistant (ciPDA) research processor, which uses one clock to drive two processors, presented an opportunity to examine whether synchronization of hardware can have an impact on spatial hearing performance.
Design: Free-field sound localization and spatial release from masking (SRM) were assessed in 10 BiCI listeners using both their clinical processors and the synchronized ciPDA processor. For sound localization, localization accuracy was compared within-subject for the two processor types. For SRM, speech reception thresholds were compared for spatially separated and co-located configurations, and the amount of unmasking was compared for synchronized and unsynchronized hardware. There were no deliberate changes of the sound processing strategy on the ciPDA to restore or improve binaural cues.
Results: There was no significant difference in localization accuracy between unsynchronized and synchronized hardware (p = 0.62). Speech reception thresholds were higher with the ciPDA. In addition, although five of eight participants demonstrated improved SRM with synchronized hardware, there was no significant difference in the amount of unmasking due to spatial separation between synchronized and unsynchronized hardware (p = 0.21).
Conclusions: Using processors with synchronized hardware did not yield an improvement in sound localization or SRM for all individuals, suggesting that mere synchronization of hardware is not sufficient for improving spatial hearing outcomes. Further work is needed to improve sound coding strategies to facilitate access to spatial hearing cues. This study provides a benchmark for spatial hearing performance with real-time, bilaterally-synchronized research processors.

Pubmed PDF Web

Video Head Impulse Test in Darkness, Without Visual Fixation: A Study on Healthy Subjects

Pérez-Vázquez, Paz; Franco-Gutiérrez, Virginia

Publication date 01-07-2022


Objective: The head impulse test (HIT) response is driven by the vestibulo-ocular reflex (VOR), complemented by the optokinetic and pursuit systems. This study aimed to evaluate the possibility of isolating the VOR contribution to the HIT.
Design: Thirty-six healthy individuals (19 males, 17 females; age 21–64 years, mean 39 years) underwent horizontal video HIT (vHIT), conducted first in darkness, without visual fixation, and then with visual fixation.
Results: Seventy percent of the impulses delivered ocular responses opposite to the direction of the head, matching its velocity to a point where quick anticompensatory eye movements (SQEM) stopped the response (SQEM mean latency 58.21 ms, interquartile range 50–67 ms). Of these, 75% recaptured the head velocity after culmination. Thirty percent of the responses completed a bell-shaped curve. The completed bell-shaped curve gains and instantaneous gains (at 40, 60, and 80 ms) before SQEM were equivalent for both paradigms. Females completed more bell-shaped traces (42%) than males (15%); p = 0.01. The SQEM latency was longer (62.81 versus 55.71 ms, p < 0.01), and the time to recapture the bell-shaped curve was shorter (77.51 versus 92.52 ms, p < 0.01) in females than in males. The gains were comparable between sexes in both paradigms.
Conclusions: The VOR effect can be localized in the first 70 ms of the vHIT response. In addition, other influences may take place in estimating the vHIT responses. The study of these influences might provide useful information that can be applied to patient management.

Pubmed PDF Web

Cholesteatoma Is Associated With Pediatric Progressive Sensorineural Hearing Loss

Racca, Jordan M.; Lee, John; Sikorski, Faith; Crenshaw, E. Bryan III; Hood, Linda J.

Publication date 01-07-2022


Objectives: This study identified an association between cholesteatoma and progressive sensorineural hearing loss using a large pediatric longitudinal audiologic dataset. Cholesteatoma is a potential sequela of chronic otitis media with effusion, a commonly observed auditory pathology that can contribute to hearing loss in children. The purpose of this report is to (i) describe the process of identifying the association between cholesteatoma and progressive sensorineural hearing loss in a large pediatric dataset and (ii) describe the audiologic data acquired over time in patients identified with cholesteatoma-associated progressive sensorineural hearing loss.
Design: Records of patients included in the Audiologic and Genetics Database (n = 175,215 patients) were examined using specified criteria defining progressive hearing loss. A linear regression model examined the log frequency of all diagnostic codes in the electronic health record assigned to patients for a progressive hearing loss cohort compared with a stable hearing loss group. Based on findings from the linear regression analysis, longitudinal audiometric air (AC) and bone conduction (BC) thresholds were extracted for groups of subjects with cholesteatoma-associated progressive (n = 58 subjects) and stable (n = 55 subjects) hearing loss to further analyze changes in hearing over time.
Results: The linear regression analyses identified that diagnostic codes for cholesteatoma were associated with progressive sensorineural hearing loss in children. The longitudinal audiometric data demonstrated within-subject changes in masked BC sensitivity consistent with progressive sensorineural hearing loss in children diagnosed with cholesteatoma. Additional analyses showed that mastoidectomy surgeries did not appear to contribute to the observed progressive hearing loss and that a high number of cholesteatoma patients with progressive hearing loss had normal-hearing thresholds at their first test.
Conclusions: The statistical analyses demonstrated an association between cholesteatoma and pediatric progressive sensorineural hearing loss. These findings inform clinical management by suggesting that children with cholesteatoma diagnoses may be at increased risk for progressive sensorineural hearing loss and should receive continued monitoring even after a normal masked BC baseline has been established.

Pubmed PDF Web

Pure Tone Audiometry Evaluation Method Effectiveness in Detecting Hearing Changes Due to Workplace Ototoxicant, Continuous Noise, and Impulse Noise Exposures

Blair, Marc; Slagley, Jeremy; Schaal, Nicholas Cody

Publication date 01-07-2022


Objectives: The purpose of this retrospective cohort study was to compare the relative risks (RR) of hearing impairment due to co-exposure of continuous noise, impulse noise, metal ototoxicants, and organic solvent ototoxicants using several pure tone audiometry (PTA) evaluation methods.
Design: Noise and ototoxicant exposure and PTA records were extracted from a DoD longitudinal repository and analyzed for U.S. Air Force personnel (n = 2372) at a depot-level aircraft maintenance activity at Tinker Air Force Base, Oklahoma, using a historical cohort study design. Eight similar exposure groups (SEGs) based on combinations of ototoxicant and noise exposure were created: (1) Continuous noise (reference group); (2) Continuous noise + Impulse noise; (3) Metal exposure + Continuous noise; (4) Metal exposure + Continuous noise + Impulse noise; (5) Solvent exposure + Continuous noise; (6) Solvent exposure + Continuous noise + Impulse noise; (7) Metal exposure + Solvent exposure + Continuous noise; and (8) Metal exposure + Solvent exposure + Continuous noise + Impulse noise. RR of hearing impairment compared to the continuous noise-exposed reference group was assessed with five PTA evaluation methods: (1) U.S. Department of Defense (DoD) Significant Threshold Shift (STS), (2) Occupational Safety and Health Administration (OSHA) age-adjusted STS, (3) National Institute for Occupational Safety and Health (NIOSH) STS, (4) NIOSH Material Hearing Impairment, and (5) All Frequency Threshold Average.
Results: Hearing impairment was significantly worse for SEG 2 (continuous noise + impulse noise) only with PTA evaluation method 2, OSHA age-adjusted STS (RR = 3.11; 95% confidence interval [CI], 1.16–8.31), and approached significance with method 4, NIOSH Material Hearing Impairment (RR = 3.16; 95% CI, 0.99–10.15). Although no SEG with ototoxicant exposure showed a significant difference, PTA evaluation method 3 (NIOSH STS) was the most sensitive in detecting hearing changes for SEG 8 (metal exposure + solvent exposure + continuous noise + impulse noise), with an RR of 1.12 (95% CI, 0.99–1.27).
Conclusions: Results suggest that a single PTA evaluation technique may not be adequate in fully revealing hearing impairment risk due to all stressors and tailoring the PTA evaluation technique to the hazards present in the workplace could better detect hearing impairment. Additionally, results suggest that PTA may not be effective as the sole technique for evaluating hearing impairment due to ototoxicant exposure with continuous noise co-exposure.
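For readers unfamiliar with the statistic, the RR values and 95% CIs reported above can be computed from group counts with the standard Katz log method. The counts below are hypothetical and are not taken from the study:

```python
import math

def relative_risk(a, n1, c, n0, z=1.96):
    """Relative risk of an exposed group versus a reference group with a
    95% CI (Katz log method). a impaired of n1 exposed; c impaired of n0
    reference."""
    rr = (a / n1) / (c / n0)
    se = math.sqrt(1 / a - 1 / n1 + 1 / c - 1 / n0)  # SE of log(RR)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

# Hypothetical counts for illustration only
rr, lo, hi = relative_risk(12, 100, 8, 200)
print(f"RR = {rr:.2f} (95% CI, {lo:.2f}-{hi:.2f})")  # RR = 3.00 (95% CI, 1.27-7.10)
```

A CI that excludes 1.0 (as for the OSHA age-adjusted result above) indicates a statistically significant excess risk relative to the reference group.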

Pubmed PDF Web

The Effect of Advanced Age on the Electrode-Neuron Interface in Cochlear Implant Users

Skidmore, Jeffrey; Carter, Brittney L.; Riggs, William J.; He, Shuman

Publication date 01-07-2022


Objectives: This study aimed to determine the effect of advanced age on how effectively a cochlear implant (CI) electrode stimulates the targeted cochlear nerve fibers (i.e., the electrode-neuron interface, ENI) in postlingually deafened adult CI users. The study tested the hypothesis that the quality of the ENI declines with advanced age. It also tested the hypothesis that the effect of advanced age on the quality of the ENI would be greater in basal regions of the cochlea compared to apical regions.
Design: Study participants included 40 postlingually deafened adult CI users. The participants were separated into two age groups based on age at testing in accordance with age classification terms used by the World Health Organization and the Medical Literature Analysis and Retrieval System Online bibliographic database. The middle-aged group included 16 participants between the ages of 45 and 64 years and the elderly group included 24 participants older than 65 years. Results were included from one ear for each participant. All participants used Cochlear Nucleus CIs in their test ears. For each participant, electrophysiological measures of the electrically evoked compound action potential (eCAP) were used to measure refractory recovery functions and amplitude growth functions (AGFs) at three to seven electrode sites across the electrode array. The eCAP parameters used in this study included the refractory recovery time estimated based on the eCAP refractory recovery function, the eCAP threshold, the slope of the eCAP AGF, and the negative-peak (i.e., N1) latency. The electrode-specific ENI was evaluated using an optimized combination of the eCAP parameters that represented the responsiveness of cochlear nerve fibers to electrical stimulation delivered by individual electrodes along the electrode array. The quality of the electrode-specific ENI was quantified by the local ENI index, a value between 0 and 100 where 0 and 100 represented the lowest- and the highest-quality ENI across all participants and electrodes in the study dataset, respectively.
Results: There were no significant age group differences in refractory times, eCAP thresholds, N1 latencies or local ENI indices. Slopes of the eCAP AGF were significantly larger in the middle-aged group compared to the elderly group. There was a significant effect of electrode location on each eCAP parameter, except for N1 latency. In addition, the local ENI index was significantly larger (i.e., better ENI) in the apical region than in the basal and middle regions of the cochlea for both age groups.
Conclusions: The model developed in this study can be used to estimate the quality of the ENI at individual electrode locations in CI users. The quality of the ENI is affected by the location of the electrode along the length of the cochlea. The method for estimating the quality of the ENI developed in this study holds promise for identifying electrodes with poor ENIs that could be deactivated from the clinical programming map. The ENI is not strongly affected by advanced age in middle-aged and elderly CI users.
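The 0-to-100 scaling of the local ENI index described above amounts to min-max normalization across the study dataset. A minimal sketch (the optimized combination of eCAP parameters that produces the underlying scores is not reproduced here):

```python
def local_eni_index(scores):
    """Min-max scale combined eCAP-based scores to 0-100, so the lowest-
    and highest-quality ENIs in the dataset map to 0 and 100."""
    lo, hi = min(scores), max(scores)
    return [100 * (s - lo) / (hi - lo) for s in scores]

print(local_eni_index([2, 5, 8]))  # → [0.0, 50.0, 100.0]
```

Because the scaling is relative to the dataset, a given index value is meaningful only within the cohort in which it was computed.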

Pubmed PDF Web

Bilinguals Show Proportionally Greater Benefit From Visual Speech Cues and Sentence Context in Their Second Compared to Their First Language

Chauvin, Alexandre; Phillips, Natalie A.

Publication date 01-07-2022


Objectives: Speech perception in noise is challenging, but evidence suggests that it may be facilitated by visual speech cues (e.g., lip movements) and supportive sentence context in native speakers. Comparatively few studies have investigated speech perception in noise in bilinguals, and little is known about the impact of visual speech cues and supportive sentence context in a first language compared to a second language within the same individual. The current study addresses this gap by directly investigating the extent to which bilinguals benefit from visual speech cues and supportive sentence context under similarly noisy conditions in their first and second language.
Design: Thirty young adult English–French/French–English bilinguals were recruited from the undergraduate psychology program at Concordia University and from the Montreal community. They completed a speech perception in noise task during which they were presented with video-recorded sentences and instructed to repeat the last word of each sentence out loud. Sentences were presented in three different modalities: visual-only, auditory-only, and audiovisual. Additionally, sentences had one of two levels of context: moderate (e.g., “In the woods, the hiker saw a bear.”) and low (e.g., “I had not thought about that bear.”). Each participant completed this task in both their first and second language; crucially, the level of background noise was calibrated individually for each participant and was the same throughout the first language and second language (L2) portions of the experimental task.
Results: Overall, speech perception in noise was more accurate in bilinguals’ first language compared to the second. However, participants benefited from visual speech cues and supportive sentence context to a proportionally greater extent in their second language compared to their first. At the individual level, performance during the speech perception in noise task was related to aspects of bilinguals’ experience in their second language (i.e., age of acquisition, relative balance between the first and the second language).
Conclusions: Bilinguals benefit from visual speech cues and sentence context in their second language during speech in noise and do so to a greater extent than in their first language given the same level of background noise. Together, this indicates that L2 speech perception can be conceptualized within an inverse effectiveness hypothesis framework with a complex interplay of sensory factors (i.e., the quality of the auditory speech signal and visual speech cues) and linguistic factors (i.e., presence or absence of supportive context and L2 experience of the listener).

Pubmed PDF Web

Sensitivity of Vowel-Evoked Envelope Following Responses to Spectra and Level of Preceding Phoneme Context

Easwar, Vijayalakshmi; Boothalingam, Sriram; Wilson, Emily

Publication date 01-07-2022


Objective: Vowel-evoked envelope following responses (EFRs) could be a useful noninvasive tool for evaluating neural activity phase-locked to the fundamental frequency of voice (f0). Vowel-evoked EFRs are often elicited by vowels in consonant-vowel syllables or words. Because neural activity is susceptible to temporal masking, EFR characteristics elicited by the same vowel may vary with the features of the preceding phoneme. The objective of the present study was therefore to evaluate the influence of the spectral and level characteristics of the preceding phoneme context on vowel-evoked EFRs.
Design: EFRs were elicited by a male-spoken /i/ (stimulus; duration = 350 msec), modified to elicit two EFRs, one from the region of the first formant (F1) and one from the second and higher formants (F2+). The stimulus, presented at 65 dB SPL, was preceded by one of four contexts: /∫/, /m/, /i/, or a silent gap of duration equal to that of the stimulus. The level of the context phonemes was either 50 or 80 dB SPL, that is, 15 dB below or above the level of the stimulus /i/. In a control condition, EFRs to the stimulus /i/ were elicited in isolation without any preceding phoneme contexts. The stimulus and the contexts were presented monaurally to a randomly chosen test ear in 21 young adults with normal hearing. EFRs were recorded using single-channel electroencephalogram between the vertex and the nape.
Results: A repeated measures analysis of variance indicated a significant three-way interaction between context type (/∫/, /i/, /m/, silent gap), level (50, 80 dB SPL), and EFR-eliciting formant (F1, F2+). Post hoc analyses indicated no influence of the preceding phoneme context on F1-elicited EFRs. Relative to a silent gap as the preceding context, F2+-elicited EFRs were attenuated by /∫/ and /m/ presented at 50 and 80 dB SPL, as well as by /i/ presented at 80 dB SPL. The average attenuation ranged from 14.9 to 27.9 nV. When the context phonemes were presented at matched levels of 50 or 80 dB SPL, F2+-elicited EFRs were most often attenuated when preceded by /∫/. At 80 dB SPL, relative to the silent preceding gap, the average attenuation was 15.7 nV, and at 50 dB SPL, relative to the preceding context phoneme /i/, the average attenuation was 17.2 nV.
Conclusion: EFRs elicited by the second and higher formants of /i/ are sensitive to the spectral and level characteristics of the preceding phoneme context. Such sensitivity, measured as an attenuation in the present study, may influence the comparison of EFRs elicited by the same vowel in different consonant-vowel syllables or words. However, the degree of attenuation with realistic context levels exceeded the minimum measurable change only 12% of the time. Although the impact of the preceding context is statistically significant, it is likely to be clinically insignificant a majority of the time.

Pubmed PDF Web

Pitch Accuracy of Vocal Singing in Deaf Children With Bimodal Hearing and Bilateral Cochlear Implants

Xu, Li; Yang, Jing; Hahn, Emily; Uchanski, Rosalie; Davidson, Lisa

Publication date 01-07-2022


Objectives: The purpose of the present study was to investigate the pitch accuracy of vocal singing in children with severe to profound hearing loss who use bilateral cochlear implants (CIs) or bimodal devices (a CI at one ear and a hearing aid, HA, at the other) in comparison to similarly aged children with normal hearing (NH).
Design: The participants included four groups: (1) 26 children with NH, (2) 13 children with bimodal devices, (3) 31 children with bilateral CIs that were implanted sequentially, and (4) 10 children with bilateral CIs that were implanted simultaneously. All participants were aged between 7 and 11 years. Each participant was recorded singing a self-chosen song that was familiar to him or her. The fundamental frequencies (F0) of individual sung notes were extracted and normalized to facilitate cross-subject comparisons. Pitch accuracy was quantified using four pitch-based metrics calculated with reference to the target music notes: mean note deviation, contour direction, mean interval deviation, and F0 variance ratio. A one-way ANOVA was used to compare listener-group differences on each pitch metric. A principal component analysis showed that the mean note deviation best accounted for pitch accuracy in vocal singing. A regression analysis examined potential predictors of CI children’s singing proficiency using mean note deviation as the dependent variable and demographic and audiological factors as independent variables.
Results: The results revealed significantly poorer performance on all four pitch-based metrics in the three groups of children with CIs in comparison to children with NH. No significant differences were found among the three CI groups. Among the children with CIs, variability in the vocal singing proficiency was large. Within the group of 13 bimodal users, the mean note deviation was significantly correlated with their unaided pure-tone average thresholds (r = 0.582, p = 0.037). The regression analysis for all children with CIs, however, revealed no significant demographic or audiological predictor for their vocal singing performance.
Conclusion: At the group level, vocal singing performance did not differ significantly between children with bilateral CIs and children with bimodal devices. Compared to children with NH, the pediatric bimodal and bilateral CI users, in general, demonstrated significant deficits in vocal singing ability. Demographic and audiological factors, known from previous studies to be associated with good speech and language development in prelingually-deafened children with CIs, were not associated with singing accuracy for these children.
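As a sketch of one of the four metrics above, mean note deviation can be computed as the mean absolute deviation of each sung note's F0 from its target note, expressed in semitones; the study's exact normalization may differ:

```python
import math

def semitone_deviation(f0, target):
    """Signed deviation of a sung F0 from the target pitch, in semitones."""
    return 12 * math.log2(f0 / target)

def mean_note_deviation(sung_f0s, target_f0s):
    """Mean absolute deviation across notes; 0 = perfectly in tune."""
    devs = [abs(semitone_deviation(f, t)) for f, t in zip(sung_f0s, target_f0s)]
    return sum(devs) / len(devs)

# A near-accurate rendition of C4-D4-E4 (target F0s in Hz)
print(mean_note_deviation([262.0, 294.0, 330.0], [261.6, 293.7, 329.6]))
```

The logarithmic semitone scale makes deviations comparable across singers with different vocal ranges, which is one way to facilitate the cross-subject comparisons the study describes.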

Pubmed PDF Web

Prelinguistic Consonant Production and the Influence of Mouthing Before and After Cochlear Implantation

Fagan, Mary K.; Vu, Minh-Chau

Publication date 01-07-2022


Objectives: The goal of the study was to investigate prelinguistic consonant production, and the influence of vocalizations co-occurring with object mouthing on consonant production, in infants with profound sensorineural hearing loss before and after cochlear implantation, in order to advance knowledge of early speech development in this population.
Design: Participants were 43 infants: 16 infants with profound sensorineural hearing loss and 27 hearing infants. In the mixed longitudinal and cross-sectional design, infants with profound hearing loss and age-matched hearing infants participated before cochlear implantation, at an average age of 9.9 mo, and/or after cochlear implantation, at an average age of 17.8 mo. Mean age at cochlear implantation for infants with profound hearing loss was 12.4 mo; mean duration of cochlear implant use at time of testing was 4.2 mo.
Results: Before and after cochlear implantation, infants with profound hearing loss produced significantly fewer supraglottal consonants per consonant-vowel vocalization than hearing peers and had smaller overall consonant inventories. Before, but not after cochlear implantation, infants with profound hearing loss produced proportionally more vocalizations, supraglottal consonant-vowel vocalizations, and different supraglottal consonants in vocalizations during mouthing than did hearing infants.
Conclusions: The results document consonant production before cochlear implantation in a larger group of infants with profound hearing loss than previously examined. The results also extend evidence of early delays in consonant production to infants who received cochlear implants at 12 mo of age, and show that they likely miss the potential benefits of auditory-motor feedback in vocalization-mouthing combinations that occur before they have access to sound through cochlear implants.

Pubmed PDF Web

Phonological Priming as a Lens for Phonological Organization in Children With Cochlear Implants

Lund, Emily

Publication date 01-07-2022


Objectives: To evaluate the subconscious knowledge of between-word phonological similarities in children with cochlear implants as compared with children with typical hearing.
Design: Participants included 30 children with cochlear implants between the ages of five and seven who used primarily spoken English to communicate, 30 children matched for chronological age, and 30 children matched for vocabulary size. Participants completed an animacy judgment task in one of three conditions: (a) a neutral condition; (b) a phonological prime condition, in which the consonant and vowel onset of the pictured word was presented prior to the visual target’s appearance; or (c) an inhibition prime condition, in which a consonant and vowel onset not matching the pictured word was presented prior to the target’s appearance. Reaction times were recorded.
Results: Children with cochlear implants reacted to the primes differently, and more slowly, than children with typical hearing in both comparison groups: children with typical hearing experienced a phonological facilitation effect in the phonological prime condition, whereas children with cochlear implants did not. Children with cochlear implants also had reaction times that were, overall, slower than those of children matched for chronological age but similar to those of children matched for vocabulary size.
Conclusions: The different experience of children with cochlear implants with phonological facilitation and inhibition effects may indicate children with cochlear implants have phonological organization strategies that are different from those of children with typical hearing.

Pubmed PDF Web

The Impact of Occupational Noise Exposure on Hyperacusis: a Longitudinal Population Study of Female Workers in Sweden

Fredriksson, Sofie; Hussain-Alkhateeb, Laith; Torén, Kjell; Sjöström, Mattias; Selander, Jenny; Gustavsson, Per; Kähäri, Kim; Magnusson, Lennart; Persson Waye, Kerstin

Publication date 01-07-2022


Objectives: The aim was to assess the risk of hyperacusis in relation to occupational noise exposure among female workers in general, and among women working in preschool specifically.
Design: A retrospective longitudinal study was performed.
Survey data were collected in 2013 and 2014 from two cohorts: randomly selected women from the population in region Västra Götaland, Sweden, and women selected based on having received a preschool teacher degree from universities in the same region. The final study sample included n = 8328 women born between 1948 and 1989. Occupational noise exposure was objectively assigned to all time periods from the first to the last reported occupation throughout working life, using the Swedish Job-Exposure Matrix (JEM) with three exposure intervals: <75, 75 to 85, and >85 dB(A). The JEM assigns preschool teachers to the 75 to 85 dB(A) exposure interval. The outcome hyperacusis was assessed by self-report using one question addressing discomfort or pain from everyday sounds. In the main analysis, a hyperacusis event was defined by the reported year of onset, if reported to occur at least a few times each week.
Additional sensitivity analyses were performed using more strict definitions: (a) at least several times each week and (b) every day. The risk (hazard ratio, HR) of hyperacusis was analyzed in relation to years of occupational noise exposure, using survival analysis with frailty regression modeling accounting for individual variation in survival times which reflect, for example, noise exposure during years prior to onset. Occupational noise exposure was defined by the occupation held at year of hyperacusis onset, or the occupation held at the survey year if no event occurred. Models were adjusted for confounders including age, education, income, family history of hearing loss, and change of jobs due to noise.
Results: In total, n = 1966 hyperacusis events between 1960 and 2014 were analyzed in the main analysis. A significantly increased risk of hyperacusis was found among women working in any occupation assigned to the 75 to 85 dB(A) noise exposure group (HR: 2.6, 95% confidence interval (CI): 2.4–2.9), compared with the reference group (<75 dB(A)). The risk was not significantly increased in the >85 dB(A) group, where only six hyperacusis events were identified (HR: 1.4, 95% CI: 0.6–3.1). In the sensitivity analysis, where hyperacusis was defined as occurring every day, the HR was significant also in the highest exposure group (HR: 3.8, 95% CI: 1.4–10.3), and HRs were generally slightly higher in the other exposure groups than in the main analysis.
Conclusions: This study indicates an increased risk of hyperacusis already below the permissible occupational noise exposure limit in Sweden (85 dB LAeq,8h) among female workers in general, and among preschool teachers in particular. Prospective studies and narrower exposure intervals could confirm causal effects and assess dose–response relationships, respectively. At present, this study suggests a need for risk assessment, improved hearing prevention measures, and noise abatement measures in occupations with noise levels from 75 dB(A). The results could also have implications for the management of occupational disability claims.

Pubmed PDF Web

More Than Words: the Relative Roles of Prosody and Semantics in the Perception of Emotions in Spoken Language by Postlingual Cochlear Implant Users

Taitelbaum-Swead, Riki; Icht, Michal; Ben-David, Boaz M.

Publication date 01-07-2022


Objectives: The processing of emotional speech calls for the perception and integration of semantic and prosodic cues. Although cochlear implants allow for significant auditory improvements, they are limited in the transmission of spectro-temporal fine-structure information that may not support the processing of voice pitch cues. The goal of the current study is to compare the performance of postlingual cochlear implant (CI) users and a matched control group on perception, selective attention, and integration of emotional semantics and prosody.
Design: Fifteen CI users and 15 normal hearing (NH) peers (age range, 18–65 years) listened to spoken sentences composed of different combinations of four discrete emotions (anger, happiness, sadness, and neutrality) presented in prosodic and semantic channels (T-RES: Test for Rating Emotions in Speech). In three separate tasks, listeners were asked to attend to the sentence as a whole, thus integrating both speech channels (integration), or to focus on one channel only (rating of target emotion) and ignore the other (selective attention). Their task was to rate how much they agreed that the sentence conveyed each of the predefined emotions. In addition, all participants performed standard tests of speech perception.
Results: When asked to focus on one channel, semantics or prosody, both groups rated emotions similarly, with comparable levels of selective attention. When the task called for channel integration, group differences were found. CI users appeared to use semantic emotional information more than did their NH peers. CI users assigned higher ratings than did their NH peers to sentences that did not present the target emotion, indicating some degree of confusion. In addition, for CI users, individual differences in speech comprehension over the phone and identification of intonation were significantly related to emotional semantic and prosodic ratings, respectively.
Conclusions: CI users and NH controls did not differ in perception of prosodic and semantic emotions and in auditory selective attention. However, when the task called for integration of prosody and semantics, CI users overused the semantic information (as compared with NH). We suggest that as CI users adopt diverse cue weighting strategies with device experience, their weighting of prosody and semantics differs from those used by NH. Finally, CI users may benefit from rehabilitation strategies that strengthen perception of prosodic information to better understand emotional speech.

Pubmed PDF Web

Better hearing in Norway. A comparison of two HUNT cohorts 20 years apart: Erratum

Publication date 01-07-2022


No abstract available

Pubmed PDF Web
