Journal of the Association for Research in Otolaryngology 2021-07-16

Correction to: An Alternative Explanation for Difficulties with Speech in Background Talkers: Abnormal Fusion of Vowels Across Fundamental Frequency and Ears

Publication date 16-07-2021


A correction to this paper has been published: https://doi.org/10.1007/s10162-021-00802-6

Pubmed PDF Web

Aging Effects on Cortical Responses to Tones and Speech in Adult Cochlear-Implant Users

Z Xie, O Stakhovskaya, MJ Goupell, S Anderson

Publication date 06-07-2021


Age-related declines in auditory temporal processing contribute to the speech-understanding difficulties of older adults. These temporal processing deficits have been established primarily among acoustic-hearing listeners, but the peripheral and central contributions are difficult to separate. This study recorded cortical auditory evoked potentials from younger to middle-aged (< 65 years) and older (≥ 65 years) cochlear-implant (CI) listeners, in whom cochlear processing is bypassed, to assess age-related changes in temporal processing. Aging effects were compared to those in age-matched normal-hearing (NH) listeners. Advancing age was associated with prolonged P2 latencies in both CI and NH listeners in response to a 1000-Hz tone or the syllable /da/, and with prolonged N1 latencies in CI listeners in response to the syllable. Advancing age was also associated with larger N1 amplitudes in NH listeners. These age-related changes in latency and amplitude were independent of stimulus presentation rate. Further, CI listeners exhibited prolonged N1 and P2 latencies and smaller P2 amplitudes than NH listeners. Thus, aging appears to degrade some aspects of auditory temporal processing when peripheral-cochlear contributions are largely removed, suggesting that changes beyond the cochlea may contribute to age-related temporal processing deficits.

Pubmed PDF Web

How Zebrafish Can Drive the Future of Genetic-based Hearing and Balance Research

L Sheets, M Holmgren, KS Kindt

Publication date 01-06-2021


Over the last several decades, studies in humans and animal models have successfully identified numerous molecules required for hearing and balance. Many of these studies relied on unbiased forward genetic screens based on behavior or morphology to identify these molecules. Alongside forward genetic screens, reverse genetics has further driven the exploration of candidate molecules. This review provides an overview of the genetic studies that have established zebrafish as a genetic model for hearing and balance research. Further, we discuss how the unique advantages of zebrafish can be leveraged in future genetic studies. We explore strategies to design novel forward genetic screens based on morphological alterations using transgenic lines or behavioral changes following mechanical or acoustic damage. We also outline how recent advances in CRISPR-Cas9 can be applied to perform reverse genetic screens to validate large sequencing datasets. Overall, this review describes how future genetic studies in zebrafish can continue to advance our understanding of inherited and acquired hearing and balance disorders.

Pubmed PDF Web

Forward and Reverse Middle Ear Transmission in Gerbil with a Normal or Spontaneously Healed Tympanic Membrane

X Lin, SWF Meenderink, G Stomackin, TT Jung, GK Martin, W Dong

Publication date 01-06-2021


Tympanic membranes (TM) that have healed spontaneously after perforation present abnormalities in their structural and mechanical properties; i.e., they are thickened and abnormally dense. These changes result in a deterioration of middle ear (ME) sound transmission, which presents clinically as a conductive hearing loss (CHL). To fully understand ME sound transmission under pathological TM conditions, we created a gerbil model with a controlled 50% pars tensa perforation, which was left to heal spontaneously for up to 4 weeks (TM perforations had fully sealed after 2 weeks). After the recovery period, ME sound transmission, both in the forward and reverse directions, was directly measured with two-tone stimulation. Measurements were performed at the input, along the ossicular chain, and at the output of the ME system, i.e., at the TM, umbo, and scala vestibuli (SV) next to the stapes. We found that variations in ME transmission in the forward and reverse directions were not symmetric. In the forward direction, the ME pressure gain decreased in a frequency-dependent manner, with smaller losses (within 10 dB) at low frequencies and larger losses in high-frequency regions. This loss pattern arose mainly from less efficient acoustical-to-mechanical coupling between the TM and umbo, with little change along the ossicular chain. In the reverse direction, the variations in these ears were relatively small. Our results provide detailed functional observations that explain the CHL seen in patients with an abnormal TM, e.g., a TM that has healed spontaneously after perforation (such as from otitis media) or after tympanoplasty, especially at high frequencies. In addition, our data demonstrate that changes in distortion product otoacoustic emissions (DPOAEs) result from altered ME transmission in both the forward and reverse directions, through a reduction of the effective stimulus levels and less efficient transfer of DPs from the ME into the ear canal. This confirms that DPOAEs can be used to assess the health of both the cochlea and the middle ear.

Pubmed PDF Web

Speech Perception with Noise Vocoding and Background Noise: An EEG and Behavioral Study

Y Dong, Y Gai

Publication date 01-06-2021


This study explored the physiological response of the human brain to degraded speech syllables. The degradation was introduced using noise vocoding and/or background noise. The goal was to identify physiological features of auditory-evoked potentials (AEPs) that may explain speech intelligibility. Ten human subjects with normal hearing participated in syllable-detection tasks while their AEPs were recorded with 32-channel electroencephalography. Subjects were presented with six syllables in the form of consonant-vowel-consonant or vowel-consonant-vowel. Noise vocoding with 22 or 4 frequency channels was applied to the syllables. When the peak heights in the AEPs (P1, N1, and P2) were examined, vocoding alone showed no consistent effect. P1 was not consistently reduced by background noise, N1 was sometimes reduced by noise, and P2 was almost always strongly reduced.
Two other physiological metrics were examined: (1) classification accuracy of the syllables based on AEPs, which indicated whether AEPs were distinguishable for different syllables, and (2) cross-condition correlation of AEPs (rcc) between the clean and degraded speech, which indicated the brain’s ability to extract speech-related features and suppress response to noise. Both metrics decreased with degraded speech quality. We further tested if the two metrics can explain cross-subject variations in their behavioral performance. A significant correlation existed for rcc, as well as classification based on early AEPs, in the fronto-central areas. Because rcc indicates similarities between clean and degraded speech, our finding suggests that high speech intelligibility may be a result of the brain’s ability to ignore noise in the sound carrier and/or background.
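
As a rough illustration of the cross-condition correlation metric described above, the sketch below computes a Pearson correlation between an averaged AEP to clean speech and the corresponding AEP to a degraded version of the same syllable. It is a toy example with simulated waveforms, not the authors' analysis code; the function name and parameters are ours.

```python
# Illustrative sketch (not the authors' code): cross-condition correlation (rcc)
# between the averaged AEP to a clean syllable and to its degraded counterpart.
import numpy as np

def cross_condition_correlation(aep_clean, aep_degraded):
    """Pearson correlation between two averaged AEP waveforms (1-D arrays)."""
    clean = aep_clean - aep_clean.mean()
    degraded = aep_degraded - aep_degraded.mean()
    return float(np.dot(clean, degraded) /
                 (np.linalg.norm(clean) * np.linalg.norm(degraded)))

# Toy example: a degraded response that is an attenuated, noisier copy of the
# clean response yields an rcc below 1.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 0.5, 500)                      # 500-ms epoch
clean_aep = np.sin(2 * np.pi * 4 * t) * np.exp(-5 * t)
degraded_aep = 0.6 * clean_aep + 0.3 * rng.standard_normal(t.size)
print(f"rcc = {cross_condition_correlation(clean_aep, degraded_aep):.2f}")
```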

Pubmed PDF Web

Development of Auditory Cortex Circuits

M Chang, PO Kanold

Publication date 01-06-2021


The ability to process and perceive sensory stimuli is an essential function for animals. Among the sensory modalities, audition is crucial for communication, pleasure, care for the young, and perceiving threats. The auditory cortex (ACtx) is a key sound processing region that combines ascending signals from the auditory periphery and inputs from other sensory and non-sensory regions. The development of ACtx is a protracted process starting prenatally and requires the complex interplay of molecular programs, spontaneous activity, and sensory experience. Here, we review the development of thalamic and cortical auditory circuits during pre- and early post-natal periods.

Pubmed PDF Web

Auditory Brainstem Models: Adapting Cochlear Nuclei Improve Spatial Encoding by the Medial Superior Olive in Reverberation

A Brughera, J Mikiel-Hunter, M Dietz, D McAlpine

Publication date 01-06-2021


Listeners typically perceive a sound as originating from the direction of its source, even as direct sound is followed milliseconds later by reflected sound from multiple different directions. Early-arriving sound is emphasised in the ascending auditory pathway, including the medial superior olive (MSO), where binaural neurons encode the interaural-time-difference (ITD) cue for spatial location. Perceptually, the weighting of ITD conveyed during rising sound energy is stronger at 600 Hz than at 200 Hz, consistent with the minimum stimulus rate for binaural adaptation, and with the longer reverberation times at 600 Hz, compared with 200 Hz, in many natural outdoor environments. Here, we computationally explore the combined efficacy of adaptation prior to the binaural encoding of ITD cues, and of excitatory binaural coincidence detection within MSO neurons, in emphasising ITDs conveyed in early-arriving sound. With excitatory inputs from adapting, nonlinear model spherical bushy cells (SBCs) of the bilateral cochlear nuclei, a nonlinear model MSO neuron with low-threshold potassium channels reproduces the rate-dependent emphasis of rising vs. peak sound energy in ITD encoding; adaptation is equally effective in the model MSO. With adaptation maintained in the model SBCs, and membrane speed adjusted in the model MSO neurons, ‘left’ and ‘right’ populations of computationally efficient, linear model SBCs and MSO neurons reproduce this stronger weighting of ITD conveyed during rising sound energy at 600 Hz compared with 200 Hz. This hemispheric population model demonstrates a link between strong weighting of spatial information during rising sound energy and correct, unambiguous lateralisation of a speech source in reverberation.
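
The interplay between input adaptation and binaural coincidence detection described above can be caricatured in a few lines of code. The sketch below is not the published SBC/MSO model; it uses an invented depression-and-recovery resource rule with illustrative parameters to show why early pulses dominate the coincidence output, and more so at higher pulse rates.

```python
# Conceptual sketch (not the published model): synaptic depression at the
# cochlear-nucleus (SBC) stage gives early pulses a larger effective amplitude,
# so an MSO-like coincidence detector weights ITD cues carried by the rising
# portion of a sound most heavily, and more so at higher rates.
import numpy as np

def adapted_amplitudes(n_pulses, pulse_rate_hz, depletion=0.6, tau_rec=0.025):
    """Per-pulse input amplitude under a simple depression/recovery rule."""
    dt = 1.0 / pulse_rate_hz
    resources = 1.0
    amps = []
    for _ in range(n_pulses):
        amps.append(resources)
        resources *= 1.0 - depletion                                    # release depletes resources
        resources += (1.0 - resources) * (1.0 - np.exp(-dt / tau_rec))  # recovery between pulses
    return np.array(amps)

for rate in (200, 600):                                                 # pulse rates in Hz
    amps = adapted_amplitudes(10, rate)
    # Coincidence output grows with the product of left and right input
    # strengths; with matched ears that is amps**2, normalised here to weights.
    w = amps**2 / np.sum(amps**2)
    print(f"{rate} Hz: first-pulse share of coincidence weight = {w[0]:.2f}")
```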

Pubmed PDF Web

Rate and Temporal Coding of Regular and Irregular Pulse Trains in Auditory Midbrain of Normal-Hearing and Cochlear-Implanted Rabbits

Y Su, Y Chung, DFM Goodman, KE Hancock, B Delgutte

Publication date 01-06-2021


Although pitch is closely related to temporal periodicity, stimuli with a degree of temporal irregularity can evoke a pitch sensation in human listeners. However, the neural mechanisms underlying pitch perception for irregular sounds are poorly understood. Here, we recorded responses of single units in the inferior colliculus (IC) of normal-hearing (NH) rabbits to acoustic pulse trains with different amounts of random jitter in the inter-pulse intervals and compared them with responses to electric pulse trains delivered through a cochlear implant (CI) in a different group of rabbits. In both NH and CI animals, many IC neurons demonstrated tuning of firing rate to the average pulse rate (APR) that was robust against temporal jitter, although jitter tended to increase firing rates for APRs ≥ 1280 Hz. The strength and limiting frequency of spike synchronization to stimulus pulses were also comparable between periodic and irregular pulse trains, although there was a slight increase in synchronization at high APRs with CI stimulation. There were clear differences between CI and NH animals in both the range of APRs over which firing-rate tuning was observed and the prevalence of synchronized responses. These results suggest that the pitches of regular and irregular pulse trains are coded differently by IC neurons depending on the APR, the degree of irregularity, and the mode of stimulation. In particular, the temporal pitch produced by periodic pulse trains lacking spectral cues may be based on a rate code rather than a temporal code at higher APRs.
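
Spike synchronization to pulse trains of the kind described above is commonly quantified with vector strength. The sketch below computes that standard metric relative to the pulse preceding each spike, so it also applies to jittered trains; it is an illustration with toy data, not the authors' analysis, and the exact statistic used in the study may differ.

```python
# Sketch of a standard synchronization metric (vector strength) for spikes
# relative to a (possibly jittered) pulse train; toy data, not the study's.
import numpy as np

def vector_strength(spike_times, pulse_times):
    """Vector strength (0..1) of spike phases within the interval containing each spike."""
    idx = np.searchsorted(pulse_times, spike_times, side="right") - 1
    valid = (idx >= 0) & (idx < len(pulse_times) - 1)
    t0 = pulse_times[idx[valid]]
    period = np.diff(pulse_times)[idx[valid]]
    phase = 2.0 * np.pi * (spike_times[valid] - t0) / period
    return float(np.hypot(np.cos(phase).mean(), np.sin(phase).mean()))

# Toy example: spikes locked ~1 ms after each pulse of a 160-pps regular train.
rng = np.random.default_rng(1)
pulses = np.arange(0.0, 1.0, 1.0 / 160)
spikes = pulses[:-1] + 0.001 + 0.0002 * rng.standard_normal(pulses.size - 1)
print(f"vector strength ≈ {vector_strength(spikes, pulses):.2f}")
```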

Pubmed PDF Web

Examining the Factors that Contribute to Non-Monotonic Growth of the \(2f_1 - f_2\) Otoacoustic Emission in Humans

ML Mills, Y Shen, RH Withnell

Publication date 01-06-2021


Cubic distortion product otoacoustic emission input–output functions in humans show a complex pattern of growth. To further investigate the growth of the \(2f_1-f_2\) otoacoustic emission, magnitude and phase input–output functions were obtained from human subjects using a range of stimulus levels, frequencies, and frequency ratios. Three factors related to cochlear nonlinearity may produce non-monotonic input–output functions: a two-component interaction, an operating point shift, and two-tone suppression. To complement data interpretation, a local model of distortion product otoacoustic emission generation was fit to the magnitude spectrum of the averaged ear canal sound pressure recording to quantify operating point shift. Results obtained are consistent with non-monotonic growth occurring primarily as a result of two-tone suppression and/or a two-component interaction. These two mechanisms are expected to operate at different stimulus levels, with different signature magnitude and phase patterns, and are unlikely to overlap in producing non-monotonic growth. An operating point shift was suggested in three cases. These results support multiple factors contributing to the complexity of growth of the \(2f_1-f_2\) otoacoustic emission in humans and highlight the importance of looking at phase in addition to magnitude when interpreting distortion product otoacoustic emission growth.
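
For readers less familiar with the notation, the cubic distortion product frequency follows directly from the two primary frequencies. The values below (a primary ratio of f2/f1 = 1.2 and f2 = 4 kHz) are illustrative only, not the stimulus parameters of the study.

```python
# Worked example of the cubic distortion product frequency 2*f1 - f2.
# The 1.2 primary ratio and 4-kHz f2 are illustrative, not the study's values.
f2 = 4000.0          # higher primary frequency, Hz
f1 = f2 / 1.2        # lower primary, assuming f2/f1 = 1.2
dp = 2 * f1 - f2     # cubic distortion product frequency
print(f"f1 = {f1:.0f} Hz, f2 = {f2:.0f} Hz, 2f1 - f2 = {dp:.0f} Hz")
# prints: f1 = 3333 Hz, f2 = 4000 Hz, 2f1 - f2 = 2667 Hz
```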

Pubmed PDF Web

Visual Influences on Auditory Behavioral, Neural, and Perceptual Processes: A Review

C Opoku-Baah, AM Schoenhaut, SG Vassall, DA Tovar, R Ramachandran, MT Wallace

Publication date 20-05-2021


In a naturalistic environment, auditory cues are often accompanied by information from other senses, which can be redundant with or complementary to the auditory information. Although the multisensory interactions that arise from this combination of information and shape auditory function are seen across all sensory modalities, our greatest body of knowledge to date centers on how vision influences audition. In this review, we attempt to capture the current state of understanding of this topic. Following a general introduction, the review is divided into five sections. In the first section, we review the psychophysical evidence in humans regarding vision’s influence on audition, making the distinction between vision’s ability to enhance versus alter auditory performance and perception. Three examples are then described that serve to highlight vision’s ability to modulate auditory processes: spatial ventriloquism, cross-modal dynamic capture, and the McGurk effect. The final part of this section discusses models that have been built based on available psychophysical data and that seek to provide greater mechanistic insights into how vision can impact audition. The second section reviews the extant neuroimaging and far-field imaging work on this topic, with a strong emphasis on the roles of feedforward and feedback processes, on imaging insights into the causal nature of audiovisual interactions, and on the limitations of current imaging-based approaches. These limitations point to a greater need for machine-learning-based decoding approaches toward understanding how auditory representations are shaped by vision. The third section reviews the wealth of neuroanatomical and neurophysiological data from animal models that highlights audiovisual interactions at the neuronal and circuit level in both subcortical and cortical structures. It also speaks to the functional significance of audiovisual interactions for two critically important facets of auditory perception: scene analysis and communication.
The fourth section presents current evidence for alterations in audiovisual processes in three clinical conditions: autism, schizophrenia, and sensorineural hearing loss. These changes in audiovisual interactions are postulated to have cascading effects on higher-order domains of dysfunction in these conditions. The final section highlights ongoing work seeking to leverage our knowledge of audiovisual interactions to develop better remediation approaches to these sensory-based disorders, founded in concepts of perceptual plasticity in which vision has been shown to have the capacity to facilitate auditory learning.

Pubmed PDF Web

Effects of Several Therapeutic Agents on Mammalian Vestibular Function: Meclizine, Diazepam, and JNJ7777120

C Lee, TA Jones

Publication date 19-05-2021


Management of vestibular dysfunction may include treatment with medications that are thought to act to suppress vestibular function and reduce or eliminate abnormal sensitivity to head motions. The extent to which vestibular medications act centrally or peripherally is still debated. In this study, two commonly prescribed medications, meclizine and diazepam, and a candidate for future clinical use, JNJ7777120, were evaluated for their effects on short latency compound action potentials generated by the peripheral vestibular system and corresponding central neural relays (i.e., vestibular sensory-evoked potentials, VsEPs). The effects of the selected drugs developed slowly over the course of two hours in the mouse. Findings indicate that meclizine (600 mg/kg) and diazepam (> 60 mg/kg) can act on peripheral elements of the vestibular maculae whereas diazepam also acts most effectively on central gravity receptor circuits to exert its suppressive effects. The novel pharmacological agent JNJ7777120 (160 mg/kg) acts in the vestibular periphery to enhance macular responses to transient stimuli (VsEPs) while, hypothetically, suppressing macular responses to sustained or slowly changing stimuli.

Pubmed PDF Web

Otoconia Structure After Short- and Long-Duration Exposure to Altered Gravity

R Boyle, J Varelas

Publication date 18-05-2021


Vertebrates use weight-lending otoconia in the inner ear otolith organs to enable detection of their translation during self-generated or imposed movements and of changes in their orientation with respect to gravity. In spaceflight, otoconia are near weightless. It has been hypothesized that otoconia undergo structural remodeling after exposure to weightlessness to restore normal sensation. A structural remodeling is reasoned to occur for hypergravity as well, but in the opposite sense. We explored these hypotheses in several strains of mice within a Biospecimen Sharing Program in separate space- and ground-based projects. Mice were housed for 90 days on the International Space Station or for 13 days on two Shuttle Orbiter missions, or were exposed to 90 days of hindlimb unloading or to a net 2.38 g via centrifugation. Corresponding flight habitat and standard cage vivarium controls were used. Utricular otoliths were visually analyzed using scanning electron microscopy and, in selected samples, before and after focused ion beam (FIB) milling. Results suggest that a mass addition to the otoconia outer shell might occur after exposure to longer-duration spaceflight, but not after shorter flights or hindlimb unloading.
A destructive process is clearly seen after centrifugation: an ablation or thinning of the outer shell and cavitation of the inner core. This study provides a purely descriptive account of otoconia remodeling after exposures to altered gravity. The mechanism(s) underlying these processes must be identified and quantitatively validated to develop countermeasures to altered gravity levels during exploration missions.

Pubmed PDF Web

Hearing Impairment and Cognition in an Aging World

DS Powell, ES Oh, FR Lin, JA Deal

Publication date 18-05-2021


With the increasing number of older adults around the world, the overall number of dementia cases is expected to rise dramatically in the next 40 years. In 2020, nearly 6 million individuals in the USA were living with Alzheimer’s disease, the most common type of dementia, with anticipated growth to nearly 14 million by the year 2050. This increasing prevalence, coupled with a high societal burden, makes dementia prevention and intervention a medical and public health priority. As clinicians and researchers, we will continue to see more individuals with hearing loss and other comorbidities, including dementia. Epidemiologic evidence suggests an association between hearing loss and increased risk of dementia, presenting an opportunity for targeted intervention for hearing loss to play a fundamental role in dementia prevention. In this discussion, we summarize current research on the association between hearing loss and dementia and review potential causal mechanisms behind the association (e.g., the sensory-deprivation hypothesis, the information-degradation hypothesis, common cause). We emphasize key areas of research that might best inform our investigation of this potential causal association. These selected research priorities include examination of the causal mechanism, measurement of co-existing hearing loss and cognitive impairment, and the potential of aural rehabilitation. Addressing these research gaps, and how results are then translated for clinical use, is paramount for dementia prevention and the overall health of older adults.

Pubmed PDF Web

Reweighting of Binaural Localization Cues Induced by Lateralization Training

M Klingel, N Kopčo, B Laback

Publication date 06-05-2021


Normal-hearing listeners adapt to alterations in sound localization cues. This adaptation can result from the establishment of a new spatial map of the altered cues or from a stronger relative weighting of unaltered compared to altered cues. Such reweighting has been shown for monaural vs. binaural cues. However, studies attempting to reweight the two binaural cues, interaural differences in time (ITD) and level (ILD), yielded inconclusive results. This study investigated whether binaural-cue reweighting can be induced by lateralization training in a virtual audio-visual environment. Twenty normal-hearing participants, divided into two groups, completed the experiment, which consisted of 7 days of lateralization training preceded and followed by a test measuring the binaural-cue weights. Participants’ task was to lateralize 500-ms bandpass-filtered (2–4 kHz) noise bursts containing various combinations of spatially consistent and inconsistent binaural cues. During training, additional visual cues reinforced the azimuth corresponding to ITDs in one group and ILDs in the other group, and the azimuthal ranges of the binaural cues were manipulated group-specifically. Both groups showed a significant increase of the reinforced-cue weight from pre- to posttest, suggesting that participants reweighted the binaural cues in the expected direction. This reweighting occurred within the first training session. The results are relevant because binaural-cue reweighting likely occurs when normal-hearing listeners adapt to new acoustic environments. Reweighting might also be a factor underlying the low contribution of ITDs to sound localization in cochlear-implant listeners, as they typically do not experience reliable ITD cues with clinical devices.
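
One common way to extract such cue weights from lateralization responses to spatially inconsistent stimuli is a linear regression of the responses on the azimuths signalled separately by the ITD and the ILD. The sketch below uses simulated data and is not necessarily the analysis used in the study; the variable names and the simulated listener are ours.

```python
# Sketch of one common weight estimate (not necessarily the study's analysis):
# regress lateralization responses on the azimuths signalled by ITD and ILD.
import numpy as np

rng = np.random.default_rng(2)
n_trials = 200
az_itd = rng.uniform(-45, 45, n_trials)   # azimuth implied by the ITD (deg)
az_ild = rng.uniform(-45, 45, n_trials)   # azimuth implied by the ILD (deg)

# Simulated listener who weights ILD twice as strongly as ITD, plus noise.
response = 0.3 * az_itd + 0.6 * az_ild + rng.normal(0.0, 5.0, n_trials)

X = np.column_stack([az_itd, az_ild])
coef, *_ = np.linalg.lstsq(X, response, rcond=None)
w_itd, w_ild = coef / coef.sum()          # normalised relative cue weights
print(f"relative ITD weight = {w_itd:.2f}, relative ILD weight = {w_ild:.2f}")
```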

Pubmed PDF Web

The Panoramic ECAP Method: Estimating Patient-Specific Patterns of Current Spread and Neural Health in Cochlear Implant Users

C Garcia, T Goehring, S Cosentino, RE Turner, JM Deeks, T Brochier, T Rughooputh, M Bance, RP Carlyon

Publication date 23-04-2021


Knowledge of patient-specific neural excitation patterns from cochlear implants (CIs) can provide important information for optimizing efficacy and improving speech perception outcomes. The Panoramic ECAP (‘PECAP’) method (Cosentino et al. 2015) uses forward-masked electrically evoked compound action potentials (ECAPs) to estimate neural activation patterns of CI stimulation. The algorithm requires that ECAPs be measured for all combinations of probe and masker electrodes, exploiting the fact that ECAP amplitudes reflect the overlapping excitatory areas of both probes and maskers. Here we present an improved version of the PECAP algorithm that imposes biologically realistic constraints on the solution and that, unlike the previous version, produces detailed estimates of neural activation patterns by modelling current spread and neural health along the intracochlear electrode array, and that is capable of identifying multiple regions of poor neural health.
The algorithm was evaluated for reliability and accuracy in three ways: (1) computer-simulated current-spread and neural-health scenarios, (2) comparisons to psychophysical correlates of neural health and electrode-modiolus distances in human CI users, and (3) detection of simulated neural ‘dead’ regions (using forward masking) in human CI users. The PECAP algorithm reliably estimated the computer-simulated scenarios. A moderate but significant negative correlation between focused thresholds and the algorithm’s neural-health estimates was found, consistent with previous literature. It also correctly identified simulated ‘dead’ regions in all seven CI users evaluated. The revised PECAP algorithm provides an estimate of neural excitation patterns in CIs that could be used to inform and optimize CI stimulation strategies for individual patients in clinical settings.
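
The central assumption stated above, that a forward-masked ECAP amplitude reflects the overlap of the probe and masker excitation patterns, can be illustrated with a toy forward model. The sketch below builds such a probe-by-masker ECAP matrix from invented Gaussian current-spread patterns and an invented neural-health profile; it is not the PECAP estimation algorithm itself, which solves the inverse problem under additional constraints.

```python
# Toy forward model of the PECAP measurement (not the estimation algorithm):
# ECAP amplitude for a probe/masker pair is taken as the overlap of their
# excitation patterns along the cochlea, gated by a neural-health profile.
# Spread width, electrode layout, and the health profile are all invented.
import numpy as np

n_electrodes = 16
cochlea = np.linspace(0.0, 1.0, 200)              # normalised place axis
sites = np.linspace(0.05, 0.95, n_electrodes)     # electrode positions
spread = 0.08                                     # current-spread width (a.u.)

health = np.ones_like(cochlea)
health[(cochlea > 0.4) & (cochlea < 0.55)] = 0.2  # simulated poor-health region

def excitation(site):
    """Gaussian spread of excitation around one electrode, gated by health."""
    return health * np.exp(-0.5 * ((cochlea - site) / spread) ** 2)

patterns = np.array([excitation(s) for s in sites])

# Forward-masked ECAP matrix: overlap (pointwise minimum) of probe and masker
# patterns, integrated over place, for every probe/masker combination.
dx = cochlea[1] - cochlea[0]
ecap = np.array([[np.minimum(p, m).sum() * dx for m in patterns]
                 for p in patterns])
print(ecap.shape)                                 # (16, 16), probe x masker
```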

Pubmed PDF Web

Phosphorylation of MYL12 by Myosin Light Chain Kinase Regulates Cellular Shape Changes in Cochlear Hair Cells

R Oya, O Tsukamoto, T Sato, H Kato, K Matsuoka, K Oshima, T Kamakura, Y Ohta, T Imai, S Takashima, H Inohara

Publication date 20-04-2021


The organ of Corti is an auditory organ located in the cochlea, comprising hair cells (HCs) and supporting cells. Cellular shape changes of HCs are important for the development of auditory epithelia and hearing function. It was previously observed that HCs and inner sulcus cells (ISCs) demonstrate cellular shape changes similar to the apical constriction of neural epithelia. Apical constriction is induced via actomyosin cable contraction in the apical junctional complex and is necessary for the physiological function of the epithelium. Actomyosin cable contraction is mainly regulated by myosin regulatory light chain (MRLC) phosphorylation by myosin light chain kinase (MLCK). However, the MRLC and MLCK isoforms expressed in HCs and ISCs are unknown. Hence, we investigated the expression patterns and roles of MRLCs and MLCKs in HCs. Droplet digital PCR revealed that HCs expressed MYL12A/B and MYL9, which are non-muscle MRLC and smooth muscle MLCK (smMLCK), respectively. Immunofluorescence staining throughout the organ of Corti demonstrated that only MYL12 was expressed in the apical portion of HCs, whereas MYL12 and MYL9 were expressed in ISCs. In addition, purified MYL12B was phosphorylated by smMLCK in vitro, and the harvested HCs contained phosphorylated MYL12. Furthermore, ML-7, an inhibitor of smMLCK, reduced MYL12 phosphorylation, accompanied by an expansion of the cell area of outer HCs. In conclusion, MYL12 phosphorylation by smMLCK contributed to the apical constriction-like cellular shape change of HCs, possibly relating to the development of auditory epithelia and hearing function.

Pubmed PDF Web

An Alternative Explanation for Difficulties with Speech in Background Talkers: Abnormal Fusion of Vowels Across Fundamental Frequency and Ears

LAJ Reiss, MR Molis

Publication date 20-04-2021


Normal-hearing (NH) listeners use frequency cues, such as fundamental frequency (voice pitch), to segregate sounds into discrete auditory streams. However, many hearing-impaired (HI) individuals have abnormally broad binaural pitch fusion, which leads to fusion and averaging of the original monaural pitches into a single stream instead of segregation into two streams (Oh and Reiss, 2017), and which may similarly lead to fusion and averaging of speech streams across ears. In this study, using dichotic speech stimuli, we examined the relationship between speech fusion and vowel identification. Dichotic vowel perception was measured in NH and HI listeners, with across-ear fundamental frequency differences varied. Synthetic vowels /i/, /u/, /a/, and /ae/ were generated with three fundamental frequencies (F0) of 106.9, 151.2, and 201.8 Hz and presented dichotically through headphones. For HI listeners, stimuli were shaped according to NAL-NL2 prescriptive targets. Although the dichotic vowels presented were always different across ears, listeners were not informed that there were no single-vowel trials and could identify either one vowel or two different vowels on each trial. When there was no F0 difference between the ears, both NH and HI listeners were more likely to fuse the vowels and identify only one vowel. As ΔF0 increased, NH listeners increased the percentage of two-vowel responses, but HI listeners were more likely to continue to fuse the vowels even with large ΔF0. Binaural tone fusion range was significantly correlated with vowel fusion rates in both NH and HI listeners. Confusion patterns with dichotic vowels differed from those seen with concurrent monaural vowels, suggesting different mechanisms behind the errors. Together, the findings suggest that broad fusion leads to spectral blending across ears, even with different ΔF0, and may hinder stream segregation and the understanding of speech in the presence of competing talkers.

Pubmed PDF Web

A Bridge over Troubled Listening: Improving Speech-in-Noise Perception by Children with Dyslexia

T Van Hirtum, P Ghesquière, J Wouters

Publication date 16-04-2021


Developmental dyslexia is most commonly associated with phonological processing difficulties. However, children with dyslexia may experience poor speech-in-noise perception as well. Although there is an ongoing debate about whether a speech perception deficit is inherent to dyslexia or acts as an aggravating risk factor that indirectly compromises learning to read, improving speech perception might boost reading-related skills and reading acquisition. In the current study, we evaluated an advanced speech-processing strategy applied in auditory prostheses, envelope enhancement (EE), to promote and eventually normalize the speech perception of school-aged children with dyslexia. The EE strategy automatically detects and emphasizes onset cues and consequently reinforces the temporal structure of the speech envelope. Our results confirmed speech-in-noise perception difficulties in children with dyslexia. However, we found that exaggerating temporal “landmarks” of the speech envelope (i.e., amplitude rise times and modulations) by using EE passively and instantaneously improved speech perception in noise for children with dyslexia. Moreover, the benefit derived from EE was large enough to completely bridge the initial gap between children with dyslexia and their typical-reading peers. Taken together, the beneficial outcome of EE suggests an important contribution of the temporal structure of the envelope to the speech-in-noise perception difficulties in dyslexia, providing an interesting foundation for future intervention studies based on auditory and speech-rhythm training.
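
To make the idea of envelope enhancement concrete, the sketch below amplifies a signal wherever its smoothed temporal envelope is rising, which sharpens onset "landmarks". It is a deliberately simplified illustration, not the EE strategy evaluated in the study; the smoothing window, gain, and toy stimulus are arbitrary choices.

```python
# Simplified illustration of envelope enhancement (not the strategy tested in
# the study): detect rises in the temporal envelope and boost the signal there,
# emphasising onset cues while leaving the temporal fine structure in place.
import numpy as np

def enhance_onsets(signal, fs, smooth_ms=8.0, gain=2.0):
    """Amplify the signal wherever its smoothed envelope is rising."""
    win = max(1, int(fs * smooth_ms / 1000))
    kernel = np.ones(win) / win
    envelope = np.convolve(np.abs(signal), kernel, mode="same")
    onset = np.maximum(np.diff(envelope, prepend=envelope[0]), 0.0)
    onset /= onset.max() + 1e-12                   # normalise onset strength
    return signal * (1.0 + gain * onset)

# Toy "syllable": a 500-Hz tone that switches on abruptly at 100 ms.
fs = 16000
t = np.arange(0.0, 0.3, 1.0 / fs)
syllable = np.sin(2 * np.pi * 500 * t) * (t > 0.1)
enhanced = enhance_onsets(syllable, fs)
print(f"peak amplification near the onset: "
      f"{np.max(np.abs(enhanced)) / np.max(np.abs(syllable)):.1f}x")
```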

Pubmed PDF Web

Super-enhancer Acquisition Drives FOXC2 Expression in Middle Ear Cholesteatoma

T Yamamoto-Fukuda, N Akiyama, H Kojima

Publication date 16-04-2021


Distinct histone modifications regulate gene expression in certain diseases, but little is known about histone epigenetics in middle ear cholesteatoma. Histone acetylation is known to destabilize the nucleosome and chromatin structure and to induce gene activation, and an association between histone acetylation and chronic inflammatory diseases has been indicated in recent studies. In this study, we immunohistochemically examined the localization of histone H3 acetylation at lysines 9, 14, 18, 23, and 27 in paraffin-embedded sections of human middle ear cholesteatoma (cholesteatoma) tissues and in the temporal bones of an animal model of cholesteatoma. We found a significant increase in the expression levels of H3K27ac both in human cholesteatoma tissues and in the animal model. In genetics, super-enhancers are clusters of enhancers that drive the transcription of genes involved in cell identity; because super-enhancers were originally defined using the H3K27ac signal, we used H3K27ac chromatin immunoprecipitation followed by sequencing to map the active cis-regulatory landscape in human cholesteatoma. Based on these results, we identified increased H3K27ac signals forming super-enhancers at the FOXC2 loci, together with increased FOXC2 protein, in cholesteatoma. Recent studies have indicated that a menin-MLL inhibitor can suppress tumor growth through the control of histone H3 modification, and in this study we demonstrated that the expression of FOXC2 was inhibited by a menin-MLL inhibitor in vivo. These findings indicate that FOXC2 expression driven by histone modifications promotes the pathogenesis of cholesteatoma and suggest that FOXC2 may be a therapeutic target in cholesteatoma.

Pubmed PDF Web

Transient Delivery of a KCNQ2/3-Specific Channel Activator 1 Week After Noise Trauma Mitigates Noise-Induced Tinnitus

L Marinos, S Kouvaros, B Bizup, B Hambach, P Wipf, T Tzounopoulos

Publication date 01-04-2021


Exposure to loud noise can cause hearing loss and tinnitus in mice and humans. In mice, one major mechanism underlying noise-induced tinnitus is hyperactivity of auditory brainstem neurons, due, at least in part, to decreased Kv7.2/3 (KCNQ2/3) potassium channel activity. In our previous studies, we used a reflex-based mouse model of tinnitus and showed that administration of a non-specific KCNQ channel activator immediately after noise trauma prevented the development of noise-induced tinnitus, assessed 1 week after trauma. Subsequently, we developed RL-81, a very potent and highly specific activator of KCNQ2/3 channels. Here, to test the timing window within which RL-81 prevents tinnitus in mice, we modified and employed an operant animal model of tinnitus, in which mice are trained to move in response to sound but to withhold movement in silence. Mice with behavioral evidence of tinnitus are expected to move in silence. We validated this mouse model by testing the effect of salicylate, which is known to induce tinnitus. We found that transient administration of RL-81 1 week after noise exposure did not affect hearing loss but significantly reduced the percentage of mice with behavioral evidence of tinnitus, assessed 2 weeks after noise exposure. Our results indicate that RL-81 is a promising drug candidate for further development for the treatment of noise-induced tinnitus.

Pubmed PDF Web
