Journal of the Association for Research in Otolaryngology

Sound Localization in Preweanling Mice Was More Severely Affected by Deleting the Kcna1 Gene Compared to Deleting Kcna2, and a Curious Inverted-U Course of Development That Appeared to Exceed Adult Performance Was Observed in All Groups

13-08-2019 – JR Ison, PD Allen, BL Tempel, HM Brew

Journal Article

Abstract The submillisecond acuity for detecting rapid spatial and temporal fluctuations in acoustic stimuli observed in humans and laboratory animals depends in part on select groups of auditory neurons that preserve synchrony from the ears to the binaural nuclei in the brainstem. These fibers have specialized synapses and axons that use a low-threshold voltage-activated outward current, IKL, conducted through Kv1 potassium ion channels. These are in turn coupled with HCN channels that express a mixed-cation inward current, IH, to support precise synchronized firing. The behavioral evidence is that when the respective Kcna1 or HCN1 genes are absent in adult mice, the results are weak startle reflexes, slow responding to noise offsets, and poor sound localization. The present behavioral experiments were motivated by an in vitro study reporting increased IKL in an auditory nucleus in Kcna2−/− mice lacking the Kv1.2 subunit, suggesting that Kcna2−/− mice might perform better than Kcna2+/+ mice. Because Kcna2−/− mice have only a 17–18-day lifespan, we compared both preweanling Kcna2−/− vs. Kcna2+/+ mice and Kcna1−/− vs. Kcna1+/+ mice at P12–P17/18; then, the remaining mice were tested at P23/P25. Both null mutant strains had a stunted physique, but the Kcna1−/− mice had severe behavioral deficits while those in Kcna2−/− mice were relatively few and minor. The in vitro increase of IKL could have resulted from Kv1.1 subunits substituting for Kv1.2 subunits and the loss of the inhibitory “managerial” effect of Kv1.2 on Kv1.1. However, any increased neuronal synchronicity that accompanies increased IKL may not have been enough to affect behavior. All mice performed unusually well on the early spatial tests, but then fell towards adult levels. This unexpected effect may reflect a shift from summated independent monaural pathways to integrated binaural processing, as has been suggested for similar observations in human infants.

Morphological Immaturity of the Neonatal Organ of Corti and Associated Structures in Humans

12-08-2019 – SWF Meenderink, CA Shera, MD Valero, MC Liberman, C Abdala

Journal Article

Abstract Although anatomical development of the cochlear duct is thought to be complete by term birth, human newborns continue to show postnatal immaturities in functional measures such as otoacoustic emissions (OAEs). Some of these OAE immaturities are no doubt influenced by incomplete maturation of the external and middle ears in infants; however, the observed prolongation of distortion-product OAE phase-gradient delays in newborns cannot readily be explained by conductive factors. This functional immaturity suggests that the human cochlea at birth may lack fully adult-like traveling-wave motion. In this study, we analyzed temporal-bone sections at the light microscopic level in newborns and adults to quantify dimensions and geometry of cochlear structures thought to influence the mechanical response of the cochlea. Contrary to common belief, results show multiple morphological immaturities along the length of the newborn spiral, suggesting that important refinements in the size and shape of the sensory epithelium and associated structures continue after birth. Specifically, immaturities of the newborn basilar membrane and organ of Corti are consistent with a more compliant and less massive cochlear partition, which could produce longer DPOAE delays and a shifted frequency-place map in the neonatal ear.

A Physiologically Inspired Model for Solving the Cocktail Party Problem

07-08-2019 – KF Chou, J Dong, HS Colburn, K Sen

Journal Article

Abstract At a cocktail party, we can broadly monitor the entire acoustic scene to detect important cues (e.g., our names being called, or the fire alarm going off), or selectively listen to a target sound source (e.g., a conversation partner). It has recently been observed that individual neurons in the avian field L (analog to the mammalian auditory cortex) can display broad spatial tuning to single targets and selective tuning to a target embedded in spatially distributed sound mixtures. Here, we describe a model inspired by these experimental observations and apply it to process mixtures of human speech sentences. This processing is realized in the neural spiking domain. It converts binaural acoustic inputs into cortical spike trains using a multi-stage model composed of a cochlear filter-bank, a midbrain spatial-localization network, and a cortical network. The output spike trains of the cortical network are then converted back into an acoustic waveform, using a stimulus reconstruction technique. The intelligibility of the reconstructed output is quantified using an objective measure of speech intelligibility. We apply the algorithm to single and multi-talker speech to demonstrate that the physiologically inspired algorithm is able to achieve intelligible reconstruction of an “attended” target sentence embedded in two other non-attended masker sentences. The algorithm is also robust to masker level and displays performance trends comparable to humans. The ideas from this work may help improve the performance of hearing assistive devices (e.g., hearing aids and cochlear implants), speech-recognition technology, and computational algorithms for processing natural scenes cluttered with spatially distributed acoustic objects.
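The front end of the model described above is a cochlear filter-bank. The abstract does not specify its implementation, but a minimal gammatone-style filterbank (a common choice for such front ends, not necessarily the authors' own) can be sketched as follows; the channel count, center frequencies, sampling rate, and filter parameters are illustrative assumptions.

```python
import numpy as np

def erb(f):
    """Equivalent rectangular bandwidth (Glasberg & Moore approximation), in Hz."""
    return 24.7 + 0.108 * f

def gammatone_filterbank(x, fs, center_freqs, dur=0.025, order=4, b=1.019):
    """Filter signal x through a bank of gammatone filters (FIR approximation via
    truncated impulse responses); returns one bandpassed signal per channel."""
    t = np.arange(int(dur * fs)) / fs
    outputs = []
    for fc in center_freqs:
        ir = t ** (order - 1) * np.exp(-2 * np.pi * b * erb(fc) * t) * np.cos(2 * np.pi * fc * t)
        ir /= np.max(np.abs(ir)) + 1e-12          # peak-normalize each impulse response
        outputs.append(np.convolve(x, ir, mode="same"))
    return np.stack(outputs)

# Example: decompose a noise burst into a handful of channels.
fs = 16000
x = np.random.default_rng(0).standard_normal(fs // 2)
cfs = np.geomspace(200, 6000, 8)                  # 8 log-spaced channels (illustrative)
bands = gammatone_filterbank(x, fs, cfs)
print(bands.shape)                                # (8, 8000)
```

The midbrain spatial-localization network, the cortical network, and the stimulus-reconstruction stage would then operate on per-channel outputs of this kind.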

Pitch Matching Adapts Even for Bilateral Cochlear Implant Users with Relatively Small Initial Pitch Differences Across the Ears

05-08-2019 – JM Aronoff, HE Staisloff, A Kirchner, DH Lee, J Stelmach

Journal Article

Abstract There is often a mismatch for bilateral cochlear implant (CI) users between the electrodes in the two ears that receive the same frequency allocation and the electrodes that, when stimulated, yield the same pitch. Studies with CI users who have extreme mismatches between the two ears show that adaptation occurs in terms of pitch matching, reducing the difference between which electrodes receive the same frequency allocation and which ones produce the same pitch. The considerable adaptation that occurs for these extreme cases suggests that adaptation should be sufficient to overcome the relatively minor mismatches seen with typical bilateral CI users. However, even those with many years of bilateral CI use continue to demonstrate a mismatch. This may indicate that adaptation only occurs when there are large mismatches. Alternatively, it may indicate that adaptation occurs regardless of the magnitude of the mismatch, but that adaptation is proportional to the magnitude of the mismatch, and thus never fully counters the original mismatch. To investigate this, six bilateral CI users with initial pitch-matching mismatches of less than 3 mm completed a pitch-matching task near the time of activation, 6 months after activation, and 1 year after activation. Despite relatively small initial mismatches, the results indicated that adaptation still occurred.

Investigating the Effect of Cochlear Synaptopathy on Envelope Following Responses Using a Model of the Auditory Nerve

01-08-2019 – G Encina-Llamas, JM Harte, T Dau, B Shinn-Cunningham, B Epp

Journal Article

Abstract The healthy auditory system enables communication in challenging situations with high levels of background noise. Yet, despite normal sensitivity to pure tones, many listeners complain about having difficulties in such situations. Recent animal studies demonstrated that noise overexposure that produces temporary threshold shifts can cause the loss of auditory nerve (AN) fiber synapses (i.e., cochlear synaptopathy, CS), which appears to predominantly affect medium- and low-spontaneous rate (SR) fibers. In the present study, envelope following response (EFR) magnitude-level functions were recorded in normal hearing (NH) threshold and mildly hearing-impaired (HI) listeners with thresholds elevated above 2 kHz. EFRs were elicited by sinusoidally amplitude modulated (SAM) tones presented in quiet with a carrier frequency of 2 kHz, modulated at 93 Hz, and modulation depths of 0.85 (deep) and 0.25 (shallow). While EFR magnitude-level functions for deeply modulated tones were similar for all listeners, EFR magnitudes for shallowly modulated tones were reduced at medium stimulation levels in some NH threshold listeners and saturated in all HI listeners for the whole level range. A phenomenological model of the AN was used to investigate the extent to which hair-cell dysfunction and/or CS could explain the trends observed in the EFR data. Hair-cell dysfunction alone, including postulated elevated hearing thresholds at extended high frequencies (EHF) beyond 8 kHz, could not account for the recorded EFR data. Postulated CS led to simulations generally consistent with the recorded data, but a loss of all types of AN fibers was required within the model framework. The effects of off-frequency contributions (i.e., away from the characteristic place of the stimulus) and the differential loss of different AN fiber types on EFR magnitude-level functions were analyzed. When using SAM tones in quiet as the stimulus, model simulations suggested that (1) EFRs are dominated by the activity of high-SR fibers at all stimulus intensities, and (2) EFRs at medium-to-high stimulus levels are dominated by off-frequency contributions.
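The SAM stimuli described above are fully specified by carrier frequency, modulation frequency, and modulation depth. The sketch below (not the authors' AN model) synthesizes the two stimuli and reads off an envelope-locked spectral component at the 93-Hz modulation frequency after a crude half-wave rectifier standing in for neural transduction; the sampling rate, duration, and rectifier stand-in are assumptions.

```python
import numpy as np

FS = 48000                 # sampling rate (assumption)
FC, FM = 2000.0, 93.0      # carrier and modulation frequencies from the study

def sam_tone(m, dur=1.0, fs=FS, fc=FC, fm=FM):
    """Sinusoidally amplitude-modulated tone with modulation depth m."""
    t = np.arange(int(dur * fs)) / fs
    return (1.0 + m * np.sin(2 * np.pi * fm * t)) * np.sin(2 * np.pi * fc * t)

def envelope_component(x, freq, fs=FS):
    """Crude envelope-following measure: half-wave rectify (stand-in for neural
    transduction), then read off the spectral magnitude at `freq`."""
    env = np.maximum(x, 0.0)
    spec = np.abs(np.fft.rfft(env * np.hanning(len(env))))
    freqs = np.fft.rfftfreq(len(env), 1.0 / fs)
    return spec[np.argmin(np.abs(freqs - freq))]

for depth in (0.85, 0.25):  # "deep" and "shallow" modulation depths from the study
    print(depth, envelope_component(sam_tone(depth), FM))
```

As expected, the envelope-locked component scales with modulation depth, which is the quantity the EFR magnitude-level functions track.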

A Site-Selection Strategy Based on Polarity Sensitivity for Cochlear Implants: Effects on Spectro-Temporal Resolution and Speech Perception

01-08-2019 – T Goehring, A Archer-Boyd, JM Deeks, JG Arenberg, RP Carlyon

Journal Article

ABSTRACT Thresholds of asymmetric pulses presented to cochlear implant (CI) listeners depend on polarity in a way that differs across subjects and electrodes. It has been suggested that lower thresholds for cathodic-dominant compared to anodic-dominant pulses reflect good local neural health. We evaluated the hypothesis that this polarity effect (PE) can be used in a site-selection strategy to improve speech perception and spectro-temporal resolution. Detection thresholds were measured in eight users of Advanced Bionics CIs for 80-pps, triphasic, monopolar pulse trains where the central high-amplitude phase was either anodic or cathodic. Two experimental MAPs were then generated for each subject by deactivating the five electrodes with either the highest or the lowest PE magnitudes (cathodic minus anodic threshold). Performance with the two experimental MAPs was evaluated using two spectro-temporal tests (Spectro-Temporal Ripple for Investigating Processor Effectiveness (STRIPES; Archer-Boyd et al. in J Acoust Soc Am 144:2983–2997, 2018) and Spectral-Temporally Modulated Ripple Test (SMRT; Aronoff and Landsberger in J Acoust Soc Am 134:EL217–EL222, 2013)) and with speech recognition in quiet and in noise. Performance was also measured with an experimental MAP that used all electrodes, similar to the subjects’ clinical MAP. The PE varied strongly across subjects and electrodes, with substantial magnitudes relative to the electrical dynamic range. There were no significant differences in performance between the three MAPs at group level, but there were significant effects at subject level—not all of which were in the hypothesized direction—consistent with previous reports of a large variability in CI users’ performance and in the potential benefit of site-selection strategies. The STRIPES but not the SMRT test successfully predicted which strategy produced the best speech-in-noise performance on a subject-by-subject basis. The average PE across electrodes correlated significantly with subject age, duration of deafness, and speech perception scores, consistent with a relationship between PE and neural health. These findings motivate further investigations into site-specific measures of neural health and their application to CI processing strategies.
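The site-selection logic itself is simple to state: compute the per-electrode polarity effect (cathodic minus anodic threshold) and deactivate the five electrodes at either extreme. A minimal sketch with invented threshold values (the array size, units, and numbers are assumptions for illustration, not subject data):

```python
import numpy as np

def polarity_effects(cathodic_thr, anodic_thr):
    """Polarity effect per electrode, defined in the study as cathodic minus anodic threshold."""
    return np.asarray(cathodic_thr) - np.asarray(anodic_thr)

def select_map(pe, n_off=5, drop="highest"):
    """Indices of electrodes kept after deactivating the n_off electrodes with the
    highest (or lowest) polarity effect."""
    order = np.argsort(pe)                                   # ascending PE
    off = order[-n_off:] if drop == "highest" else order[:n_off]
    return sorted(set(range(len(pe))) - set(off.tolist()))

# Hypothetical thresholds for a 16-electrode array (arbitrary clinical units).
rng = np.random.default_rng(0)
cath = rng.normal(40, 3, 16)
anod = cath + rng.normal(0, 2, 16)
pe = polarity_effects(cath, anod)
print("keep (drop highest PE):", select_map(pe, drop="highest"))
print("keep (drop lowest PE): ", select_map(pe, drop="lowest"))
```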

The fMRI Data of Thompson et al. (2006) Do Not Constrain How the Human Midbrain Represents Interaural Time Delay

01-08-2019 – RM Stern, HS Colburn, LR Bernstein, C Trahiotis

Journal Article

Abstract This commentary provides an alternate interpretation of the fMRI data that were presented in a communication to the journal Nature Neuroscience (Thompson et al., Nat. Neurosci. 9:1096–1098, 2006). The authors argued that their observations demonstrated that traditional models of binaural hearing which incorporate “internal delays,” such as the coincidence-counting mechanism proposed by Jeffress and quantified by Colburn, are invalid, and that a new model for human interaural time delay processing must be developed. We argue that the fMRI data presented do not strongly favor either the refutation or the retention of the traditional models, although they may be useful in constraining the physiological sites of various processing stages. The conclusions of Thompson et al. are based on the locations of maximal activity in the midbrain in response to selected binaural signals. These locations are inconsistent with well-known perceptual attributes of the stimuli under consideration, as is noted by the authors, which suggests that further processing is involved in forming the percept of subjective lateral position.

Exploring the Role of Medial Olivocochlear Efferents on the Detection of Amplitude Modulation for Tones Presented in Noise

01-08-2019 – M Wojtczak, AM Klang, NT Torunsky

Journal Article

Abstract The medial olivocochlear reflex has been hypothesized to improve the detection and discrimination of dynamic signals in noisy backgrounds. This hypothesis was tested here by comparing behavioral outcomes with otoacoustic emissions. The effects of a precursor on amplitude-modulation (AM) detection were measured for a 1- and 6-kHz carrier at levels of 40, 60, and 80 dB SPL in a two-octave-wide noise masker with a level designed to produce poor, but above-chance, performance. Three types of precursor were used: a two-octave noise band, an inharmonic complex tone, and a pure tone. Precursors had the same overall level as the simultaneous noise masker that immediately followed the precursor. The noise precursor produced a large improvement in AM detection for both carrier frequencies and at all three levels. The complex tone produced a similarly large improvement in AM detection at the highest level but had a smaller effect for the two lower carrier levels. The tonal precursor did not significantly affect AM detection in noise. Comparisons of behavioral thresholds and medial olivocochlear efferent effects on stimulus frequency otoacoustic emissions measured with similar stimuli did not support the hypothesis that efferent-based reduction of cochlear responses contributes to the precursor effects on AM detection.

Evaluating Psychophysical Polarity Sensitivity as an Indirect Estimate of Neural Status in Cochlear Implant Listeners

01-08-2019 – KN Jahn, JG Arenberg

Journal Article

Abstract The physiological integrity of spiral ganglion neurons (SGNs) is presumed to influence cochlear implant (CI) outcomes, but it is difficult to measure neural health in CI listeners. Modeling data suggest that, when peripheral processes have degenerated, anodic stimulation may be a more effective neural stimulus than cathodic stimulation. The primary goal of the present study was to evaluate the emerging theory that polarity sensitivity reflects neural health in CI listeners. An ideal in vivo estimate of neural integrity should vary independently of other factors known to influence the CI electrode-neuron interface, such as electrode position and tissue impedances. Thus, the present analyses quantified the relationships between polarity sensitivity and (1) electrode position estimated via computed tomography imaging, (2) intracochlear resistance estimated via electrical field imaging, and (3) focused (steered quadrupolar) behavioral thresholds, which are believed to reflect a combination of local neural health, electrode position, and intracochlear resistance. Eleven adults with Advanced Bionics devices participated. To estimate polarity sensitivity, electrode-specific behavioral thresholds were measured in response to monopolar, triphasic pulses where the central high-amplitude phase was either anodic (CAC) or cathodic (ACA). The polarity effect was defined as the difference in threshold response to the ACA compared to the CAC stimulus. Results indicated that the polarity effect was not related to electrode-to-modiolus distance, electrode scalar location, or intracochlear resistance. Large, positive polarity effects, which may indicate SGN degeneration, were associated with relatively high focused behavioral thresholds. The polarity effect explained a significant portion of the variation in focused thresholds, even after controlling for electrode position and intracochlear resistance. Overall, these results provide support for the theory that the polarity effect may reflect neural integrity in CI listeners. Evidence from this study supports further investigation into the use of polarity sensitivity for optimizing individual CI programming parameters.
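The incremental-variance claim above (that the polarity effect explains variation in focused thresholds beyond electrode position and intracochlear resistance) can be illustrated with a simple hierarchical regression. The sketch below uses synthetic data; the variable names, units, and effect sizes are invented, and the authors' actual statistical analysis may differ.

```python
import numpy as np

def r_squared(y, X):
    """Ordinary least-squares R^2 of y regressed on X (intercept added)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - resid.var() / y.var()

# Hypothetical per-electrode data: modiolar distance (mm), intracochlear
# resistance (arbitrary units), polarity effect (dB), focused thresholds (dB).
rng = np.random.default_rng(1)
n = 60
distance = rng.uniform(0.2, 1.5, n)
resistance = rng.uniform(0.5, 2.0, n)
pe = rng.normal(0.0, 3.0, n)
thresholds = 30 + 5 * distance + 2 * resistance + 0.8 * pe + rng.normal(0, 1, n)

base = r_squared(thresholds, np.column_stack([distance, resistance]))
full = r_squared(thresholds, np.column_stack([distance, resistance, pe]))
print(f"R^2 without PE: {base:.2f}, with PE: {full:.2f}, increment: {full - base:.2f}")
```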

Virtual Rhesus Labyrinth Model Predicts Responses to Electrical Stimulation Delivered by a Vestibular Prosthesis

01-08-2019 – A Hedjoudje, R Hayden, C Dai, J Ahn, M Rahman, F Risi, J Zhang, S Mori, CC Della Santina

Journal Article

Abstract To better understand the spread of prosthetic current in the inner ear and to facilitate design of electrode arrays and stimulation protocols for a vestibular implant system intended to restore sensation after loss of vestibular hair cell function, we created a model of the primate labyrinth. Because the geometry of the implanted ear is complex, accurately modeling effects of prosthetic stimuli on vestibular afferent activity required a detailed representation of labyrinthine anatomy. Model geometry was therefore generated from three-dimensional (3D) reconstructions of a normal rhesus temporal bone imaged using micro-MRI and micro-CT. For systematically varied combinations of active and return electrode location, the extracellular potential field during a biphasic current pulse was computed using finite element methods. Potential field values served as inputs to stochastic, nonlinear dynamic models for each of 2415 vestibular afferent axons, each with unique origin on the neuroepithelium and spiking dynamics based on a modified Smith and Goldberg model. We tested the model by comparing predicted and actual 3D vestibulo-ocular reflex (VOR) responses for eye rotation elicited by prosthetic stimuli. The model was individualized for each implanted animal by placing model electrodes in the standard labyrinth geometry based on CT localization of actual implanted electrodes. Eye rotation 3D axes were predicted from relative proportions of model axons excited within each of the three ampullary nerves, and predictions were compared to archival eye movement response data measured in three alert rhesus monkeys using 3D scleral coil oculography. Multiple empirically observed features emerged as properties of the model, including effects of changing active and return electrode position. The model predicts improved prosthesis performance when the reference electrode is in the labyrinth’s common crus (CC) rather than outside the temporal bone, especially if the reference electrode is inserted nearly to the junction of the CC with the vestibule. Extension of the model to human anatomy should facilitate optimal design of electrode arrays for clinical application.
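One simple way to see how "relative proportions of model axons excited within each of the three ampullary nerves" could map onto a predicted 3D eye-rotation axis is as an activation-weighted sum of canal rotation axes. The sketch below is only an illustration of that mapping; the axis vectors and activation fractions are placeholders, not the model's subject-specific anatomy or its actual read-out.

```python
import numpy as np

# Illustrative unit rotation axes for the three semicircular canals in head
# coordinates (placeholder values; the model uses reconstructed rhesus anatomy).
CANAL_AXES = {
    "horizontal": np.array([0.0, 0.0, 1.0]),
    "anterior":   np.array([0.707, 0.0, 0.707]),
    "posterior":  np.array([-0.707, 0.0, 0.707]),
}

def predicted_vor_axis(fraction_excited):
    """Predicted eye-rotation axis: canal axes weighted by the fraction of model
    afferents excited in each ampullary nerve, normalized to unit length."""
    axis = sum(fraction_excited[name] * ax for name, ax in CANAL_AXES.items())
    return axis / np.linalg.norm(axis)

# Example: a stimulus that mostly excites the horizontal ampullary nerve,
# with some current spread to the anterior nerve.
print(predicted_vor_axis({"horizontal": 0.8, "anterior": 0.3, "posterior": 0.05}))
```

Misalignment between this predicted axis and the implanted canal's own axis is the kind of effect the authors compare against 3D scleral coil recordings.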

Neural Encoding of Amplitude Modulations in the Human Efferent System

01-08-2019 – SK Mishra, M Biswal

Journal Article

Abstract Most natural sounds, including speech, exhibit temporal amplitude fluctuations. This information is encoded as amplitude modulations (AM)—essential for auditory and speech perception. The neural representation of AM has been studied at various stages of the ascending auditory system from the auditory nerve to the cortex. In contrast, research on neural coding of AM in the efferent pathway has been extremely limited. The objective of this study was to investigate the encoding of AM signals in the medial olivocochlear system by measuring the modulation transfer functions of the efferent response in humans. A secondary goal was to replicate the controversial findings from the literature that efferent stimulation produces larger effects for the AM elicitor with 100 Hz modulation frequency in comparison with the unmodulated elicitor. The efferent response was quantified by measuring changes in stimulus-frequency otoacoustic emission magnitude due to various modulated and unmodulated elicitors. Unmodulated, broadband noise elicitors yielded either slightly larger or similar efferent responses relative to modulated elicitors depending on the modulation frequency. Efferent responses to the unmodulated and modulated elicitors with 100 Hz modulation frequency were not significantly different. The efferent system encoding of AM sounds—modulation transfer functions—can be modeled with a first-order Butterworth low-pass filter with different cutoff frequencies for ipsilateral and contralateral elicitors. The ipsilateral efferent pathway showed a greater sensitivity to AM information compared to the contralateral pathway. Efferent modulation transfer functions suggest that the ability of the system to follow AM decreases with increasing modulation frequency and that efferents may not be fully operating on the envelope of speech.
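The reported modulation transfer function model is a first-order Butterworth low-pass filter, whose magnitude response is simply 1 / sqrt(1 + (fm/fc)^2). The sketch below evaluates it at a few modulation frequencies; the cutoff values for the ipsilateral and contralateral pathways are placeholders, since the abstract does not give the fitted cutoffs.

```python
import numpy as np

def first_order_lowpass(fm, cutoff):
    """Magnitude of a first-order Butterworth low-pass filter at modulation
    frequency fm (Hz) for a given cutoff frequency (Hz)."""
    return 1.0 / np.sqrt(1.0 + (fm / cutoff) ** 2)

# Placeholder cutoffs; the study reports different cutoffs for the two pathways
# but the specific fitted values are not stated in the abstract.
mod_freqs = np.array([25, 50, 100, 200, 400], dtype=float)
for label, fc in [("ipsilateral", 100.0), ("contralateral", 50.0)]:
    gains_db = 20 * np.log10(first_order_lowpass(mod_freqs, fc))
    print(label, np.round(gains_db, 1))
```

The roll-off with increasing modulation frequency is the model's expression of the finding that the efferent system follows slow envelopes better than fast ones.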

AAV-Mediated Neurotrophin Gene Therapy Promotes Improved Survival of Cochlear Spiral Ganglion Neurons in Neonatally Deafened Cats: Comparison of AAV2-hBDNF and AAV5-hGDNF

01-08-2019 – PA Leake, SJ Rebscher, C Doré, O Akil

Journal Article

Abstract Outcomes with contemporary cochlear implants (CI) depend partly upon the survival and condition of the cochlear spiral ganglion (SG) neurons. Previous studies indicate that CI stimulation can ameliorate SG neural degeneration after deafness, and brain-derived neurotrophic factor (BDNF) delivered by an osmotic pump can further improve neural survival. However, direct infusion of BDNF elicits undesirable side effects, and osmotic pumps are impractical for clinical application. In this study, we explored the potential for two adeno-associated viral vectors (AAV) to elicit targeted neurotrophic factor expression in the cochlea and promote improved SG and radial nerve fiber survival. Juvenile cats were deafened prior to hearing onset by systemic aminoglycoside injections. Auditory brainstem responses showed profound hearing loss by 16–18 days postnatal. At ~4 weeks of age, AAV2-GFP (green fluorescent protein), AAV5-GFP, AAV2-hBDNF, or AAV5-hGDNF (glial-derived neurotrophic factor) was injected through the round window unilaterally. For GFP immunofluorescence, animals were studied ~4 weeks post-injection to assess cell types transfected and their distributions. AAV2-GFP immunofluorescence demonstrated strong expression of the GFP reporter gene in residual inner (IHCs), outer hair cells (OHCs), inner pillar cells, and in some SG neurons throughout the cochlea. AAV5-GFP elicited robust transduction of IHCs and some SG neurons, but few OHCs and supporting cells. After AAV-neurotrophic factor injections, animals were studied ~3 months post-injection to evaluate neural survival. AAV5-hGDNF elicited a modest neurotrophic effect, with 6% higher SG density, but had no trophic effect on radial nerve fiber survival, and undesirable ectopic fiber sprouting occurred. AAV2-hBDNF elicited a similar 6% increase in SG survival, but also resulted in greatly improved radial nerve fiber survival, with no ectopic fiber sprouting. A further study assessed whether AAV2-hBDNF neurotrophic effects would persist over longer post-injection periods. Animals examined 6 months after virus injection showed substantial neurotrophic effects, with 14% higher SG density and greatly improved radial nerve fiber survival. Our results suggest that AAV-neurotrophin gene therapy can elicit expression of physiological concentrations of neurotrophins in the cochlea, supporting improved SG neuronal and radial nerve fiber survival while avoiding undesirable side effects. These studies also demonstrate the potential for application of cochlear gene therapy in a large mammalian cochlea comparable to the human cochlea and in an animal model of congenital/early acquired deafness.

Quantitative Assessment of Anti-Gravity Reflexes to Evaluate Vestibular Dysfunction in Rats

11-07-2019 – V Martins-Lopes, A Bellmunt, EA Greguske, AF Maroto, P Boadas-Vaello, J Llorens

Journal Article

Abstract The tail-lift reflex and the air-righting reflex are anti-gravity reflexes in rats that depend on vestibular function. To obtain objective and quantitative measures of performance, we recorded these reflexes with slow-motion video in two experiments. In the first experiment, vestibular dysfunction was elicited by acute exposure to 0 (control), 400, 600, or 1000 mg/kg of 3,3′-iminodipropionitrile (IDPN), which causes dose-dependent hair cell degeneration. In the second, rats were exposed to sub-chronic IDPN in the drinking water for 0 (control), 4, or 8 weeks; this causes reversible or irreversible loss of vestibular function depending on exposure time. In the tail-lift test, we obtained the minimum angle defined during the lift and descent maneuver by the nose, the back of the neck, and the base of the tail. In the air-righting test, we obtained the time to right the head. We also obtained vestibular dysfunction ratings (VDRs) using a previously validated behavioral test battery. Each measure (VDR, tail-lift angle, and air-righting time) demonstrated dose-dependent loss of vestibular function after acute IDPN and time-dependent loss of vestibular function after sub-chronic IDPN. All measures showed high correlations between each other, and maximal correlation coefficients were found between VDRs and tail-lift angles. In scanning electron microscopy evaluation of the vestibular sensory epithelia, the utricle and the saccule showed diverse pathological outcomes, suggesting that they have a different role in these reflexes. We conclude that these anti-gravity reflexes provide useful objective and quantitative measures of vestibular function in rats that are open to further development.
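The tail-lift metric reduces to a single angle per video frame: the angle at the back of the neck subtended by the nose and the base of the tail, with the minimum over the maneuver taken as the outcome. A minimal sketch of that per-frame computation from 2D landmark coordinates (the coordinates below are invented, and the convention that ventral curling yields smaller angles is an assumption consistent with the minimum-angle metric):

```python
import numpy as np

def tail_lift_angle(nose, neck, tail_base):
    """Angle (degrees) at the back of the neck formed by the nose and the base of
    the tail, from 2D landmark coordinates in one video frame."""
    v1 = np.asarray(nose, float) - np.asarray(neck, float)
    v2 = np.asarray(tail_base, float) - np.asarray(neck, float)
    cos_angle = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))

# Example frames: a relatively extended posture vs. a more ventrally curled one.
print(tail_lift_angle(nose=(0, 5), neck=(10, 10), tail_base=(20, 5)))   # ~127 deg
print(tail_lift_angle(nose=(0, 2), neck=(10, 10), tail_base=(20, 2)))   # ~103 deg
```

Taking the minimum of this angle across all frames of the lift-and-descent maneuver gives the reported outcome measure.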

Pre-operative Brain Imaging Using Functional Near-Infrared Spectroscopy Helps Predict Cochlear Implant Outcome in Deaf Adults

08-07-2019 – CA Anderson, IM Wiggins, PT Kitterick, DEH Hartley

Journal Article

Abstract Currently, it is not possible to accurately predict how well a deaf individual will be able to understand speech when hearing is (re)introduced via a cochlear implant. Differences in brain organisation following deafness are thought to contribute to variability in speech understanding with a cochlear implant and may offer unique insights that could help to more reliably predict outcomes. An emerging optical neuroimaging technique, functional near-infrared spectroscopy (fNIRS), was used to determine whether a pre-operative measure of brain activation could explain variability in cochlear implant (CI) outcomes and offer additional prognostic value above that provided by known clinical characteristics. Cross-modal activation to visual speech was measured in bilateral superior temporal cortex of pre- and post-lingually deaf adults before cochlear implantation. Behavioural measures of auditory speech understanding were obtained in the same individuals following 6 months of cochlear implant use. The results showed that stronger pre-operative cross-modal activation of auditory brain regions by visual speech was predictive of poorer auditory speech understanding after implantation. Further investigation suggested that this relationship may have been driven primarily by the inclusion of, and group differences between, pre- and post-lingually deaf individuals. Nonetheless, pre-operative cortical imaging provided additional prognostic value above that of influential clinical characteristics, including the age-at-onset and duration of auditory deprivation, suggesting that objectively assessing the physiological status of the brain using fNIRS imaging pre-operatively may support more accurate prediction of individual CI outcomes. Whilst activation of auditory brain regions by visual speech prior to implantation was related to the CI user’s clinical history of deafness, activation to visual speech did not relate to the future ability of these brain regions to respond to auditory speech stimulation with a CI. Greater pre-operative activation of left superior temporal cortex by visual speech was associated with enhanced speechreading abilities, suggesting that visual speech processing may help to maintain left temporal lobe specialisation for language processing during periods of profound deafness.

Human Click-Based Echolocation of Distance: Superfine Acuity and Dynamic Clicking Behaviour

08-07-2019 – L Thaler, HPJC De Vos, D Kish, M Antoniou, CJ Baker, MCJ Hornikx

Journal Article

Abstract Some people who are blind have trained themselves in echolocation using mouth clicks. Here, we provide the first report of psychophysical and clicking data during echolocation of distance from a group of 8 blind people with experience in mouth click-based echolocation (daily use for > 3 years). We found that experienced echolocators can detect changes in distance of 3 cm at a reference distance of 50 cm, and a change of 7 cm at a reference distance of 150 cm, regardless of object size (i.e. 28.5 cm vs. 80 cm diameter disk). Participants made mouth clicks that were more intense and they made more clicks for weaker reflectors (i.e. same object at farther distance, or smaller object at same distance), but number and intensity of clicks were adjusted independently from one another. The acuity we found is better than previous estimates based on samples of sighted participants without experience in echolocation or individual experienced participants (i.e. single blind echolocators tested) and highlights adaptation of the perceptual system in blind human echolocators. Further, the dynamic adaptive clicking behaviour we observed suggests that number and intensity of emissions serve separate functions to increase SNR. The data may serve as an inspiration for low-cost (i.e. non-array based) artificial ‘cognitive’ sonar and radar systems, i.e. signal design, adaptive pulse repetition rate and intensity. It will also be useful for instruction and guidance for new users of echolocation.
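Expressed relative to the reference distance (a framing not used in the abstract itself), the reported thresholds correspond to change detection at roughly 6 % of a 50-cm reference and roughly 5 % of a 150-cm reference; a trivial calculation:

```python
# Relative thresholds implied by the reported distance-discrimination results.
for delta_cm, ref_cm in [(3.0, 50.0), (7.0, 150.0)]:
    print(f"{delta_cm:.0f} cm change at {ref_cm:.0f} cm reference: "
          f"{100.0 * delta_cm / ref_cm:.1f}% of reference distance")
```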

Osteoclasts Modulate Bone Erosion in Cholesteatoma via RANKL Signaling

28-06-2019 – R Imai, T Sato, Y Iwamoto, Y Hanada, M Terao, Y Ohta, Y Osaki, T Imai, T Morihana, S Okazaki, K Oshima, D Okuzaki, I Katayama, H Inohara

Journal Article

Abstract Cholesteatoma starts as a retraction of the tympanic membrane and expands into the middle ear, eroding the surrounding bone and causing hearing loss and other serious complications such as brain abscess and meningitis. Currently, the only effective treatment is complete surgical removal, but the recurrence rate is relatively high. In rheumatoid arthritis (RA), osteoclasts are known to be responsible for bone erosion and undergo differentiation and activation by receptor activator of NF-κB ligand (RANKL), which is secreted by synovial fibroblasts, T cells, and B cells. On the other hand, the mechanism of bone erosion in cholesteatoma is still controversial. In this study, we found that a significantly larger number of osteoclasts were observed on the eroded bone adjacent to cholesteatomas than in unaffected areas, and that fibroblasts in the cholesteatoma perimatrix expressed RANKL. We also investigated upstream transcription factors of RANKL using RNA sequencing results obtained via Ingenuity Pathways Analysis, a tool that identifies relevant targets in molecular biology systems. The concentrations of four candidate factors, namely interleukin-1β, interleukin-6, tumor necrosis factor α, and prostaglandin E2, were increased in cholesteatomas compared with normal skin. Furthermore, interleukin-1β was expressed in infiltrating inflammatory cells in the cholesteatoma perimatrix. This is the first report demonstrating that a larger-than-normal number of osteoclasts are present in cholesteatoma, and that the disease involves upregulation of factors related to osteoclast activation. Our study elucidates the molecular basis underlying bone erosion in cholesteatoma.

Rapamycin Protects Spiral Ganglion Neurons from Gentamicin-Induced Degeneration In Vitro

24-06-2019 – S Guo, N Xu, P Chen, Y Liu, X Qi, S Liu, C Li, J Tang

Journal Article

Abstract Gentamicin, one of the most widely used aminoglycoside antibiotics, is known to have toxic effects on the inner ear. Taken up by cochlear hair cells and spiral ganglion neurons (SGNs), gentamicin induces the accumulation of reactive oxygen species (ROS) and initiates apoptosis or programmed cell death, resulting in a permanent and irreversible hearing loss. Since the survival of SGNs is specially required for cochlear implant, new procedures that prevent SGN cell loss are crucial to the success of cochlear implantation. ROS modulates the activity of the mammalian target of rapamycin (mTOR) signaling pathway, which mediates apoptosis or autophagy in cells of different organs. However, whether mTOR signaling plays an essential role in the inner ear and whether it is involved in the ototoxic side effects of gentamicin remain unclear. In the present study, we found that gentamicin induced apoptosis and cell loss of SGNs in vivo and significantly decreased the density of SGN and outgrowth of neurites in cultured SGN explants. The phosphorylation levels of ribosomal S6 kinase and elongation factor 4E binding protein 1, two critical kinases in the mTOR complex 1 (mTORC1) signaling pathway, were modulated by gentamicin application in the cochlea. Meanwhile, rapamycin, a specific inhibitor of mTORC1, was co-applied with gentamicin to verify the role of mTOR signaling. We observed that the density of SGN and outgrowth of neurites were significantly increased by rapamycin treatment. Our finding suggests that mTORC1 is hyperactivated in the gentamicin-induced degeneration of SGNs, and rapamycin promoted SGN survival and outgrowth of neurites.

Cortical Auditory Evoked Potentials in Response to Frequency Changes with Varied Magnitude, Rate, and Direction

05-06-2019 – BMD Vonck, MJW Lammers, M van der Waals, GA van Zanten, H Versnel

Journal Article

Abstract Recent literature on cortical auditory evoked potentials has focused on correlations with hearing performance with the aim to develop an objective clinical tool. However, cortical responses depend on the type of stimulus and choice of stimulus parameters. This study investigates cortical auditory evoked potentials to sound changes, so-called acoustic change complexes (ACC), and the effects of varying three stimulus parameters. In twelve normal-hearing subjects, ACC waveforms were evoked by presenting frequency changes with varying magnitude, rate, and direction. The N1 amplitude and latency were strongly affected by magnitude, which is known from the literature. Importantly, both of these N1 variables were also significantly affected by both rate and direction of the frequency change. Larger and earlier N1 peaks were evoked by increasing the magnitude and rate of the frequency change and with downward rather than upward direction of the frequency change. The P2 amplitude increased with magnitude and depended, to a lesser extent, on rate of the frequency change while direction had no effect on this peak. The N1–P2 interval was not affected by any of the stimulus parameters. In conclusion, the ACC is most strongly affected by magnitude and also substantially by rate and direction of the change. These stimulus dependencies should be considered in choosing stimuli for ACCs as an objective clinical measure of hearing performance.
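The three stimulus parameters (magnitude, rate, and direction of the frequency change) fully specify a frequency-change stimulus of this kind. The sketch below generates a phase-continuous tone with a linear frequency glide; "rate" is interpreted here as the duration of the transition, and all numeric values are placeholders rather than the study's actual parameters.

```python
import numpy as np

def frequency_change_stimulus(f0, magnitude, direction, transition_ms,
                              fs=44100, pre_s=1.0, post_s=1.0):
    """Tone whose frequency changes from f0 by `magnitude` (fractional change,
    e.g. 0.1 for 10%), upward or downward, over a linear glide lasting
    `transition_ms`; phase is accumulated so there is no discontinuity."""
    sign = 1.0 if direction == "up" else -1.0
    f1 = f0 * (1.0 + sign * magnitude)
    n_pre, n_post = int(pre_s * fs), int(post_s * fs)
    n_glide = int(transition_ms / 1000.0 * fs)
    inst_f = np.concatenate([
        np.full(n_pre, f0),              # steady pre-change segment
        np.linspace(f0, f1, n_glide),    # linear frequency glide
        np.full(n_post, f1),             # steady post-change segment
    ])
    phase = 2 * np.pi * np.cumsum(inst_f) / fs
    return np.sin(phase)

stim = frequency_change_stimulus(f0=1000.0, magnitude=0.1, direction="down",
                                 transition_ms=20.0)
print(stim.shape)
```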

The Effect of Stimulus Polarity on the Relation Between Pitch Ranking and ECAP Spread of Excitation in Cochlear Implant Users

01-06-2019 – ER Spitzer, S Choi, ML Hughes

Journal Article

Abstract Although modern cochlear implants (CIs) use cathodic-leading symmetrical biphasic pulses to stimulate the auditory nerve, a growing body of evidence suggests that anodic-leading pulses may be more effective. The positive polarity has been shown to produce larger electrically evoked compound action potential (ECAP) amplitudes, steeper slope of the amplitude growth function, and broader spread of excitation (SOE) patterns. Polarity has also been shown to influence pitch perception. It remains unclear how polarity affects the relation between physiological SOE and psychophysical pitch perception. Using a within-subject design, we examined the correlation between performance on a pitch-ranking task and spatial separation between SOE patterns for anodic- and cathodic-leading symmetric biphasic pulses for 14 CI ears. Overall, there was no effect of polarity on ECAP SOE patterns, pitch-ranking performance, or the relation between the two. This result is likely due to the use of symmetric biphasic pulses, which may have reduced the size of the effect previously observed for pseudomonophasic pulses. Further research is needed to determine if a pseudomonophasic stimulus might further improve the relation between physiology and pitch perception.

Effects of Musical Training and Hearing Loss on Fundamental Frequency Discrimination and Temporal Fine Structure Processing: Psychophysics and Modeling

01-06-2019 – F Bianchi, LH Carney, T Dau, S Santurette

Journal Article

Abstract Several studies have shown that musical training leads to improved fundamental frequency (F0) discrimination for young listeners with normal hearing (NH). It is unclear whether a comparable effect of musical training occurs for listeners whose sensory encoding of F0 is degraded. To address this question, the effect of musical training was investigated for three groups of listeners (young NH, older NH, and older listeners with hearing impairment, HI). In a first experiment, F0 discrimination was investigated using complex tones that differed in harmonic content and phase configuration (sine, positive, or negative Schroeder). Musical training was associated with significantly better F0 discrimination of complex tones containing low-numbered harmonics for all groups of listeners. Part of this effect was caused by the fact that musicians were more robust than non-musicians to harmonic roving. Despite the benefit relative to their non-musician counterparts, the older musicians, with or without HI, performed worse than the young musicians. In a second experiment, binaural sensitivity to temporal fine structure (TFS) cues was assessed for the same listeners by estimating the highest frequency at which an interaural phase difference was perceived. Performance was better for musicians for all groups of listeners, and the use of TFS cues was degraded for the two older groups of listeners. These findings suggest that musical training is associated with an enhancement of both TFS cue encoding and F0 discrimination in young and older listeners with or without HI, although the musicians’ benefit decreased with increasing hearing loss. Additionally, models of the auditory periphery and midbrain were used to examine the effect of HI on F0 encoding. The model predictions reflected the worsening in F0 discrimination with increasing HI and accounted for up to 80% of the variance in the data.