Journal of the Association for Research in Otolaryngology

Reciprocal Matched Filtering in the Inner Ear of the African Clawed Frog (Xenopus laevis)

06-01-2020 –

Abstract Anurans (frogs and toads) are the most vocal amphibians. In most species, only males produce advertisement calls for defending territories and attracting mates. Female vocalizations are the exception among frogs; in the African clawed frog (Xenopus laevis), however, both males and females produce distinct vocalizations. The matched filter hypothesis predicts a correspondence between the peripheral auditory tuning of receivers and the properties of species-specific acoustic signals, but few studies have assessed this relationship between the sexes. Measuring hearing sensitivity with binaural recordings of distortion product otoacoustic emissions, we found that the ears of males of this species are tuned to the dominant frequency of the female’s calls, whereas the ears of females are tuned close to the dominant frequency of the male’s calls. Our findings support the matched filter hypothesis, extended to include male-female calling. This unique example of reciprocal matched filtering ensures that males and females communicate effectively in high levels of background noise, each sex being most sensitive to the frequencies of the other sex’s calls.

Human Auditory Detection and Discrimination Measured with the Pupil Dilation Response

02-12-2019 – ADS Bala,EA Whitchurch,TT Takahashi

Journal Article

Abstract In the standard Hughson-Westlake hearing tests (Carhart and Jerger 1959), patient responses like a button press, raised hand, or verbal response are used to assess detection of brief test signals such as tones of varying pitch and level. Because of its reliance on voluntary responses, Hughson-Westlake audiometry is not suitable for patients who cannot follow instructions reliably, such as pre-lingual infants (Northern and Downs 2002). As an alternative approach, we explored the use of the pupillary dilation response (PDR), a short-latency component of the orienting response evoked by novel stimuli, as an indicator of sound detection. The pupils of 31 adult participants (median age 24 years) were monitored with an infrared video camera during a standard hearing test in which they indicated by button press whether or not they heard narrowband noises centered at 1, 2, 4, and 8 kHz. Tests were conducted in a quiet, carpeted office. Pupil size was summed over the first 1750 ms after stimulus delivery, excluding later dilations linked to expenditure of cognitive effort (Kahneman and Beatty 1966; Kahneman et al. 1969). The PDR yielded thresholds comparable to the standard test at all center frequencies tested, suggesting that the PDR is as sensitive as traditional methods of assessing detection. We also tested the effects of repeating a stimulus on the habituation of the PDR. Results showed that habituation can be minimized by operating at near-threshold stimulus levels. At sound levels well above threshold, the PDR habituated but could be recovered by changing the frequency or sound level, suggesting that the PDR can also be used to test stimulus discrimination. Given these features, the PDR may be useful as an audiometric tool or as a means of assessing auditory discrimination in those who cannot produce a reliable voluntary response.
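The pupil-size summation described in the abstract can be sketched in a few lines (the 1750 ms window is from the abstract; the sampling rate, baseline correction, and function name are assumptions for illustration):

```python
import numpy as np

FS = 250          # assumed eye-tracker sampling rate (Hz)
WINDOW_MS = 1750  # analysis window from the abstract

def pdr_score(pupil_trace, onset_idx, fs=FS, window_ms=WINDOW_MS):
    """Sum baseline-corrected pupil size over the first `window_ms`
    after stimulus onset, excluding later cognitive-effort dilations."""
    n = int(fs * window_ms / 1000)
    baseline = pupil_trace[max(0, onset_idx - n):onset_idx].mean()
    window = pupil_trace[onset_idx:onset_idx + n]
    return float(np.sum(window - baseline))
```

A trial with a dilation after stimulus onset yields a positive score; a flat trace scores near zero, so scores can be compared against a no-stimulus distribution to decide detection.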

Sound Localization in Preweanling Mice Was More Severely Affected by Deleting the Kcna1 Gene Compared to Deleting Kcna2, and a Curious Inverted-U Course of Development That Appeared to Exceed Adult Performance Was Observed in All Groups

01-12-2019 – JR Ison,PD Allen,BL Tempel,HM Brew

Journal Article

Abstract The submillisecond acuity for detecting rapid spatial and temporal fluctuations in acoustic stimuli observed in humans and laboratory animals depends in part on select groups of auditory neurons that preserve synchrony from the ears to the binaural nuclei in the brainstem. These fibers have specialized synapses and axons that use a low-threshold voltage-activated outward current, IKL, conducted through Kv1 potassium ion channels. These are in turn coupled with HCN channels that express a mixed-cation inward current, IH, to support precise synchronized firing. The behavioral evidence is that adult mice lacking the respective Kcna1 or HCN1 genes have weak startle reflexes, slow responding to noise offsets, and poor sound localization. The present behavioral experiments were motivated by an in vitro study reporting increased IKL in an auditory nucleus in Kcna2−/− mice lacking the Kv1.2 subunit, suggesting that Kcna2−/− mice might perform better than Kcna2+/+ mice. Because Kcna2−/− mice have only a 17–18-day lifespan, we compared both preweanling Kcna2−/− vs. Kcna2+/+ mice and Kcna1−/− vs. Kcna1+/+ mice at P12-P17/18; then, the remaining mice were tested at P23/P25. Both null mutant strains had a stunted physique, but the Kcna1−/− mice had severe behavioral deficits while those in Kcna2−/− mice were relatively few and minor. The in vitro increase of IKL could have resulted from Kv1.1 subunits substituting for Kv1.2 subunits and the loss of the inhibitory “managerial” effect of Kv1.2 on Kv1.1. However, any increased neuronal synchronicity that accompanies increased IKL may not have been enough to affect behavior. All mice performed unusually well on the early spatial tests, but their performance then fell towards adult levels. This unexpected effect may reflect a shift from summated independent monaural pathways to integrated binaural processing, as has been suggested for similar observations in human infants.

A Physiologically Inspired Model for Solving the Cocktail Party Problem

01-12-2019 – KF Chou,J Dong,HS Colburn,K Sen

Journal Article

Abstract At a cocktail party, we can broadly monitor the entire acoustic scene to detect important cues (e.g., our names being called, or the fire alarm going off), or selectively listen to a target sound source (e.g., a conversation partner). It has recently been observed that individual neurons in the avian field L (an analog of the mammalian auditory cortex) can display broad spatial tuning to single targets and selective tuning to a target embedded in spatially distributed sound mixtures. Here, we describe a model inspired by these experimental observations and apply it to process mixtures of human speech sentences. This processing is realized in the neural spiking domain. It converts binaural acoustic inputs into cortical spike trains using a multi-stage model composed of a cochlear filter-bank, a midbrain spatial-localization network, and a cortical network. The output spike trains of the cortical network are then converted back into an acoustic waveform, using a stimulus reconstruction technique. The intelligibility of the reconstructed output is quantified using an objective measure of speech intelligibility. We apply the algorithm to single and multi-talker speech to demonstrate that the physiologically inspired algorithm is able to achieve intelligible reconstruction of an “attended” target sentence embedded in two other non-attended masker sentences. The algorithm is also robust to masker level and displays performance trends comparable to humans. The ideas from this work may help improve the performance of hearing assistive devices (e.g., hearing aids and cochlear implants), speech-recognition technology, and computational algorithms for processing natural scenes cluttered with spatially distributed acoustic objects.

Effect of Middle-Ear Pathology on High-Frequency Ear Canal Reflectance Measurements in the Frequency and Time Domains

01-12-2019 – GR Merchant,JH Siegel,ST Neely,JJ Rosowski,HH Nakajima

Journal Article

Abstract The effects of middle-ear pathology on wideband acoustic immittance and reflectance at frequencies above 6–8 kHz have not been documented, nor has the effect of such pathologies on the time-domain reflectance. We describe an approach that utilizes sound frequencies as high as 20 kHz and quantifies reflectance in both the frequency and time domains. Experiments were performed with fresh normal human temporal bones before and after simulating various middle-ear pathologies, including malleus fixation, stapes fixation, and disarticulation. In addition to experimental data, computational modeling was used to obtain fitted parameter values of middle-ear elements that vary systematically due to the simulated pathologies and thus may have diagnostic implications. Our results demonstrate that the time-domain reflectance, which requires acoustic measurements at high frequencies, varies with middle-ear condition. Furthermore, the extended-bandwidth frequency-domain reflectance data were used to estimate parameters in a simple model of the ear canal and middle ear that separates three major conductive pathologies from each other and from the normal state.
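Deriving a time-domain reflectance from a one-sided frequency-domain reflectance is an inverse-Fourier-transform step. A hedged sketch with a toy single-echo reflectance (the delay, gain, and record length are invented, not measured values):

```python
import numpy as np

fs = 40_000                          # sampling rate giving a 20 kHz bandwidth
n = 1024                             # time-domain record length
freqs = np.fft.rfftfreq(n, d=1/fs)   # one-sided frequency axis, 0..20 kHz

delay_s, gain = 1e-3, 0.5            # invented single-echo reflectance
R = gain * np.exp(-2j * np.pi * freqs * delay_s)

r_t = np.fft.irfft(R, n=n)           # time-domain reflectance
t = np.arange(n) / fs
peak_time = t[np.argmax(np.abs(r_t))]  # recovers the 1 ms echo delay
```

The peak of the time-domain reflectance falls at the round-trip delay of the reflection, which is why high-frequency (broadband) measurements are needed for sharp time resolution.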

Quantitative Assessment of Anti-Gravity Reflexes to Evaluate Vestibular Dysfunction in Rats

01-12-2019 – V Martins-Lopes,A Bellmunt,EA Greguske,AF Maroto,P Boadas-Vaello,J Llorens

Journal Article

Abstract The tail-lift reflex and the air-righting reflex are anti-gravity reflexes in rats that depend on vestibular function. To obtain objective and quantitative measures of performance, we recorded these reflexes with slow-motion video in two experiments. In the first experiment, vestibular dysfunction was elicited by acute exposure to 0 (control), 400, 600, or 1000 mg/kg of 3,3′-iminodipropionitrile (IDPN), which causes dose-dependent hair cell degeneration. In the second, rats were exposed to sub-chronic IDPN in the drinking water for 0 (control), 4, or 8 weeks; this causes reversible or irreversible loss of vestibular function depending on exposure time. In the tail-lift test, we obtained the minimum angle defined during the lift and descent maneuver by the nose, the back of the neck, and the base of the tail. In the air-righting test, we obtained the time to right the head. We also obtained vestibular dysfunction ratings (VDRs) using a previously validated behavioral test battery. Each measure (VDR, tail-lift angle, and air-righting time) demonstrated dose-dependent loss of vestibular function after acute IDPN and time-dependent loss of vestibular function after sub-chronic IDPN. All measures showed high correlations with each other, and maximal correlation coefficients were found between VDRs and tail-lift angles. In scanning electron microscopy evaluation of the vestibular sensory epithelia, the utricle and the saccule showed diverse pathological outcomes, suggesting that they play different roles in these reflexes. We conclude that these anti-gravity reflexes provide useful objective and quantitative measures of vestibular function in rats that are open to further development.
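The minimum-angle measure lends itself to a simple per-frame computation from tracked landmarks. A minimal sketch (the landmark coordinates and helper name are invented for illustration):

```python
import numpy as np

# Angle at the back of the neck subtended by the nose and the base of
# the tail, computed per video frame from tracked 2-D landmarks.
def landmark_angle(nose, neck, tail):
    """Angle (degrees) at `neck` between the vectors to `nose` and `tail`."""
    v1 = np.asarray(nose, float) - np.asarray(neck, float)
    v2 = np.asarray(tail, float) - np.asarray(neck, float)
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

# The reported statistic is the minimum angle across the maneuver.
frames = [((0, 1), (0, 0), (1, 0)),   # 90 degrees
          ((0, 1), (0, 0), (1, 1))]   # 45 degrees: body more curled
min_angle = min(landmark_angle(*f) for f in frames)  # -> 45.0
```

A healthy rat curls ventrally when lifted by the tail (small minimum angle); vestibular dysfunction leaves the body extended (angle closer to 180 degrees).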

Pitch Matching Adapts Even for Bilateral Cochlear Implant Users with Relatively Small Initial Pitch Differences Across the Ears

01-12-2019 – JM Aronoff,HE Staisloff,A Kirchner,DH Lee,J Stelmach

Journal Article

Abstract There is often a mismatch for bilateral cochlear implant (CI) users between the electrodes in the two ears that receive the same frequency allocation and the electrodes that, when stimulated, yield the same pitch. Studies with CI users who have extreme mismatches between the two ears show that adaptation occurs in terms of pitch matching, reducing the difference between which electrodes receive the same frequency allocation and which ones produce the same pitch. The considerable adaptation that occurs for these extreme cases suggests that adaptation should be sufficient to overcome the relatively minor mismatches seen with typical bilateral CI users. However, even those with many years of bilateral CI use continue to demonstrate a mismatch. This may indicate that adaptation only occurs when there are large mismatches. Alternatively, it may indicate that adaptation occurs regardless of the magnitude of the mismatch, but that adaptation is proportional to the magnitude of the mismatch, and thus never fully counters the original mismatch. To investigate this, six bilateral CI users with initial pitch-matching mismatches of less than 3 mm completed a pitch-matching task near the time of activation, 6 months after activation, and 1 year after activation. Despite relatively small initial mismatches, the results indicated that adaptation still occurred.
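For context on what a mismatch of "less than 3 mm" means in frequency terms, the standard Greenwood (1990) place-frequency map for the human cochlea can be applied; the constants below are the commonly used human values, and the electrode position is hypothetical:

```python
import math

# Greenwood place-frequency map for the human cochlea:
# f(x) = A * (10^(a*x/L) - k), with x the distance from the apex in mm.
def greenwood_hz(x_from_apex_mm, length_mm=35.0):
    a, k, A = 2.1, 0.88, 165.4  # standard human constants
    return A * (10 ** (a * x_from_apex_mm / length_mm) - k)

place = 20.0  # mm from apex, hypothetical electrode position
f1 = greenwood_hz(place)        # ~2.5 kHz
f2 = greenwood_hz(place + 3.0)  # a 3 mm more basal place, ~3.8 kHz
```

Even a 3 mm place difference thus corresponds to a substantial frequency offset between the two ears, which is why residual mismatches after adaptation are perceptually relevant.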

Spectral and Temporal Envelope Cues for Human and Automatic Speech Recognition in Noise

22-11-2019 – G Hu,SC Determan,Y Dong,AT Beeve,JE Collins,Y Gai

Journal Article

Abstract Acoustic features of speech include various spectral and temporal cues. It is known that temporal envelope plays a critical role for speech recognition by human listeners, while automated speech recognition (ASR) heavily relies on spectral analysis. This study compared sentence-recognition scores of humans and an ASR software, Dragon, when spectral and temporal-envelope cues were manipulated in background noise. Temporal fine structure of meaningful sentences was reduced by noise or tone vocoders. Three types of background noise were introduced: a white noise, a time-reversed multi-talker noise, and a fake-formant noise. Spectral information was manipulated by changing the number of frequency channels. With a 20-dB signal-to-noise ratio (SNR) and four vocoding channels, white noise had a stronger disruptive effect than the fake-formant noise. The same observation with 22 channels was made when SNR was lowered to 0 dB. In contrast, ASR was unable to function with four vocoding channels even with a 20-dB SNR. Its performance was least affected by white noise and most affected by the fake-formant noise. Increasing the number of channels, which improved the spectral resolution, generated non-monotonic behaviors for the ASR with white noise but not with colored noise. The ASR also showed highly improved performance with tone vocoders. It is possible that fake-formant noise affected the software’s performance by disrupting spectral cues, whereas white noise affected performance by compromising speech segmentation. Overall, these results suggest that human listeners and ASR utilize different listening strategies in noise.
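The vocoder manipulation can be sketched as a minimal noise vocoder: the signal is split into frequency channels, only each channel's temporal envelope is kept, and the envelopes modulate band-limited noise carriers. Band edges, filter orders, and the envelope cutoff below are illustrative choices, not the study's exact parameters:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def noise_vocode(x, fs, n_channels=4, f_lo=100.0, f_hi=7000.0):
    """Replace temporal fine structure with band-limited noise carriers,
    keeping only each channel's temporal envelope."""
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)  # log-spaced band edges
    env_lp = butter(4, 50.0, btype="low", fs=fs, output="sos")
    rng = np.random.default_rng(0)
    out = np.zeros_like(x)
    for lo, hi in zip(edges[:-1], edges[1:]):
        band = butter(4, [lo, hi], btype="band", fs=fs, output="sos")
        env = sosfiltfilt(env_lp, np.abs(sosfiltfilt(band, x)))   # envelope
        carrier = sosfiltfilt(band, rng.standard_normal(len(x)))  # noise carrier
        out += np.clip(env, 0.0, None) * carrier
    return out

fs = 16000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 500 * t) * (1 + np.sin(2 * np.pi * 4 * t))  # toy "speech"
y = noise_vocode(x, fs)
```

A tone vocoder differs only in the carrier: a sinusoid at each channel's center frequency replaces the filtered noise.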

A Non-linear Viscoelastic Model of the Incudostapedial Joint

06-11-2019 – M Soleimani,WRJ Funnell,WF Decraemer

Journal Article

Abstract The ossicular joints of the middle ear can significantly affect middle-ear function, particularly under conditions such as high-intensity sound pressures or high quasi-static pressures. Experimental investigations of the mechanical behaviour of the human incudostapedial joint have shown strong non-linearity and asymmetry in tension and compression tests, but some previous finite-element models of the joint have had difficulty replicating such behaviour. In this paper, we present a finite-element model of the joint that can match the asymmetry and non-linearity well without using different model structures or parameters in tension and compression. The model includes some of the detailed structures of the joint seen in histological sections. The material properties are found from the literature when available, but some parameters are calculated by fitting the model to experimental data from tension, compression and relaxation tests. The model can predict the hysteresis loops of loading and unloading curves. A sensitivity analysis for various parameters shows that the geometrical parameters have substantial effects on the joint mechanical behaviour. While the joint capsule affects the tension curve more, the cartilage layers affect the compression curve more.

Rapamycin Protects Spiral Ganglion Neurons from Gentamicin-Induced Degeneration In Vitro

01-10-2019 – S Guo,N Xu,P Chen,Y Liu,X Qi,S Liu,C Li,J Tang

Journal Article

Abstract Gentamicin, one of the most widely used aminoglycoside antibiotics, is known to have toxic effects on the inner ear. Taken up by cochlear hair cells and spiral ganglion neurons (SGNs), gentamicin induces the accumulation of reactive oxygen species (ROS) and initiates apoptosis or programmed cell death, resulting in a permanent and irreversible hearing loss. Since the survival of SGNs is especially important for cochlear implants, new procedures that prevent SGN cell loss are crucial to the success of cochlear implantation. ROS modulates the activity of the mammalian target of rapamycin (mTOR) signaling pathway, which mediates apoptosis or autophagy in cells of different organs. However, whether mTOR signaling plays an essential role in the inner ear and whether it is involved in the ototoxic side effects of gentamicin remain unclear. In the present study, we found that gentamicin induced apoptosis and cell loss of SGNs in vivo and significantly decreased the density of SGNs and the outgrowth of neurites in cultured SGN explants. The phosphorylation levels of ribosomal S6 kinase and elongation factor 4E binding protein 1, two critical kinases in the mTOR complex 1 (mTORC1) signaling pathway, were modulated by gentamicin application in the cochlea. Meanwhile, rapamycin, a specific inhibitor of mTORC1, was co-applied with gentamicin to verify the role of mTOR signaling. We observed that the density of SGNs and the outgrowth of neurites were significantly increased by rapamycin treatment. Our findings suggest that mTORC1 is hyperactivated in the gentamicin-induced degeneration of SGNs and that rapamycin promotes SGN survival and neurite outgrowth.

Pre-operative Brain Imaging Using Functional Near-Infrared Spectroscopy Helps Predict Cochlear Implant Outcome in Deaf Adults

01-10-2019 – CA Anderson,IM Wiggins,PT Kitterick,DEH Hartley

Journal Article

Abstract Currently, it is not possible to accurately predict how well a deaf individual will be able to understand speech when hearing is (re)introduced via a cochlear implant. Differences in brain organisation following deafness are thought to contribute to variability in speech understanding with a cochlear implant and may offer unique insights that could help to more reliably predict outcomes. An emerging optical neuroimaging technique, functional near-infrared spectroscopy (fNIRS), was used to determine whether a pre-operative measure of brain activation could explain variability in cochlear implant (CI) outcomes and offer additional prognostic value above that provided by known clinical characteristics. Cross-modal activation to visual speech was measured in bilateral superior temporal cortex of pre- and post-lingually deaf adults before cochlear implantation. Behavioural measures of auditory speech understanding were obtained in the same individuals following 6 months of cochlear implant use. The results showed that stronger pre-operative cross-modal activation of auditory brain regions by visual speech was predictive of poorer auditory speech understanding after implantation. Further investigation suggested that this relationship may have been driven primarily by the inclusion of, and group differences between, pre- and post-lingually deaf individuals. Nonetheless, pre-operative cortical imaging provided additional prognostic value above that of influential clinical characteristics, including the age-at-onset and duration of auditory deprivation, suggesting that objectively assessing the physiological status of the brain using fNIRS imaging pre-operatively may support more accurate prediction of individual CI outcomes. Whilst activation of auditory brain regions by visual speech prior to implantation was related to the CI user’s clinical history of deafness, activation to visual speech did not relate to the future ability of these brain regions to respond to auditory speech stimulation with a CI. Greater pre-operative activation of left superior temporal cortex by visual speech was associated with enhanced speechreading abilities, suggesting that visual speech processing may help to maintain left temporal lobe specialisation for language processing during periods of profound deafness.

Osteoclasts Modulate Bone Erosion in Cholesteatoma via RANKL Signaling

01-10-2019 – R Imai,T Sato,Y Iwamoto,Y Hanada,M Terao,Y Ohta,Y Osaki,T Imai,T Morihana,S Okazaki,K Oshima,D Okuzaki,I Katayama,H Inohara

Journal Article

Abstract Cholesteatoma starts as a retraction of the tympanic membrane and expands into the middle ear, eroding the surrounding bone and causing hearing loss and other serious complications such as brain abscess and meningitis. Currently, the only effective treatment is complete surgical removal, but the recurrence rate is relatively high. In rheumatoid arthritis (RA), osteoclasts are known to be responsible for bone erosion and undergo differentiation and activation by receptor activator of NF-κB ligand (RANKL), which is secreted by synovial fibroblasts, T cells, and B cells. On the other hand, the mechanism of bone erosion in cholesteatoma is still controversial. In this study, we found that a significantly larger number of osteoclasts were observed on the eroded bone adjacent to cholesteatomas than in unaffected areas, and that fibroblasts in the cholesteatoma perimatrix expressed RANKL. We also investigated upstream transcription factors of RANKL using RNA sequencing results obtained via Ingenuity Pathways Analysis, a tool that identifies relevant targets in molecular biology systems. The concentrations of four candidate factors, namely interleukin-1β, interleukin-6, tumor necrosis factor α, and prostaglandin E2, were increased in cholesteatomas compared with normal skin. Furthermore, interleukin-1β was expressed in infiltrating inflammatory cells in the cholesteatoma perimatrix. This is the first report demonstrating that a larger-than-normal number of osteoclasts are present in cholesteatoma, and that the disease involves upregulation of factors related to osteoclast activation. Our study elucidates the molecular basis underlying bone erosion in cholesteatoma.

Morphological Immaturity of the Neonatal Organ of Corti and Associated Structures in Humans

01-10-2019 – SWF Meenderink,CA Shera,MD Valero,MC Liberman,C Abdala

Journal Article

Abstract Although anatomical development of the cochlear duct is thought to be complete by term birth, human newborns continue to show postnatal immaturities in functional measures such as otoacoustic emissions (OAEs). Some of these OAE immaturities are no doubt influenced by incomplete maturation of the external and middle ears in infants; however, the observed prolongation of distortion-product OAE phase-gradient delays in newborns cannot readily be explained by conductive factors. This functional immaturity suggests that the human cochlea at birth may lack fully adult-like traveling-wave motion. In this study, we analyzed temporal-bone sections at the light microscopic level in newborns and adults to quantify dimensions and geometry of cochlear structures thought to influence the mechanical response of the cochlea. Contrary to common belief, results show multiple morphological immaturities along the length of the newborn spiral, suggesting that important refinements in the size and shape of the sensory epithelium and associated structures continue after birth. Specifically, immaturities of the newborn basilar membrane and organ of Corti are consistent with a more compliant and less massive cochlear partition, which could produce longer DPOAE delays and a shifted frequency-place map in the neonatal ear.

Human Click-Based Echolocation of Distance: Superfine Acuity and Dynamic Clicking Behaviour

01-10-2019 – L Thaler,HPJC De Vos,D Kish,M Antoniou,CJ Baker,MCJ Hornikx

Journal Article

Abstract Some people who are blind have trained themselves in echolocation using mouth clicks. Here, we provide the first report of psychophysical and clicking data during echolocation of distance from a group of 8 blind people with experience in mouth click-based echolocation (daily use for > 3 years). We found that experienced echolocators can detect changes in distance of 3 cm at a reference distance of 50 cm, and a change of 7 cm at a reference distance of 150 cm, regardless of object size (i.e. 28.5 cm vs. 80 cm diameter disk). Participants made mouth clicks that were more intense and they made more clicks for weaker reflectors (i.e. same object at farther distance, or smaller object at same distance), but number and intensity of clicks were adjusted independently from one another. The acuity we found is better than previous estimates based on samples of sighted participants without experience in echolocation or individual experienced participants (i.e. single blind echolocators tested) and highlights adaptation of the perceptual system in blind human echolocators. Further, the dynamic adaptive clicking behaviour we observed suggests that number and intensity of emissions serve separate functions to increase SNR. The data may serve as an inspiration for low-cost (i.e. non-array based) artificial ‘cognitive’ sonar and radar systems, i.e. signal design, adaptive pulse repetition rate and intensity. It will also be useful for instruction and guidance for new users of echolocation.

Cortical Auditory Evoked Potentials in Response to Frequency Changes with Varied Magnitude, Rate, and Direction

01-10-2019 – BMD Vonck,MJW Lammers,M van der Waals,GA van Zanten,H Versnel

Journal Article

Abstract Recent literature on cortical auditory evoked potentials has focused on correlations with hearing performance with the aim of developing an objective clinical tool. However, cortical responses depend on the type of stimulus and choice of stimulus parameters. This study investigates cortical auditory evoked potentials to sound changes, so-called acoustic change complexes (ACC), and the effects of varying three stimulus parameters. In twelve normal-hearing subjects, ACC waveforms were evoked by presenting frequency changes with varying magnitude, rate, and direction. The N1 amplitude and latency were strongly affected by magnitude, which is known from the literature. Importantly, both of these N1 variables were also significantly affected by both rate and direction of the frequency change. Larger and earlier N1 peaks were evoked by increasing the magnitude and rate of the frequency change and with downward rather than upward direction of the frequency change. The P2 amplitude increased with magnitude and depended, to a lesser extent, on rate of the frequency change, while direction had no effect on this peak. The N1–P2 interval was not affected by any of the stimulus parameters. In conclusion, the ACC is most strongly affected by magnitude and also substantially by rate and direction of the change. These stimulus dependencies should be considered in choosing stimuli for ACCs as an objective clinical measure of hearing performance.
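A frequency-change stimulus of the kind described, with adjustable magnitude, rate (transition time), and direction, can be sketched by integrating instantaneous frequency into phase so the waveform stays continuous through the change. All parameter values below are illustrative, not the study's:

```python
import numpy as np

def acc_stimulus(fs=44100, base_hz=1000.0, magnitude_pct=10.0,
                 transition_ms=10.0, direction=+1, dur_s=1.0):
    """Tone whose frequency changes mid-stimulus; magnitude is the size
    of the change, transition_ms its rate, direction its sign."""
    t = np.arange(int(fs * dur_s)) / fs
    target_hz = base_hz * (1 + direction * magnitude_pct / 100.0)
    ramp = np.clip((t - dur_s / 2) / (transition_ms / 1000.0), 0.0, 1.0)
    inst_freq = base_hz + (target_hz - base_hz) * ramp
    phase = 2 * np.pi * np.cumsum(inst_freq) / fs  # integrate frequency
    return np.sin(phase)

up = acc_stimulus(direction=+1)    # upward change: 1000 -> 1100 Hz
down = acc_stimulus(direction=-1)  # downward change: 1000 -> 900 Hz
```

Phase integration avoids the waveform discontinuity (and resulting broadband click) that concatenating two tones would introduce, so the evoked response reflects the frequency change itself.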

Investigating the Effect of Cochlear Synaptopathy on Envelope Following Responses Using a Model of the Auditory Nerve

01-08-2019 – G Encina-Llamas,JM Harte,T Dau,B Shinn-Cunningham,B Epp

Journal Article

Abstract The healthy auditory system enables communication in challenging situations with high levels of background noise. Yet, despite normal sensitivity to pure tones, many listeners complain about having difficulties in such situations. Recent animal studies demonstrated that noise overexposure that produces temporary threshold shifts can cause the loss of auditory nerve (AN) fiber synapses (i.e., cochlear synaptopathy, CS), which appears to predominantly affect medium- and low-spontaneous rate (SR) fibers. In the present study, envelope following response (EFR) magnitude-level functions were recorded in normal hearing (NH) threshold and mildly hearing-impaired (HI) listeners with thresholds elevated above 2 kHz. EFRs were elicited by sinusoidally amplitude modulated (SAM) tones presented in quiet with a carrier frequency of 2 kHz, modulated at 93 Hz, and modulation depths of 0.85 (deep) and 0.25 (shallow). While EFR magnitude-level functions for deeply modulated tones were similar for all listeners, EFR magnitudes for shallowly modulated tones were reduced at medium stimulation levels in some NH threshold listeners and saturated in all HI listeners for the whole level range. A phenomenological model of the AN was used to investigate the extent to which hair-cell dysfunction and/or CS could explain the trends observed in the EFR data. Hair-cell dysfunction alone, including postulated elevated hearing thresholds at extended high frequencies (EHF) beyond 8 kHz, could not account for the recorded EFR data. Postulated CS led to simulations generally consistent with the recorded data, but a loss of all types of AN fibers was required within the model framework. The effects of off-frequency contributions (i.e., away from the characteristic place of the stimulus) and the differential loss of different AN fiber types on EFR magnitude-level functions were analyzed. When using SAM tones in quiet as the stimulus, model simulations suggested that (1) EFRs are dominated by the activity of high-SR fibers at all stimulus intensities, and (2) EFRs at medium-to-high stimulus levels are dominated by off-frequency contributions.
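The SAM-tone stimulus is straightforward to generate. A sketch using the carrier frequency, modulation rate, and two modulation depths given in the abstract (duration and sampling rate are assumptions):

```python
import numpy as np

# Sinusoidally amplitude-modulated (SAM) tone:
# s(t) = (1 + m*sin(2*pi*fm*t)) * sin(2*pi*fc*t)
def sam_tone(fc=2000.0, fm=93.0, depth=0.85, dur_s=0.5, fs=48000):
    t = np.arange(int(fs * dur_s)) / fs
    return (1 + depth * np.sin(2 * np.pi * fm * t)) * np.sin(2 * np.pi * fc * t)

deep = sam_tone(depth=0.85)     # deep modulation, envelope swings 0.15..1.85
shallow = sam_tone(depth=0.25)  # shallow modulation, envelope swings 0.75..1.25
```

The EFR is the neural response phase-locked to the 93 Hz envelope, so the modulation depth directly controls how much envelope information the auditory nerve has to encode.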

A Site-Selection Strategy Based on Polarity Sensitivity for Cochlear Implants: Effects on Spectro-Temporal Resolution and Speech Perception

01-08-2019 – T Goehring,A Archer-Boyd,JM Deeks,JG Arenberg,RP Carlyon

Journal Article

Abstract Thresholds of asymmetric pulses presented to cochlear implant (CI) listeners depend on polarity in a way that differs across subjects and electrodes. It has been suggested that lower thresholds for cathodic-dominant compared to anodic-dominant pulses reflect good local neural health. We evaluated the hypothesis that this polarity effect (PE) can be used in a site-selection strategy to improve speech perception and spectro-temporal resolution. Detection thresholds were measured in eight users of Advanced Bionics CIs for 80-pps, triphasic, monopolar pulse trains where the central high-amplitude phase was either anodic or cathodic. Two experimental MAPs were then generated for each subject by deactivating the five electrodes with either the highest or the lowest PE magnitudes (cathodic minus anodic threshold). Performance with the two experimental MAPs was evaluated using two spectro-temporal tests (Spectro-Temporal Ripple for Investigating Processor Effectiveness (STRIPES; Archer-Boyd et al. in J Acoust Soc Am 144:2983–2997, 2018) and Spectral-Temporally Modulated Ripple Test (SMRT; Aronoff and Landsberger in J Acoust Soc Am 134:EL217–EL222, 2013)) and with speech recognition in quiet and in noise. Performance was also measured with an experimental MAP that used all electrodes, similar to the subjects’ clinical MAP. The PE varied strongly across subjects and electrodes, with substantial magnitudes relative to the electrical dynamic range. There were no significant differences in performance between the three MAPs at group level, but there were significant effects at subject level—not all of which were in the hypothesized direction—consistent with previous reports of a large variability in CI users’ performance and in the potential benefit of site-selection strategies. The STRIPES but not the SMRT test successfully predicted which strategy produced the best speech-in-noise performance on a subject-by-subject basis. The average PE across electrodes correlated significantly with subject age, duration of deafness, and speech perception scores, consistent with a relationship between PE and neural health. These findings motivate further investigations into site-specific measures of neural health and their application to CI processing strategies.
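The site-selection step can be sketched as follows; the threshold values and electrode count are invented for illustration, while the PE definition (cathodic minus anodic threshold) and the five-electrode deactivation rule come from the abstract:

```python
import numpy as np

rng = np.random.default_rng(1)
n_electrodes = 16
anodic = rng.normal(40.0, 3.0, n_electrodes)            # invented thresholds (dB)
cathodic = anodic + rng.normal(0.0, 2.0, n_electrodes)  # invented thresholds (dB)

pe = cathodic - anodic        # polarity effect per electrode
order = np.argsort(pe)        # electrode indices sorted by PE

# Two experimental MAPs: deactivate the five electrodes with the lowest
# PE (hypothesized poor neural health) or with the highest PE.
keep_if_dropping_lowest = np.delete(np.arange(n_electrodes), order[:5])
keep_if_dropping_highest = np.delete(np.arange(n_electrodes), order[-5:])
```

Each experimental MAP then reallocates the full frequency range across the remaining eleven active electrodes.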

The fMRI Data of Thompson et al. (2006) Do Not Constrain How the Human Midbrain Represents Interaural Time Delay

01-08-2019 – RM Stern,HS Colburn,LR Bernstein,C Trahiotis

Journal Article

Abstract This commentary provides an alternate interpretation of the fMRI data that were presented in a communication to the journal Nature Neuroscience (Thompson et al., Nat. Neurosci. 9: 1096–1098, 2006). The authors argued that their observations demonstrated that traditional models of binaural hearing which incorporate “internal delays,” such as the coincidence-counting mechanism proposed by Jeffress and quantified by Colburn, are invalid, and that a new model for human interaural time delay processing must be developed. We argue that the fMRI data presented do not strongly favor either the refutation or the retention of the traditional models, although they may be useful in constraining the physiological sites of various processing stages. The conclusions of Thompson et al. are based on the locations of maximal activity in the midbrain in response to selected binaural signals. These locations are inconsistent with well-known perceptual attributes of the stimuli under consideration, as is noted by the authors, which suggests that further processing is involved in forming the percept of subjective lateral position.

Exploring the Role of Medial Olivocochlear Efferents on the Detection of Amplitude Modulation for Tones Presented in Noise

01-08-2019 – M Wojtczak,AM Klang,NT Torunsky

Journal Article

Abstract The medial olivocochlear reflex has been hypothesized to improve the detection and discrimination of dynamic signals in noisy backgrounds. This hypothesis was tested here by comparing behavioral outcomes with otoacoustic emissions. The effects of a precursor on amplitude-modulation (AM) detection were measured for 1- and 6-kHz carriers at levels of 40, 60, and 80 dB SPL in a two-octave-wide noise masker with a level designed to produce poor, but above-chance, performance. Three types of precursor were used: a two-octave noise band, an inharmonic complex tone, and a pure tone. Precursors had the same overall level as the simultaneous noise masker that immediately followed the precursor. The noise precursor produced a large improvement in AM detection for both carrier frequencies and at all three levels. The complex tone produced a similarly large improvement in AM detection at the highest level but had a smaller effect for the two lower carrier levels. The tonal precursor did not significantly affect AM detection in noise. Comparisons of behavioral thresholds and medial olivocochlear efferent effects on stimulus frequency otoacoustic emissions measured with similar stimuli did not support the hypothesis that efferent-based reduction of cochlear responses contributes to the precursor effects on AM detection.

Evaluating Psychophysical Polarity Sensitivity as an Indirect Estimate of Neural Status in Cochlear Implant Listeners

01-08-2019 – KN Jahn,JG Arenberg

Journal Article

Abstract The physiological integrity of spiral ganglion neurons (SGNs) is presumed to influence cochlear implant (CI) outcomes, but it is difficult to measure neural health in CI listeners. Modeling data suggest that, when peripheral processes have degenerated, anodic stimulation may be a more effective neural stimulus than cathodic stimulation. The primary goal of the present study was to evaluate the emerging theory that polarity sensitivity reflects neural health in CI listeners. An ideal in vivo estimate of neural integrity should vary independently of other factors known to influence the CI electrode-neuron interface, such as electrode position and tissue impedances. Thus, the present analyses quantified the relationships between polarity sensitivity and (1) electrode position estimated via computed tomography imaging, (2) intracochlear resistance estimated via electrical field imaging, and (3) focused (steered quadrupolar) behavioral thresholds, which are believed to reflect a combination of local neural health, electrode position, and intracochlear resistance. Eleven adults with Advanced Bionics devices participated. To estimate polarity sensitivity, electrode-specific behavioral thresholds were measured in response to monopolar, triphasic pulses in which the central high-amplitude phase was either anodic (CAC) or cathodic (ACA). The polarity effect was defined as the difference in threshold response to the ACA compared to the CAC stimulus. Results indicated that the polarity effect was not related to electrode-to-modiolus distance, electrode scalar location, or intracochlear resistance. Large, positive polarity effects, which may indicate SGN degeneration, were associated with relatively high focused behavioral thresholds. The polarity effect explained a significant portion of the variation in focused thresholds, even after controlling for electrode position and intracochlear resistance. Overall, these results provide support for the theory that the polarity effect may reflect neural integrity in CI listeners. Evidence from this study supports further investigation into the use of polarity sensitivity for optimizing individual CI programming parameters.