UMN Auditory Perception and Cognition Lab

Time: September 2017 - present

PI: Andrew Oxenham

I joined the Auditory Perception and Cognition Lab in September 2017. My research spans a variety of topics in psychoacoustics and computational modeling of the auditory system.

Neural basis of pitch perception

My PhD focused on investigating pitch perception with harmonic complex tones composed of low-numbered but high-frequency harmonics (e.g., harmonics 6-10 of a 1400 Hz F0). Previous research demonstrated that accurate pitch perception is possible with these tones despite the fact that they do not elicit strong phase locking to temporal fine structure in the auditory nerve. My goal was to elucidate the neural code that underlies this phenomenon. To this end, I combined psychophysical methods, computational models of the auditory periphery, and ideas from statistical estimation theory (e.g., the Cramér–Rao lower bound) to probe the ability of listeners to perceive the pitch of complex tones at high frequencies.
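For readers unfamiliar with it, the Cramér–Rao lower bound is the standard estimation-theoretic result limiting how precisely any unbiased estimator (here, a hypothetical decoder of F0 from neural responses) can recover a parameter; in its usual form,

```latex
\operatorname{Var}(\hat{\theta}) \;\ge\; \frac{1}{I(\theta)},
\qquad
I(\theta) = \mathbb{E}\!\left[\left(\frac{\partial}{\partial\theta}\,\log p(x \mid \theta)\right)^{\!2}\right],
```

where \(\theta\) would correspond to F0 and \(p(x \mid \theta)\) to the distribution of (modeled) auditory-nerve responses. This is a general sketch of the bound, not the specific formulation used in the project.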

Profile analysis

Another topic I have pursued is profile analysis. In profile-analysis tasks, listeners are asked to identify when one component of a complex sound is incremented in level, even while the sound’s overall level is randomized from interval to interval. Using phenomenological models of the auditory system, I have been exploring how representations of profile-analysis stimuli in the auditory nerve and inferior colliculus relate to psychophysical performance in the task at low and high frequencies.

Role of pitch in the complex auditory scene

In my first project in the lab, I examined how F0 differences between a target talker and a harmonic complex-tone masker benefit speech segregation under a variety of conditions. This research provided novel insights into how listeners can “glimpse” target harmonics between resolved masker harmonics. Most notably, the results pose an interesting challenge for those attempting to build “cancellation” models of F0-based speech segregation.

UMN Computational Visual Neuroscience Lab

Time: March 2019 - present

PI: Kendrick Kay

As part of my NSF-NRT Graduate Training Program in Sensory Science Fellowship, I am currently pursuing a research project in the Computational Visual Neuroscience Lab. In this project, I am analyzing data from the Natural Scenes Dataset, a massive multi-session, multi-subject, high-resolution fMRI dataset collected at the Center for Magnetic Resonance Research. Specifically, I am using encoding models and thalamocortical correlation analyses to explore the functional organization of the human pulvinar.

Eriksholm Research Center

Time: May 2019 - August 2019

PI: Lars Bramsløw

During the summer of 2019, I worked as a research intern at Oticon’s Eriksholm Research Center. At Eriksholm, I researched tools and techniques for visualizing and interpreting deep neural networks designed to process and separate speech.

UT Dallas Speech Perception Lab

Time: May 2015 - May 2017

PI: Peter Assmann

As an undergraduate research assistant, my primary role was working on a project investigating the perception of indexical properties in children’s speech. Using stimuli from the North Texas Vowel Database modified with the STRAIGHT vocoder, I examined how listeners utilize fundamental frequency and formant frequencies when perceiving age and gender in children’s speech. In particular, I focused on conditions of reduced spectrotemporal resolution (through the use of tone vocoders) and on differences between normal-hearing and cochlear-implant listeners.