From 2009 to 2018, I was a researcher in linguistics, specializing in:
- phonetics (the study of the perception and production of speech sounds),
- psycholinguistics (the study of real-time language processing, and how language and cognition affect one another), and
- neurolinguistics (the study of the structure and function of the brain as it affects language).
This page is an overview of some of my projects during that time. You can find a full list of my published work here. A handful of these projects have example code hosted on GitHub.
Speech biomarkers of mental illness
Collaborators: Vijay Mittal, Matt Goldrick, Joseph Keshet
Can speech be used as a diagnostic tool in clinical psychology? Psychotic disorders like schizophrenia impact motor control, affecting posture and head and limb movement. There’s also some evidence for motor and movement disruptions in young adults who are at risk for these disorders but haven’t yet been diagnosed. Because speaking depends on a high degree of motor control, it’s plausible that these disruptions also affect speech. If so, the voice may serve as a non-invasive, early-warning tool for detecting psychosis risk in young adults.
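To give a rough sense of the kind of acoustic measures this line of work builds on, here is a minimal sketch - not the actual study pipeline - that pulls simple speech-timing features (speaking time and pauses) out of a recording. The file name, silence threshold, and feature set are placeholders chosen for illustration; it assumes the librosa and numpy Python packages.

```python
# Toy sketch: simple speech-timing features from a single recording.
# Not the pipeline used in the published work; values are illustrative.
import librosa
import numpy as np

# "recording.wav" is a placeholder path to a mono speech recording.
y, sr = librosa.load("recording.wav", sr=16000)

# Find non-silent stretches; top_db is a rough silence threshold.
speech_intervals = librosa.effects.split(y, top_db=30)

# Durations of speech stretches and of the gaps (pauses) between them, in seconds.
durations = (speech_intervals[:, 1] - speech_intervals[:, 0]) / sr
gaps = (speech_intervals[1:, 0] - speech_intervals[:-1, 1]) / sr

features = {
    "total_speech_s": float(durations.sum()),
    "mean_utterance_s": float(durations.mean()) if len(durations) else 0.0,
    "mean_pause_s": float(gaps.mean()) if len(gaps) else 0.0,
    "pauses_per_min": 60 * len(gaps) / (len(y) / sr),
}
print(features)
```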
- Click for a public-friendly video introduction to the topic
- Sichlinger, L., Cibelli, E., Goldrick, M., & Mittal, V. A. (2019). Clinical correlates of aberrant conversational turn-taking in youth at clinical high-risk for psychosis. Schizophrenia Research, 204, 419.
- Vargas, T., Osborne, K. J., Cibelli, E. S., & Mittal, V. A. (2019). Separating hearing sensitivity from auditory perceptual abnormalities in clinical high risk (CHR) youth. Schizophrenia Research, 204, 437-438.
- NIH 1R21MH119677-01A1
Acquiring speech sounds in a new language
Your native language(s) have a big impact on how you perceive and pronounce speech sounds. In some cases, this can make it hard to acquire sounds in a new language, especially if you start learning as an adult. In my dissertation, I explored several ways to teach adult learners challenging sounds in a new language, focusing both on auditory training (helping you hear the differences between new sounds) and articulatory training (helping you position your tongue, lips, and larynx to pronounce them).
- Cibelli, E. (2020). Articulatory and perceptual cues to non-native phoneme perception: Cross-modal training for early learners. Second Language Research.
- Cibelli, E. (2020). Training non-native consonant production with perceptual and articulatory cues. Phonetica, 77(1), 1-28.
Automatic detection of cognitive difficulty
Collaborators: Matt Goldrick, Rhonda (McClain) Mudry, Joseph Keshet, Yossi Adi
Do cognitive challenges (e.g. speaking a second language, or being an older speaker) leave detectable traces in speech? This project tries to answer that question by developing automatic processing algorithms that detect subtle variations in pronunciation. Those subtle cues may indicate that a speaker is working harder than usual to find the right word or put a sentence together.
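To give a flavor of the approach, the snippet below is a toy sketch of the recurrent-network component such a segmenter is built from: a bidirectional LSTM that assigns a label to every acoustic frame. The published models (Adi et al., below) pair an RNN with structured prediction over whole segmentations, which is not shown here; the feature dimension and label set are assumptions for illustration, and the code uses PyTorch.

```python
# Toy frame-labeling sketch (not the published structured-prediction model).
import torch
import torch.nn as nn

class FrameSegmenter(nn.Module):
    def __init__(self, n_features=39, hidden=128, n_labels=2):
        super().__init__()
        # n_features: e.g. MFCCs plus deltas per 10 ms frame (an assumption here).
        self.rnn = nn.LSTM(n_features, hidden, batch_first=True,
                           bidirectional=True)
        self.out = nn.Linear(2 * hidden, n_labels)

    def forward(self, frames):            # frames: (batch, time, n_features)
        states, _ = self.rnn(frames)      # (batch, time, 2 * hidden)
        return self.out(states)           # per-frame label scores

model = FrameSegmenter()
dummy = torch.randn(1, 200, 39)           # ~2 seconds of 10 ms frames
scores = model(dummy)                     # (1, 200, 2)
labels = scores.argmax(dim=-1)            # toy boundary / no-boundary decisions
```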
- Goldrick, M., McClain, R., Cibelli, E., Adi, Y., Gustafson, E., Moers, C., & Keshet, J. (2018). The influence of lexical selection disruptions on articulation. Journal of Experimental Psychology: Learning, Memory, and Cognition, 45(6), 1107–1141.
- Adi, Y., Keshet, J., Cibelli, E., & Goldrick, M. (2017, March). Sequence segmentation using joint RNN and structured prediction models. In the 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (pp. 2422-2426). IEEE.
- Adi, Y., Keshet, J., Cibelli, E., Gustafson, E., Clopper, C., & Goldrick, M. (2016). Automatic measurement of vowel duration via structured prediction. The Journal of the Acoustical Society of America, 140(6), 4517-4527.
Color perception and categorization
Collaborators: Terry Regier, Yang Xu, Joe Austerweil, Tom Griffiths
How accurate are your perception and memory of physical stimuli? When it comes to color, what you perceive and remember is sometimes (but not always!) affected by the language you speak. Our work in this domain suggests that you are most influenced by the categories of your native language (“basic” color words like red, blue, and green) when cognitive demands are high. In those circumstances, we see speakers of different languages recalling and classifying the same color in different ways.
The neural pathways of word recognition
Collaborators: Keith Johnson, Eddie Chang, Matt Leonard
Measuring neural activity during language processing is a challenge - in part because it happens so quickly and across wide networks of the brain. A technique called electrocorticography (ECoG) allows a rare look into the simultaneous spatial and temporal dimensions of language processing. We used this approach to track the neural pathways that process real words and word-like nonsense forms (e.g. “tesolivy”, “piteretion”) millisecond by millisecond, and millimeter by millimeter.
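One common early step in ECoG analyses of this kind - shown here only as an illustrative sketch, not the exact pipeline from the paper below - is to estimate the high-gamma (~70-150 Hz) amplitude envelope on each electrode, which tracks local cortical activity at millisecond resolution. The band limits, filter order, and toy data are assumptions; the code uses numpy and scipy.

```python
# Illustrative sketch: high-gamma amplitude envelope per electrode.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def high_gamma_envelope(ecog, fs, low=70.0, high=150.0):
    """ecog: (n_electrodes, n_samples) voltage traces sampled at fs Hz."""
    b, a = butter(4, [low, high], btype="bandpass", fs=fs)
    filtered = filtfilt(b, a, ecog, axis=-1)       # isolate the high-gamma band
    return np.abs(hilbert(filtered, axis=-1))      # analytic amplitude per sample

# Toy usage: 64 electrodes, 1 second of fake data at 1 kHz.
fs = 1000
fake_ecog = np.random.randn(64, fs)
envelope = high_gamma_envelope(fake_ecog, fs)      # shape (64, 1000)
```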
- Cibelli, E. S., Leonard, M. K., Johnson, K., & Chang, E. F. (2015). The influence of lexical statistics on temporal lobe cortical dynamics during spoken word listening. Brain and Language, 147, 66-75.
Speech and aging
Collaborators: Susanne Gahl, Kat Hall, Ronald Sprouse
We know that the voice changes throughout childhood, and again late in life. But what happens during the longest span of life - early and middle adulthood? This question is challenging to study, because it’s rare to have recordings of a person’s voice over many decades. To tackle it, we measured speech from the Up! films - a documentary series that has revisited the same group of people every seven years. The resulting corpus is freely available for language and speech researchers to use.
- Gahl, S., Cibelli, E., Hall, K., & Sprouse, R. (2014). The "Up" corpus: A corpus of speech samples across adulthood. Corpus Linguistics and Linguistic Theory, 10(2), 315-328.