Neuroscience researchers at The University of Texas at Dallas have found that prolonged exposure to loud noise alters how the brain processes speech, potentially increasing the difficulty of distinguishing speech sounds.
In a paper published recently in the journal Ear and Hearing, entitled “Behavioral and Neural Discrimination of Speech Sounds After Moderate or Intense Noise Exposure in Rats,” researchers demonstrated for the first time how noise-induced hearing loss affects the brain’s recognition of speech sounds.
Noise-induced hearing loss (NIHL) spans the demographic spectrum, affecting an estimated 15 percent of Americans between the ages of 20 and 69, according to the National Institute on Deafness and Other Communication Disorders (NIDCD), a branch of the National Institutes of Health (NIH) that conducts and supports research pertaining to normal and disordered processes of hearing, balance, taste, smell, voice, speech, and language.

The NIDCD, which is celebrating its 25th anniversary of improving the lives of people with communication disorders this year, notes that exposure to intensely loud sounds leads to permanent damage of the stereocilia — mechanosensing organelles of hair cells that act as sound receivers in the inner ear. Once damaged, the stereocilia do not grow back, leading to NIHL.
“As we have made machines and electronic devices more powerful, the potential to cause permanent damage has grown tremendously,” says Dr. Michael Kilgard, a co-author of the study and the Margaret Fonde Jonsson Professor of Neuroscience in the UT Dallas School of Behavioral and Brain Sciences. “Even small MP3 players can reach volume levels that are highly damaging to the ear in a matter of minutes.”
A UT Dallas release notes that Dr. Kilgard focuses on conducting neuroscience research that can be translated into clinical settings. Research into using vagus nerve stimulation to treat tinnitus and stroke recovery has yielded initial success toward this goal. His early work involved exploring how the brain understands sounds and developing methods for manipulating this encoding. He continues this work but also pursues research to improve treatments for stroke, traumatic brain injuries, autism and the phantom sounds associated with tinnitus.
“My motivation is to conduct experiments with the greatest likelihood of translating the findings to human treatments for neurological and psychiatric disorders,” Dr. Kilgard says. “A billion people worldwide have one of these conditions, and right now we mostly treat the symptoms. In the future, it will be possible to treat the underlying disease mechanism.”
Dr. Kilgard has received multiple grants totaling several million dollars from the National Institute on Deafness and Other Communication Disorders and the National Institute of Neurological Disorders and Stroke. His 1998 Science paper, “Cortical map reorganization enabled by nucleus basalis activity,” remains one of the most highly cited papers confirming the brain’s ability to change with experience. Dr. Kilgard began teaching at UT Dallas in 1999 as an assistant professor and was promoted to full professor in 2011.
He explains that his research interests relate to the general principles that underlie the remarkable self-organizing capability of the cerebral cortex, a continual process that optimizes its function to suit the environment an individual occupies. He notes that although cellular studies have demonstrated that plasticity mechanisms are dependent on correlation-based rules, we still do not understand the principles that govern how sensory experience alters the distributed responses of thousands of cortical neurons in a behaviorally useful manner.
The primary research objective in Dr. Kilgard’s UT Dallas laboratory is to understand how experience rewires the brain, and he and his team are particularly interested in how experience with speech sounds alters the brain. The researchers are presently combining speech training and environmental enrichment with pharmacological and electrical stimulation of the nervous system to develop a general theory of neural plasticity. Understanding the network-level changes that allow the brain to adapt to new situations and learn novel stimuli will aid in the development of new treatment strategies for neurological disorders including dyslexia, autism, and stroke.
Before this recent study, scientists had not clearly understood the direct effects of NIHL on how the brain responds to speech.
To simulate two types of noise trauma that clinical populations face, UT Dallas scientists exposed rats to moderate or intense levels of noise for an hour. One group heard a high-frequency noise at 115 decibels inducing moderate hearing loss, and a second group heard a low-frequency noise at 124 decibels causing severe hearing loss.
For reference, the American Speech-Language-Hearing Association lists the maximum output of an MP3 player or the sound of a chain saw at about 110 decibels and the siren on an emergency vehicle at 120 decibels. Regular exposure to sounds greater than 100 decibels for more than a minute at a time may lead to permanent hearing loss, according to the NIDCD.
The UT Dallas release notes that the researchers observed how the two types of hearing loss affected speech sound processing in the rats by recording neuronal responses in the auditory cortex a month after the noise exposure. The auditory cortex, one of the main sound-processing areas of the brain, is organized on a frequency scale, like a piano keyboard: neurons at one end of the cortex respond to low-frequency sounds, while neurons at the opposite end react to higher frequencies.
In the group with severe hearing loss, fewer than one-third of the tested auditory cortex sites that normally respond to sound reacted to stimulation. The sites that did respond showed unusual patterns of activity: the neurons reacted more slowly, the sounds had to be louder, and the neurons responded to narrower frequency ranges than normal. Additionally, the rats could no longer tell speech sounds apart in a behavioral task they had completed successfully before the hearing loss.
In the group with moderate hearing loss, the area of the cortex responding to sounds didn’t change, but the neurons’ reactions did. A larger area of the auditory cortex responded to low-frequency sounds, and neurons reacting to high frequencies required more intense sound stimulation and responded more slowly than those in normal-hearing animals. Despite these changes, the rats were still able to discriminate the speech sounds in a behavioral task.
“Although the ear is critical to hearing, it is just the first step of many processing stages needed to hold a conversation,” Dr. Kilgard notes. “We are beginning to understand how hearing damage alters the brain and makes it hard to process speech, especially in noisy environments.”
The research was funded through a grant from NIDCD. Other UT Dallas researchers involved in the study were Dr. Amanda Reed, Dr. Tracy Centanni, Michael Borland, Chanel Matney and Dr. Crystal Engineer.