A new study led by McGill University has found that somatosensory input plays a role in the neural processing of speech. In other words, how a speech sound is perceived can change depending on how the facial skin is stretched.
According to the investigators, how a word is heard depends on the direction in which the facial skin is stretched. To test this, they engineered a robotic device that stretches the facial skin in a way similar to what speakers experience while talking.
The study was conducted by David Ostry, a neuroscientist in McGill's Department of Psychology, together with colleagues from Haskins Laboratories and the Research Laboratory of Electronics at the Massachusetts Institute of Technology. Around 75 native speakers of American English took part. The participants listened to words presented one at a time, drawn from a computer-produced continuum between the words “head” and “had.” The results revealed that stretching the skin changed which word the listeners heard.
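For readers curious how such a continuum can be built, here is a minimal sketch assuming the standard approach of interpolating formant frequencies between the two vowels. The formant values and step count below are illustrative assumptions, not the study's actual synthesis parameters.

```python
import numpy as np

# Illustrative endpoint formants (Hz) for the vowels in "head" (/E/)
# and "had" (/ae/). These are typical textbook values, assumed here
# for demonstration only.
HEAD_F1, HEAD_F2 = 550.0, 1850.0
HAD_F1, HAD_F2 = 750.0, 1750.0

N_STEPS = 10  # assumed number of continuum steps


def formant_continuum(n_steps=N_STEPS):
    """Linearly interpolate F1/F2 between the two vowel endpoints."""
    weights = np.linspace(0.0, 1.0, n_steps)
    f1 = (1 - weights) * HEAD_F1 + weights * HAD_F1
    f2 = (1 - weights) * HEAD_F2 + weights * HAD_F2
    return list(zip(f1, f2))


for i, (f1, f2) in enumerate(formant_continuum(), start=1):
    print(f"step {i:2d}: F1 = {f1:6.1f} Hz, F2 = {f2:6.1f} Hz")
```

Each step's formant pair would then drive a speech synthesizer, producing stimuli that shade gradually from one word to the other.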
When the skin was stretched upward, listeners were more likely to hear “head”; when it was stretched downward, they were more likely to hear “had.” A backward stretch had no perceptual effect. The direction of the skin stretch thus shaped which word was heard.
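A common way to quantify a perceptual shift of this kind is to fit a logistic psychometric function to the proportion of “head” responses at each continuum step for each stretch condition, then compare the fitted category boundaries. The sketch below follows that standard practice; the response proportions are invented purely for illustration and are not the authors' published data or analysis.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, x0, k):
    """Psychometric function: P('head') as a function of continuum step.
    x0 is the 50% category boundary; k controls the slope."""
    return 1.0 / (1.0 + np.exp(k * (x - x0)))

steps = np.arange(1, 11)  # continuum steps, "head"-like to "had"-like

# Hypothetical response proportions, fabricated for illustration only:
# an upward stretch shifts the boundary toward "had"-like steps (more
# "head" responses overall); a downward stretch shifts it the other way.
p_head_up = np.array([0.99, 0.98, 0.95, 0.90, 0.78, 0.55, 0.30, 0.12, 0.04, 0.01])
p_head_down = np.array([0.97, 0.92, 0.80, 0.58, 0.32, 0.14, 0.06, 0.02, 0.01, 0.01])

for label, data in [("upward", p_head_up), ("downward", p_head_down)]:
    (x0, k), _ = curve_fit(logistic, steps, data, p0=[5.0, 1.0])
    print(f"{label} stretch: category boundary at step {x0:.2f}")
```

The difference between the two fitted boundaries gives a single number for how far the skin stretch moved the perceptual category.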
The investigators concluded that the subjects' choices were influenced by the way their facial skin was stretched.
The study authors say their findings shed light on how the brain processes speech, revealing that speech perception is neurally linked to the mechanisms of speech production.
The findings are published in the Proceedings of the National Academy of Sciences.