An article from The Times of London on how the brain processes sound.
The article, by neuroscientist Dr Simon Blackburn, is a bit of a cheat: it is not really about what the brain does, but about how it is built.
Dr Blackburn and his team at the University of Manchester used a technique called electrophysiology to examine the brains of animals, including that of a Labrador retriever.
They found that the brain was able to distinguish between sounds with and without high-frequency content.
“This suggests that the human brain is sensitive to high-frequency sounds and is capable of distinguishing them from sounds without high frequencies,” said Dr Blackburn.
“It is an area of research that is still very young, but we are definitely on the way to understanding more about the human mind.”
Dr Blackburn’s findings are in line with a theory of sound detection known as “optical sound processing”.
According to this theory, the electromagnetic energy in a sound leaves tiny electrical traces on the skin of the ears and nose, allowing the brain to identify whether the sound is high or low in frequency.
For example, a high-frequency sound may make a tiny electrical trace on the ear canal.
For a low-frequency sound, the same process leaves a much smaller electrical mark on the surface of the ear.
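The high-versus-low distinction the article describes can be pictured computationally. The sketch below is purely illustrative: it is not the brain’s mechanism or Dr Blackburn’s method, and the cutoff frequency is an invented parameter. It simply uses a Fourier transform to decide where most of a signal’s energy sits.

```python
import numpy as np

def classify_frequency(signal, sample_rate, cutoff_hz=1000.0):
    """Label a signal 'high' or 'low' by comparing its spectral
    energy above and below cutoff_hz (illustrative only)."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    high_energy = spectrum[freqs >= cutoff_hz].sum()
    low_energy = spectrum[freqs < cutoff_hz].sum()
    return "high" if high_energy > low_energy else "low"

# A 2 kHz tone classifies as 'high'; a 200 Hz tone as 'low'.
t = np.linspace(0, 1, 8000, endpoint=False)
print(classify_frequency(np.sin(2 * np.pi * 2000 * t), 8000))
print(classify_frequency(np.sin(2 * np.pi * 200 * t), 8000))
```

A real auditory system does nothing so crude, but the example makes concrete what “distinguishing sounds with and without high frequency” means in signal terms.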
The brain is thought to recognise these signals because it is able to detect them in the first place.
But the sound-recognition system is still in its infancy, Dr Blackburn said.
It is still unknown exactly how the human auditory system works, but the research showed it to be much more complex than previous theories suggested.
For one thing, there are so many possible sound sources that the signal alone is not enough to tell which one a sound comes from, said Dr Stephen Jones, a researcher in the department of cognitive neuroscience at University College London.
For another, the brain uses different processing strategies depending on what is happening at the time.
“The brain uses various forms of inference,” he said.
For instance, if a sound contains both high-level and low-level components, the system uses the “high” form of inference to judge whether it is coming from a high- or a low-frequency source.
The same goes for other low-level cues such as pitch.
If the high-level sound is very similar to what is going on in the background, the low-loudness form of inference would be used to identify it.
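The cue-switching Dr Jones describes can be sketched as a simple decision rule. This is an illustrative toy only: the function name, the decibel comparison, and the margin threshold are all invented for the example and do not come from the research.

```python
def choose_inference(sound_level_db, background_level_db, margin_db=3.0):
    """Illustrative cue selection: when a sound barely stands out
    from the background, fall back on the low-loudness form of
    inference; otherwise use the level-based ('high') form."""
    if abs(sound_level_db - background_level_db) < margin_db:
        return "low-loudness inference"
    return "high-level inference"

print(choose_inference(62.0, 61.0))  # sound blends into background
print(choose_inference(75.0, 50.0))  # sound stands well clear
```

The point of the toy is only that the strategy chosen depends on how the sound relates to its background, which is the behaviour the article attributes to the brain.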
Dr Jones said the brain could improve its accuracy by “learning” to detect sounds that are low or high in loudness.
For this, it needs to learn what it is hearing, he said, rather than relying on its experience of high- and low-intensity sounds.
“These are sounds that it has never heard before and that are associated with particular patterns in the brain,” Dr Jones said.
He said that even though the research was done in humans, the findings might be useful for other animals.
“I think this study is going to help us to see whether we can develop better technology for the human ear,” he told The Sunday Telegraph.
“In the future, you might be able to use it in animals, to learn more about how the auditory system works in other animals.”
Topics:human-interest,science-and-technology,art-and-design,psychology,arts-and-entertainment,london-metropolitan-7210,united-kingdom