We’ve all heard of Heisenberg’s Uncertainty Principle, which puts a fundamental limit on the precision with which a particle’s position and momentum can be simultaneously known: the more precisely one is determined, the less precisely the other can be. There’s a similar idea in acoustics, called the Fourier Uncertainty Principle. It arises from Fourier analysis, a widely used mathematical method for decomposing complex waves into their component frequencies. Unlike Heisenberg’s, it represents not an intrinsic property of the source, but a limit on what linear algorithms can extract from a signal.
It deals with two properties of sound: frequency (or pitch) and timing. If you read music, you know pitch as the vertical axis and timing as the horizontal axis. According to the Fourier Uncertainty Principle, these two properties cannot both be determined beyond a certain joint precision, called the Gabor limit. This implies that the more finely two pitches can be distinguished, the less accurately the time between them can be known, and vice versa.
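The Gabor limit can be checked numerically. The sketch below (a minimal illustration assembled for this article, not taken from the paper) computes the RMS widths of a sound’s energy distribution in time and in frequency. A Gaussian burst is the signal that saturates the bound, so its product comes out at 1/(4π) ≈ 0.0796; no signal can score lower.

```python
import numpy as np

def tf_widths(signal, dt):
    """RMS widths of a signal's energy distribution in time (s) and
    frequency (Hz); the Gabor limit says their product is >= 1/(4*pi)."""
    n = len(signal)
    t = (np.arange(n) - n / 2) * dt
    p = np.abs(signal) ** 2
    p /= p.sum()
    mu_t = (t * p).sum()
    sigma_t = np.sqrt(((t - mu_t) ** 2 * p).sum())

    q = np.abs(np.fft.fft(signal)) ** 2
    q /= q.sum()
    f = np.fft.fftfreq(n, d=dt)
    mu_f = (f * q).sum()
    sigma_f = np.sqrt(((f - mu_f) ** 2 * q).sum())
    return sigma_t, sigma_f

dt = 1e-4                                  # 10 kHz sampling
t = (np.arange(8192) - 4096) * dt
gauss = np.exp(-t**2 / (2 * 0.01**2))      # Gaussian envelope, sigma = 10 ms
st, sf = tf_widths(gauss, dt)
print(st * sf, 1 / (4 * np.pi))            # product ~0.0796, right at the bound
```

Shortening the burst shrinks the time width but broadens the frequency width in exact proportion; that trade-off is what a linear analysis cannot escape.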
Tell that to the human brain. In a new paper in Physical Review Letters (free download on arXiv), Jacob N. Oppenheim and Marcello O. Magnasco of Rockefeller University tested human subjects and found “Human Time-Frequency Acuity Beats the Fourier Uncertainty Principle.”
The time-frequency uncertainty principle states that the product of the temporal and frequency extents of a signal cannot be smaller than 1/(4π). We study human ability to simultaneously judge the frequency and the timing of a sound. Our subjects often exceeded the uncertainty limit, sometimes by more than tenfold, mostly through remarkable timing acuity. Our results establish a lower bound for the nonlinearity and complexity of the algorithms employed by our brains in parsing transient sounds, rule out simple “linear filter” models of early auditory processing, and highlight timing acuity as a central feature in auditory object processing. (Emphasis added.)
PhysicsWorld published a readable summary of this technical paper:
Oppenheim and Magnasco discovered that the accuracy with which the volunteers determined pitch and timing simultaneously was usually much better, on average, than the Gabor limit. In one case, subjects beat the Gabor limit for the product of frequency and time uncertainty by a factor of 50, clearly implying their brains were using a nonlinear algorithm.
That algorithm is encoded in the ear and the brain. Since musicians performed especially well on the tests, this ability can evidently be improved with training. The article ends with a neuroscientist’s opinion about this ability to outperform human-designed algorithms:
Mike Lewicki, a computational neuroscientist at Case Western Reserve University in Ohio, says the research is “a nice demonstration that our perceptual system is doing complex things – which, of course, people have always known – but this is a nice quantitative demonstration by which, even at the most basic level, using the most straightforward stimuli, you can demonstrate that the auditory system is doing something quite remarkable”.
The ability of the human auditory system to beat the Gabor limit was first discovered in 1970, but was “not picked up by the broader scientific community, partly because cochlear processes were not then understood.” As scientists have learned more in the interim, they are beginning to piece together the mechanisms by which the ear and brain achieve such “hyper acuity.”
The write-up on PhysOrg includes short audio clips where you can test your own ears. It explains the physical basis of our non-linear sound processing:
The researchers think that this superior human listening ability is partly due to the spiral structure and nonlinearities in the cochlea. Previously, scientists have proven that linear systems cannot exceed the time-frequency uncertainty limit. Although most nonlinear systems do not perform any better, any system that exceeds the uncertainty limit must be nonlinear. For this reason, the nonlinearities in the cochlea are likely integral to the precision of human auditory processing.
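The gap between a linear filter bank and a nonlinear estimator can be made concrete. The toy sketch below (my own illustration, not the authors’ method or a model of the cochlea) hides a Gaussian tone burst with a known parametric form, then recovers its frequency and timing by peak interpolation on the spectrum and on the envelope, both nonlinear operations. The joint error lands orders of magnitude below 1/(4π), because the Gabor limit constrains a signal’s widths, not the accuracy with which a known model’s parameters can be estimated.

```python
import numpy as np

fs = 10_000.0
t = np.arange(0, 0.5, 1 / fs)
f0, t0, sigma = 440.3, 0.12345, 0.005        # hidden "true" pitch and timing
x = np.exp(-(t - t0) ** 2 / (2 * sigma ** 2)) * np.cos(2 * np.pi * f0 * (t - t0))

def parabolic(y, i):
    """Refine a peak index by fitting a parabola through three log-values
    (exact for a Gaussian-shaped peak)."""
    a, b, c = np.log(y[i - 1]), np.log(y[i]), np.log(y[i + 1])
    return i + 0.5 * (a - c) / (a - 2 * b + c)

# Frequency: interpolated peak of the magnitude spectrum (a nonlinear step).
spec = np.abs(np.fft.rfft(x))
f_hat = parabolic(spec, int(np.argmax(spec))) * fs / len(x)

# Timing: interpolated peak of the Hilbert envelope (also nonlinear).
n = len(x)
h = np.zeros(n); h[0] = h[n // 2] = 1; h[1:n // 2] = 2
env = np.abs(np.fft.ifft(np.fft.fft(x) * h))
t_hat = parabolic(env, int(np.argmax(env))) / fs

print(abs(f_hat - f0) * abs(t_hat - t0))     # far below 1/(4*pi) ~ 0.0796
```

This only works because the estimator assumes a signal family and fits it; a generic linear time-frequency analysis of the same burst is still bound by the Gabor limit.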
What’s the implication for intelligent design? It’s another example of “over-design” in nature — an ability that exceeds the requirement for mere survival. Evolutionists might be able to concoct a story that animals and humans needed precision in the timing and pitch of sounds to escape predators, but such a story would not provide a cause for generating the random mutations that resulted in a finely tuned system. It would only imply that without the equipment, the animal would probably not survive. But that’s the issue: how could a blind, purposeless process come up with it in the first place?
Random mutations would have had to occur separately in the cochlea, auditory nerve, and brain without a plan for hyper acuity as the endpoint. As much as that story stretches credulity, the intelligent design explanation makes sense. The only cause we know that can produce integrated high-performance systems like this is intelligence.
Recognizing that humans have a mechanism for distinguishing sounds above the theoretical limit leads to another thought: acoustic engineers might learn from the ear-brain system about how to do it. Oppenheim and Magnasco end their paper with that tantalizing idea:
Elucidation of which mechanism underlies our subjects’ auditory hyperacuity is likely to have wide-ranging applications, both in fields where matching human performance is an issue, such as speech recognition, as well as those more removed, such as radar, sonar and radio astronomy.
The study of human hearing, therefore, weaves seamlessly into biomimetics, a field inspired by intelligent design in nature. Who would have thought that radio astronomers could benefit from the study of human hearing? New Scientist comments that, now that we know “Musical brains smash audio algorithm limits,” “it should be possible to improve upon today’s gold-standard methods for audio perception.”
From eardrum to algorithm, this is an intelligent-design story all the way.
Photo credit: Eric Casequin.