So it’s snowing in Durham, and snow’s what we’ve got.* And power, too! And as it happens, we have exciting ideas regarding a deep curiosity about sound vibrations. One of the ideas I am interested in exploring is sculpting sound in relation to the acoustics of a room. Sound engineers use acoustics when producing live or recorded sound to make sure that each musical voice in the room is heard, that the voices don’t get in each other’s way, and that each voice is in the optimal sonic relationship to the other voices. I don’t know how deeply sound engineers get into the actual mathematical analysis of the resonance of each room they set up in. It seems that most of the work is done by ear rather than by analysis. I am curious how a combination of these methods might impact the choice of voice combinations and pattern arrangements when performing “In C.”
With sound and acoustics there are so many variables because sound is a measurable phenomenon as well as a perceptual phenomenon. So, if a tree falls in the woods, molecules of air will be excited (the measurable phenomenon), but then a receiver needs to perceive the excitation and turn it into what we call “sound” (the perceptual phenomenon). The answer to that age-old tree-falling question actually depends on how you define sound. I lean toward the “someone needs to be there to hear it” answer because of the way the ear and the brain create our experience of sound.
I am amazed at the process of human hearing. How the outer ear is round and curved to catch the excited air molecules that ARE sound, and filter them so that we can sense where the sound is coming from. (That little triangle of cartilage that sticks up in front of the opening to the ear functions as a sound reflector.) How these molecules are resonated down a perfectly formed canal that vibrates the eardrum at specific frequencies so we can hear pitches. Then this Rube Goldberg contraption made up of three small bones translates the vibrations of the eardrum into vibrations that are picked up by the basilar membrane, curled in fluid inside the cochlea. This membrane allows for greater distinctions of loud/soft and more pitch information. And from here, the vibrations are translated into electrical impulses that are carried to the brain where we say, “I hear you.” What happens in the brain is a mystery, but many explorers are investigating this magical process. This is where we get into the arena of psychoacoustics.
In Daniel Levitin’s book, This Is Your Brain on Music, he describes the perception of sound as a “psychological phenomenon.” Levitin, a neuroscientist, musician and sound engineer, asserts that the qualities that define sound, such as pitch and timbre, are all in our heads. The reason he can say this with such assuredness is that neuroscientists have identified several tonotopic maps within the pathway from the outer ear to the brain. The basilar membrane contains hair cells that fire only in response to a specific frequency. These hair cells (topped with tiny bristles called stereocilia) are spread out over the membrane from low to high much like a piano keyboard. The auditory cortex has a similar tonotopic map spread across the cortical surface. Even the brain itself is a tonotopic map. According to Levitin, pitch is so important that “the brain represents it directly; unlike most any other musical attribute, we could place electrodes in the brain and be able to determine what pitches were being played to the person just by looking at the brain activity.” In other words, playing a pure tone at 440 Hz will fire neurons in the brain at that exact same frequency!
I find this information exhilarating and daunting and, for the moment, I feel the need to focus, so I am going to start by exploring measurable acoustic phenomena. A few years ago, Trudie had a sun room added to the back of our house. One of the major reasons for this addition was to house my music buddies who would come over to play and improvise. (She is so lovingly supportive of my passions!) As it turned out, due to the small size of the room and the multiple glass windows and ceramic tile floor, it is ALIVE, acoustically speaking. Recordings come out really great, but it is hard to play live in the room because it is a wash of sound. It occurred to me that this room could be my laboratory to explore acoustics. I can measure the audible spectrum of the room as a baseline and then formulate questions to explore as I go.
First, to analyze the room. It is a small rectangle with an alcove. I am aware that the alcove will throw my calculations off a bit, but I am going to treat this as a 12′ x 13.5′ rectangle. There is an algebraic formula that gets at the resonant frequencies of a rectangular space. The factors in the equation are the room measurements, the room volume, the reverberation time and the lowest frequency of the room. The lowest frequency is determined by dividing 1,130 ft/s (the speed sound travels through air) by twice the length of the longest wall (13.5′). In the sun room, that works out to 1,130 ÷ 27, or about 42 Hz. This is the cut-off frequency for the low-end sounds in this room. The volume of a 12′ x 13.5′ x 8.25′ room is 1,336.5 cubic feet. The reverberation time will be my first experiment, as this will take some time to complete.
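For anyone who wants to follow along with the math, here is a small Python sketch of the calculation (the function name and script are my own; it assumes the rectangular approximation and the 1,130 ft/s speed of sound used above, and computes the simple axial resonances between each pair of parallel surfaces):

```python
# Axial (one-dimensional) room-mode frequencies for a rectangular room.
# Assumes speed of sound = 1,130 ft/s, as in the text above.
SPEED_OF_SOUND = 1130.0  # feet per second

def axial_modes(dimension_ft, count=4):
    """First `count` axial resonant frequencies (Hz) for one room dimension.

    The fundamental is c / (2 * L); higher modes are whole-number multiples.
    """
    return [n * SPEED_OF_SOUND / (2 * dimension_ft) for n in range(1, count + 1)]

# Sun room dimensions: 13.5' long, 12' wide, 8.25' ceiling
for name, dim in [("length", 13.5), ("width", 12.0), ("height", 8.25)]:
    freqs = ", ".join(f"{f:.1f} Hz" for f in axial_modes(dim))
    print(f"{name} ({dim}'): {freqs}")
```

Running this lists the first few resonant frequencies for each pair of parallel surfaces; the longest wall produces the lowest mode, and frequencies below it get little reinforcement from the room.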
*I started writing on Wednesday, February 12, when we had a winter storm that included 4″ of snow and some icy freezing rain.