“In C” as a Poly-rhythm Generator

The snow of last week is gone! I am rolling off a weekend full of beautiful images and sounds. From Archibald Motley’s vibrant, sensuous portraits to the sound of a roomful of clattering teacups and voices, singing, I can tell that Spring has Sprung! Hallelujah!!!

Vibrancy is the word this week. The “cy” on the end gives the word a shimmer to my ear, like brushes on a cymbal. Vibrancy comes from the Latin vibrare, which means to shake to and fro – so to vibrate, oscillate, shimmer in the sun. With all this shaking going on, I had the thought, “I wonder if ‘In C’ could work with only (or mostly) percussion instruments?” Let’s give it a try.

First, the voices – probably 5 or 6, with the optional 8th-note pulse (on a cowbell, perhaps – hehheh). To keep melodic interest, woodwinds and vibraphone come in on the phrases with sustained notes. In addition to the original two percussion instruments, I included some Indian drums, a beatbox, and an Irish drum rack that I put together. One thing I discovered right away with multiple percussion parts was that Patterns 1 and 2 can sound very off as they accumulate (see the “Who is Terry Riley, and What is ‘In C’?” blog post) because of the triplet feel they set against the eighth-note pulse. Too many accents land too randomly, especially if several of the voices have a loud bass drum or toms. To offset this, I placed most of the voices on the high-frequency end, where sounds are light and, well, shimmery.

Oh, and there is a bass track that plays a crucial role in holding this ensemble together. Every time I brought the bass in on the second pattern, the whole piece felt really thrown out of whack. Don’t get me wrong, I tend to like “out of whack.” I call myself The Idiosyncratic Beats of DeJacusse because I like for music to move in and out of time. I like to have things feel “a little off” or “not right” and then slide into a clear and deeply moving pulse. This effect can jar the listener into their head and then lull them right back out again. I do this to myself all the time. So, I tried dropping the Pattern 2 Bass in at different times in hopes of finding a sweet spot for it, but it was “out of whack” in a less than satisfying way, so I deleted Pattern 2 from the bass line repertoire.

The recording you are about to hear is the first 8 patterns of “In C”. I have started thinking of “In C” in sections, the first of which runs from Pattern 1 through 8. This group of patterns feels like an introduction to me. In the recording, I started all the voices off together on Pattern 1 and allowed them to branch off from that solid base. The base only lasts about 5 seconds, then things sound a little chaotic for half a minute or so, then it settles into a loose poly-rhythm. At about 1:30, you will hear what Trudie identified as the “This Old Man” theme, which is present in Patterns 2 and 3. All the voices end on Patterns 7 and 8.

I invite you to listen:

Pattern 7 is like no other pattern in “In C” – the count-in to the first notes played is 7 beats of silence, then 3 quick repetitions of middle C, then 9 beats of silence to finish the pattern. So, when played as a loop, there are 16 beats between soundings of the instrument. This pattern, along with the two sustained tones of Pattern 8, is the transition to what I consider the second section of “In C”, which is Patterns 7 through 20. This recording begins with the voices layering in on Pattern 7 with the woodwind on the Pattern 8 long tones, and ends with all the voices on two unison ta-das of Pattern 7.

I call this piece: “I dare you to listen to the whole thing.”

It is clear from these two recordings that “In C” contains a host of poly-rhythmic potentials. I will continue working with this percussion choir, and I will see what other unique combinations of voices this piece might lend itself to.

The Sun (Ra) Room Experiments

So it’s snowing in Durham, and snow’s what we’ve got.* And power, too! And as it happens, we have exciting ideas regarding a deep curiosity about sound vibrations. One of the ideas I am interested in exploring is sculpting sound in relation to the acoustics of a room. Sound engineers use acoustics when producing live or recorded sound to make sure that each musical voice in the room is heard, that the voices don’t get in each other’s way, and that each voice is in the optimal sonic relationship to the other voices. I don’t know how deeply sound engineers get into the actual mathematical analysis of the resonance of each room they set up in. It seems that most of the work is done by ear as opposed to by analysis. I am curious how a combination of these methods might impact the choice of voice combinations and pattern arrangements when performing “In C.”

With sound and acoustics there are so many variables because sound is a measurable phenomenon as well as a perceptual one. So, if a tree falls in the woods, molecules of air will be excited (the measurable phenomenon), but then a receiver needs to perceive the excitation and turn it into what we call “sound” (the perceptual phenomenon). The answer to that age-old tree-falling question actually depends on how you define sound. I lean toward the “someone needs to be there to hear it” answer because of the way the ear and the brain create our experience of sound.


I am amazed at the process of human hearing. How the outer ear is round and curved to catch the excited air molecules that ARE sound, and filter them so that we can sense where the sound is coming from. (That little triangle of cartilage that sticks up in front of the opening to the ear functions as a sound reflector.) How these molecules are resonated down a perfectly formed canal that vibrates the eardrum at specific frequencies so we can hear pitches. Then this Rube Goldberg contraption made up of three small bones translates the vibrations of the eardrum into vibrations that are picked up by the basilar membrane inside the fluid-filled cochlea. This membrane allows for greater distinctions of loud/soft and more pitch information. And from here, the vibrations are translated into electrical impulses that are carried to the brain, where we say, “I hear you.” What happens in the brain is a mystery, but many explorers are investigating this magical process. This is where we get into the arena of psychoacoustics.

In Daniel Levitin’s book, This Is Your Brain on Music, he describes the perception of sound as a “psychological phenomenon.” Levitin, a neuroscientist, musician and sound engineer, asserts that the qualities that define sound, such as pitch and timbre, are all in our heads. The reason he can say this with such assuredness is that neuroscientists have identified several tonotopic maps along the pathway from the outer ear to the brain. The basilar membrane contains hair cells (topped with tiny projections called stereocilia) that each fire only in response to a specific frequency. These hair cells are spread out over the membrane from low to high, much like a piano keyboard. The auditory cortex has a similar tonotopic map spread across the cortical surface. According to Levitin, pitch is so important that “the brain represents it directly; unlike most any other musical attribute, we could place electrodes in the brain and be able to determine what pitches were being played to the person just by looking at the brain activity.” In other words, playing a pure tone at 440 Hz will fire neurons in the brain at that exact same frequency!

I find this information exhilarating and daunting, and, for the moment, I feel the need to focus, so I am going to start by exploring measurable acoustic phenomena. A few years ago, Trudie had a sun room added to the back of our house. One of the major reasons for this addition was to house my music buddies who would come over to play and improvise. (She is so lovingly supportive of my passions!) As it turned out, due to the small size of the room, the multiple glass windows and the ceramic tile floor, it is ALIVE, acoustically speaking. Recordings come out really great, but it is hard to play live in the room because it is a wash of sound. It occurred to me that this room could be my laboratory for exploring acoustics. I can measure the audible spectrum of the room as a baseline and then formulate questions to explore as I go.

First, to analyze the room. It is a small rectangle with an alcove. I am aware that the alcove will throw my calculations off a bit, but I am going to treat this as a 12′ x 13.5′ rectangle. There is an algebraic formula that gets at the resonant frequencies of a rectangular space. The factors in the equation are the room measurements, the room volume, the reverberation time and the lowest frequency range of the room. The lowest frequency is found by dividing 1,130 ft/s (the speed sound travels per second) by 2 x the length of the longest wall (13.5′). In the sun room, that works out to about 42 Hz. This is the cut-off frequency for the low-end sounds in this room. The volume of a 12′ x 13.5′ x 8.25′ room is 1,336.5 cubic feet. The reverberation time will be my first experiment, as this will take some time to complete.
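For the curious, the arithmetic above can be sketched in a few lines of Python. This is only the simplest case – the axial (wall-to-wall) resonances of an ideal empty rectangle, using the 1,130 ft/s speed of sound from above; the alcove, glass, and tile of the real sun room would shift the actual numbers.

```python
C = 1130.0  # speed of sound in air, in feet per second

def axial_modes(length_ft, n_modes=3):
    """Axial (one-dimensional) room modes along a single dimension:
    f_n = n * c / (2 * L). The n = 1 mode is the room's low cutoff
    along that dimension."""
    return [n * C / (2 * length_ft) for n in range(1, n_modes + 1)]

# Sun room dimensions in feet (length, width, ceiling height)
L, W, H = 13.5, 12.0, 8.25

volume = L * W * H                 # 1336.5 cubic feet
cutoff = axial_modes(L, 1)[0]      # lowest axial mode, ~41.9 Hz

print(f"Volume: {volume} cu ft")
print(f"Low cutoff (longest wall): {cutoff:.1f} Hz")
print("First axial modes along the length:",
      [f"{f:.1f} Hz" for f in axial_modes(L)])
```

The same formula applied to the 12′ and 8.25′ dimensions gives the other two families of axial modes, which is one place the room's "wash of sound" could start to be mapped.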

*I started writing on Wednesday, February 12, when we had a winter storm that included 4″ of snow and some icy freezing rain.

Destiny’s Door-The Dream-Leaping the Threshold

So I have been reworking Leaping the Threshold by placing the trumpet voice more forward in the mix and the female voice more distant. At this moment I am listening to both and trying to decide which supports my concept better. The female voice in the forefront sounds more triumphant to my ear. When the voice is in the background with the trumpet forward, it sounds a bit mournful, like we didn’t quite make it. This makes the decision clear: voice forward!

Leaping the Threshold uses all stems from the original tune and resamples them in relation to each other. I am working on a second piece called The Dream, which would precede Leaping the Threshold. This piece is built around a synth line played in D Dorian. I love the Dorian mode. It is so full of longing. Whenever I hear it, my heart feels “yearny” (like in Sentimental Journey). Perhaps it is a bit sentimental, too. Whatever, it stirs up heart feelings, which I find uplifting and healing. The Dorian mode works with the stems from the original tune, which I believe is in D major. If you go to the piano and play a D major scale – D E F# G A B C# D – then a D Dorian – D E F G A B C D – you can hear that the two modulate between each other in a pleasing way. The feeling flows very naturally from the D major to the Dorian in a melancholy way, almost like a shoulder shrug.
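To make that comparison concrete, here is a tiny sketch (the note spellings and helper function are my own, not from any music library) showing that D major and D Dorian share six of their eight scale tones, differing only at the 3rd and 7th degrees:

```python
# Scale degrees as semitone offsets from the tonic D.
MAJOR  = [0, 2, 4, 5, 7, 9, 11, 12]   # whole/half pattern: W W H W W W H
DORIAN = [0, 2, 3, 5, 7, 9, 10, 12]   # whole/half pattern: W H W W W H W

# Chromatic note names starting from D (sharps only, for simplicity).
NOTE_NAMES = ['D', 'D#', 'E', 'F', 'F#', 'G',
              'G#', 'A', 'A#', 'B', 'C', 'C#']

def spell(intervals):
    """Turn semitone offsets into note names relative to D."""
    return [NOTE_NAMES[i % 12] for i in intervals]

print("D major: ", spell(MAJOR))    # D E F# G A B C# D
print("D Dorian:", spell(DORIAN))   # D E F G A B C D

# Only the 3rd and 7th degrees differ (F# -> F, C# -> C),
# which is why the two modes slide into each other so smoothly.
diff = [i for i, (a, b) in enumerate(zip(MAJOR, DORIAN)) if a != b]
print("Differing scale degrees (0-indexed):", diff)   # [2, 6]
```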

So the Dorian synth line driftily loops within a framework of the male choir stem and the gritty bass stem from the original tune. The gritty bass stem has some excellent wispy, skittish artifacts that scurry back and forth percussively in the upper part of the sonic frame while the bass growls low. It is soo cool; I love how it uses the sonic space. The male choir serves as an enhancer of the synth line, swelling in some places and receding in others. The bells approach lightly throughout the first section and then begin to toll. The choir fades and the bass line comes in at a slightly faster tempo, driving the second movement. The last movement adds a low bass ostinato and another slight uptick in tempo, giving it a sense of adrenalin rush.

I was thrilled to discover that I can tweak the tempo of sections of the tune once I have recorded it in Ableton. I can see the starting tempo down to the hundredth of a beat per minute (BPM), and I can raise the tempo as much or as little as I want. I like to apply PHI – the Golden Mean – as a template when I have a relational move to make while sculpting the sound. I don’t know if it matters, but this measurement is accessible and usually produces the effect I am listening for. The accelerations are barely noticeable, but they give a feeling of the excitement and fear one has when taking a leap of faith.
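How exactly PHI gets applied varies from piece to piece, but one hypothetical sketch of the idea looks like this: each section's tempo bump shrinks by a factor of 1/PHI, so the accelerations get progressively subtler (the function name and starting values here are illustrative, not from any session file):

```python
PHI = (1 + 5 ** 0.5) / 2   # golden ratio, ~1.618

def golden_tempo_steps(start_bpm, first_step, n_sections):
    """Hypothetical scheme: each tempo increase is the previous
    increase divided by PHI, so later accelerations are gentler --
    barely noticeable, as described above."""
    tempos, step = [start_bpm], first_step
    for _ in range(n_sections - 1):
        tempos.append(tempos[-1] + step)
        step /= PHI
    return tempos

# Four sections starting at 120 BPM with an initial +2 BPM move.
print([f"{t:.2f}" for t in golden_tempo_steps(120.0, 2.0, 4)])
```

The tempos keep rising, but each rise is about 62% the size of the last, which matches the "barely noticeable" quality of the accelerations.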

The bass rumblings under the third part of The Dream are low piano/strings that are very full and throaty. I tried several different rhythmic emphases for the piano/strings ostinato – one slower, more methodical clip sounded a bit too much like “Rite of Spring” – Jaws Edition. The one you hear was played slower, then double-timed, with a beat repeat thrown in. While it has a pulse feeling, it is not a driving downbeat. I hear the piano/strings as a racing heartbeat underlying the movement patterns of the bass, synth and percussion. The piano/strings move in a triplet feel under the syncopated percussion artifacts in the bass, with the synth floating alongside. Again I am feeling the influence of “In C” as I shape a pulse-driven piece where the individual voices cascade over each other.

This is why I feel that what I am doing is painting with sound. If a musician or sound engineer listens to this piece, while I think they could find elements to appreciate, they would likely find it unsatisfying. And they would be right; what I am creating probably doesn’t fit their aesthetic template. I listen with big ears, listen deeply, and engage with the interplay of feeling and tension. This interplay is not always a tight 4/4 or an expected progression. It doesn’t always have a hook. I hear the interplay as conversational, even monologic at times. Then there are all the dynamics and direction that carry the piece forward: the internal narrative based on a progression of feeling. Everything has that in it. Everything. I believe that everything is made of molecules and stories.

So the internal narrative of Destiny’s Door-The Dream-Leaping the Threshold begins in a haze and unfolds like a dawning, and when the bell tolls it is time to roll. Excitement and adrenalin fuel the last segment until a swelling choir of voices gives birth to the song of wide open. Give a listen:

The contest rules limit the length of an entry to 90 to 120 seconds. The whole DD-TD-LT piece is 130 seconds. Oh, well. To be honest, I am not sure the move from D Dorian to D major works very well. I liked the sound of D major to D Dorian, but the reverse sounds a bit forced.

So I entered the two sections as separate entries.

I learned a lot from this process, and I plan to keep the stems and play around with them. I have not found a good tutorial or a good method for warping non-percussive melodic stems like strings, voice, or flutes (oh, that reminds me – I incorrectly identified one of the stems as flutes, but it is very high woodwinds) in Ableton. One tutorial told me to turn off the auto-warp preference, so I did that. As I work with the stems, getting them lined up with the 1 and moving beat points into sync, I find the clip BPM keeps changing. And sometimes I pull the audio all out of whack and it gets distorted. This makes the whole warping process seem like a lot of guesswork. I am exposing the depth of my ignorance here. Oh, well.

The first public attunement of “In C” will be happening very soon. So please, stay tuned!

Hans Zimmer, or The Silver Carrot

I am distracted from my exploration of “In C” by a very exciting remix contest offered through Soundcloud. The prolific film composer, Hans Zimmer, has recorded a tune called “Destiny’s Door” and he wants me to use at least one stem from the original work and create my own tune. (Stems are the individual tracks that are mixed together to make the recorded song.) Three remix composers will be chosen to interview for a job with Zimmer’s music studio, Bleeding Fingers Custom Music Shop. Wow! Whoa! Hmmmmm…. Dream job or nightmare? For starters, the studio name suggests a work ethic much more hard-core than my own. Plus, I love living in Durham. But I am getting ahead of myself.

So let’s take a look back.

This is my third remix contest, sort of. The first one was Erin Barra’s “Good Man” remix. While the subject matter was a bit uninspiring, Erin’s plaintive voice and boisterous back-up singers lent themselves to a tribal lament. I missed the entry deadline, and I wasn’t thrilled with the final product. The tune bogged down in the middle. And those background singers speaking in tongues should have come in waaaay sooner. So, here for the first time is “Straight Grrls Lament:”

Then Kenna offered up his song “Love is Still Alive” for a remix. I enjoyed working with the concept of being out of one’s mind in love for “Alliwanishu:”

And now this epic, cinematic, fantastic opportunity has presented itself. I know that at some point in the future, I will be creating sound scores for films and video. This remix opportunity is a dream come true for me. Just the chance to work with stems created by Hans Zimmer is unbelievably exciting. So let’s get to it!

First, have a listen to Hans Zimmer’s “Destiny’s Door”:

It is beautiful – particularly the strings and the rich wall of sound in the end swell. What I ended up creating has the same harmonic feel because I used the same stems Zimmer did, EXCEPT for the brass. I did not like the brass stem and felt that it shattered some of the vibrancy of the end swell in the master mix.

Zimmer’s “Destiny’s Door” is made up of nine stems. There are the string, percussion and brass tracks, which figure prominently in the original. There are also a solo female voice, a male choir, a flute, bells, a trumpet and a gritty bass. Once the tracks are downloaded to my computer, I listen to them carefully. I listen for the story each track is telling, for the movement and feeling evoked in the track. In this case, the title “Destiny’s Door” provides much inspiration. I was immediately drawn to the percussion, the solo female voice, the trumpet and, of course, the strings.

The female voice and the trumpet play the same theme, which is one of the primary themes of the piece. The theme is a pentatonic scale – A B C# D E – laid out in octave and half-step/whole-step intervals. There is something about that tonal move up an octave and then a half step/whole step that conveys a feeling of having gone the extra mile. It feels like throwing the javelin or the long jump – the octave is the long sprint and the half step is the jump release. The feeling is of moving through Destiny’s Door, so the title of the theme I am developing is Leaping the Threshold. I like getting a title early on in the process because it gives me a template for making choices.

Even though the solo voice and the trumpet occupy some of the same frequency territory, they have a nice blend. Putting these two voices together will challenge me to improve at quantizing and warping audio. In Ableton Live, I can import audio clips, and Ableton will identify transients (the loudest moments in an audio recording) and synchronize the clip to the overall session tempo. This method is effective because the loudest moments usually happen on a beat. Ableton is really good at syncing percussion clips but somewhat challenged by more legato voices. For example, the solo female voice track had some vibrato, which was picked up by the warp engine as transients. When there are a lot of identified transients, the audio has little glitches in it. Since part of what I am going for here is a strong, pure tone, these glitches must be dealt with. So it seems to be a process of weeding through the transients, setting the warp markers correctly, then placing the audio so that it syncs with the other parts.
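Ableton's actual warp engine is proprietary, so the following is only a toy sketch of the general principle: a crude detector flags frames whose energy jumps well above the previous frame. On a synthetic clip with one clear "hit" it finds the hit cleanly, and it is easy to see how a wavering vibrato, whose energy also rises and falls, could trip the same test and produce spurious transients.

```python
import math

SR = 8000  # sample rate for this toy example, in Hz

def rms_frames(sig, size=256):
    """Short-frame RMS energy of a signal, one value per frame."""
    return [math.sqrt(sum(x * x for x in sig[i:i + size]) / size)
            for i in range(0, len(sig) - size, size)]

def onsets(sig, size=256, jump=4.0):
    """Flag frames whose energy jumps well above the previous frame --
    a crude stand-in for transient detection."""
    e = rms_frames(sig, size)
    return [i for i in range(1, len(e)) if e[i] > jump * (e[i - 1] + 1e-9)]

# A toy clip: 0.1 s of near-silence, then a decaying 440 Hz "hit".
quiet = [0.001] * (SR // 10)
hit = [math.sin(2 * math.pi * 440 * t / SR) * math.exp(-5 * t / SR)
       for t in range(SR // 10)]
clip = quiet + hit

# The detector flags the frame where the hit begins.
print("Onset frames:", onsets(clip))
```

A real warp engine does far more (spectral analysis, beat grids, stretch algorithms), but the core tension is the same: anything that makes frame energy fluctuate, vibrato included, looks like a candidate transient.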

After quite a bit of trial-and-error time, I found a layering of the tracks that I liked. The voice and trumpet come in ahead of and behind each other, in what is sometimes an echo and sometimes a call and response. I came up with two samples from the percussion stem that work as a kind of galloping beat. The much faster strings also drive this galloping feeling. While the harmonic feel is similar to the original end swell, I invite you to listen to and feel the very different pace, energy and space in the sonic field.

Working with “In C” these last few months has really opened my ear to all the harmonic and rhythmic possibilities that can happen when you change the relationship of musical phrases to each other. You can take the same few notes and make wildly different songs from them. I am more and more interested in making little phrases and setting them in conversation with each other.