Lessons and Carols

So the year comes to a close and I look back on many lessons learned and much mulch for the sound garden in my mind. This morning, I woke early and went to the project folder in Ableton. Looking around at dozens of unfinished pieces and parts, I felt this deep satisfaction and supreme excitement at all these ideas that Ableton allows me to capture. Most everything I want to hear in my soundscapes I can sculpt out of Ableton. Working primarily in Ableton puts a kind of mark on my sound, so that some people might recognize certain instruments or synths or pads as being from Ableton. When people said that to me, it kinda felt like something I should try to “fix”. Then I realized that Ableton Live is more than just software; it is the medium in which I work. So it is fine to recognize the medium in which I create sound. It would be like saying “I see you use watercolors.” or “Sounds like you are playing a guitar.” So Ableton is the arena from which I sound my world.

Throughout the year, it was hard not to notice that Ableton and “In C” are a really fabulous couple! It is like they were made for each other. Ableton’s clips and scenes perfectly accommodate the patterns of “In C” in a variety of voicings. Even if you don’t listen all the way through, I urge you to go back and just listen for 30 seconds to some of the samplings of this partnership. If nothing else came from this year, my collaboration with these two is fertile ground for future growth. I know I am not finished with “In C” as a sound text for further exploration.

Spending so much time with this piece has helped me develop compositional frameworks and identify further questions for sound exploration. “In C” forced me into a daily practice of listening deeply into its musical layers of sound. What an amazing experience it has been! There is so much going on in the harmonics of this piece. One of the most interesting phenomena in musical perception is the absolute presence of the fundamental tone! If you play all the harmonics, but NOT the fundamental, the human brain will “hear” the fundamental tone. This fact of our existence makes me weep with joy. AND it takes me where I want to go as a sound sculptor – into harmonics and healing. This, coupled with an interest in the Law of Octave (an obvious force of nature to be tapped into), will be leading me as I practice in the coming year. And, don’t forget Accelerated Harmonics, my made-up concept for bumping or swelling harmonics over the fundamental.
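If you want to hear this phenomenon for yourself, here is a minimal sketch (my own illustration, assuming Python with numpy and scipy installed) that builds a three-second tone from harmonics 2 through 6 of a 110 Hz fundamental while leaving the 110 Hz component out entirely. Played back, the pitch still seems to sit at the missing fundamental.

```python
# A minimal sketch of the "missing fundamental" effect: synthesize a tone
# from harmonics 2-6 of a 110 Hz fundamental, with no energy at 110 Hz itself.
# Most listeners still hear the pitch as 110 Hz.
import numpy as np
from scipy.io import wavfile

sr = 44100                        # sample rate in Hz
t = np.arange(0, 3.0, 1 / sr)     # three seconds of sample times

f0 = 110.0                        # the fundamental we will NOT play
signal = sum(np.sin(2 * np.pi * n * f0 * t) / n for n in range(2, 7))

signal /= np.max(np.abs(signal))  # normalize to the -1..1 range
wavfile.write("missing_fundamental.wav", sr, (signal * 32767).astype(np.int16))
```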

Another interesting thought from the year is that, with Ableton as my medium, most every sound created comes from… well, non-sound. Every sound is based on the creation and manipulation of sine waves, not the disturbance of a physical medium we associate with sound production. In my opinion, sine waves seem to have been born to become binary code with their elegant compression/rarefaction oscillating form. Sine waves are like the molecules of digital sound. (I always say that Ableton allows me to manipulate the molecules of music.) So sound from a non-sound source is one of the challenges I run into when reading about audio production. The assumption is that audio production is about recording acoustic sound into digital format. A great many important considerations (types and placement of microphones, latency) are not issues for creating sound from a digital format. This is where I am stuck at the moment. I am not really sure if there are significant differences between these two sound sources when it comes to using effects, mixing and mastering. It seems like there should be. I think I hear a difference. The digital sounds brighter and higher in a rather full way to me. The lows seem to be squashed. I know I favor higher frequencies, and have great respect for the power of the lower frequencies. Anyway, my questions are:

/how does the sound of recording an acoustic instrument through a microphone into a track in Ableton differ from the sound of a midi-instrument “recording” in a track? The way to discern the difference is through listening (headphones, monitors, stereos), through spectrum analysis both in live space and in the medium, and through further understanding of sampling and sound creation in the digital realm. (A rough sketch of this spectrum comparison follows after these questions.)
/in what ways do these differences impact the mixing and mastering process between these two sound sources?
Answers to these questions and more to be discovered in the coming year.
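As a first pass at the spectrum-analysis route, here is a hedged sketch of how the two sources could be compared outside of Ableton. It assumes the same phrase has been exported twice as WAV files – acoustic_take.wav and midi_render.wav are placeholder names – and simply totals the average spectral energy of each in three rough bands (lows, mids, highs).

```python
# Compare the average spectra of an acoustic recording and a midi render
# of the same phrase, band by band. File names are placeholders.
import numpy as np
from scipy.io import wavfile

def average_spectrum(path, n_fft=8192):
    sr, data = wavfile.read(path)
    if data.ndim > 1:                          # fold stereo down to mono
        data = data.mean(axis=1)
    data = data.astype(float)
    frames = len(data) // n_fft
    spec = np.zeros(n_fft // 2 + 1)
    for i in range(frames):                    # average magnitude over frames
        chunk = data[i * n_fft:(i + 1) * n_fft] * np.hanning(n_fft)
        spec += np.abs(np.fft.rfft(chunk))
    freqs = np.fft.rfftfreq(n_fft, 1 / sr)
    return freqs, spec / max(frames, 1)

freqs, mic_spec = average_spectrum("acoustic_take.wav")
_, midi_spec = average_spectrum("midi_render.wav")
for lo, hi in [(20, 250), (250, 2000), (2000, 18000)]:     # rough bands in Hz
    band = (freqs >= lo) & (freqs < hi)
    print(f"{lo}-{hi} Hz  mic: {mic_spec[band].sum():.0f}  midi: {midi_spec[band].sum():.0f}")
```

If the digital renders really are brighter with squashed lows, it should show up as a tilt toward the top band relative to the microphone takes.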

My year with “In C” taught me to let go of expectations and to allow ‘what is’ to happen. I am disappointed that I was not able to organize the all night version of “In C.” As the Fall approached with its tremendous heart-breakingladdening, I was not as caught up in the piece as I was at the beginning of the year. The energy to organize a community event was not there. Some day, something like this will happen. I def need the help of others to pull it off.

The music and soundpainting I create from now on will be highly influenced by what I have heard “In C”. The layering of voices, the overlapping of frequencies, the relationship between frequency, amplitude and accelerated harmonics, the power of ostinato, the power of long tones, the tidal push and pull of rhythm, the edges of the spectral field that can be tonally considered in a given “key”—all of this and so much more have been my gifts from this amazing year. Thanks to Terry Riley, Susanne Romey, Xopher Thurston, Chris Eubanks, and everyone who listened to me, asked questions, and shared this experience with me. Your loving attention means so much to me. I hope you will continue to read about my work as I move to a new WordPress blog. There will be one last post here for this year. Thanks again for witnessing!

Mixing it up

With all of our ADF classes completed for the Fall, attention can be focused in the studio. There are always abundant projects to be developed and finished. Finishing means getting a recording of a tune or soundscape that represents the piece as a “hard copy.” Since most of my Ableton Projects are works in progress with space available for others to chime in, it is possible that there will be multiple and very different versions over the lifetime of a piece. As with “In C”, the parts (clips and some animation) will be the same with each hearing, but how they weave together to create a whole and the fullness of that whole is subject to the Now and who else is in it. It is my hope that many of the soundings of my compositions will be only in that moment in time, never to be heard again, while the core of the piece will always remain.

In order to get a hard copy, I put the voices together in my favorite room to play – my head. I am playing in that space like I never have before, paying attention to which voice is where, how much space the voice takes up, and how it fits in or stands apart from the other voices. All of these considerations are to further the storyline of the piece of music. And listening through headphones is one experience of it, while listening through monitors is another. As I create the mix for headphones, the position and movement of the voices is a big priority. For example, there is a processed shaker sound during Phrygia: Hera’s Saga that feels as if it moves right through my head thanks to the panning effect on it. When this sound is played through monitors, there is a feeling of it moving up and out through the room, so the direction and distance the sound travels comes across quite differently to me in each of these diffusion settings. I want to experiment with different ways of mixing with different priorities for these two modes of experiencing.

The mix for Phrygia: Hera’s Saga has gone through numerous transformations. I have a mix of the first two movements Waken and Move that I am very happy with. The voices blend when I want them to blend and stand apart when I want them to stand apart. The sound is full and the voices dance around in the mix, taking turns being up front. The last three movements The Chase, Catch the Shadow and Kundalini Joy have been more difficult to mold. I have a good recording and have spent hours sculpting the mix. While engaged in this process, I am consulting Bob Katz’s Mastering Audio and Mixerman’s Zen and the Art of Mixing. Both these guys have a lot of mixing experience and they have very different approaches with lots of good info.

One of the techniques I was working with in The Chase was an abundance of reverb tail on two instruments, which I liked playing around with to obscure the attack on the fundamental tone. This type of sound is often referred to as “muddy” in the mixing world. I find it rather magical to have the entire soundscape awash in reverberant harmonic tones. As with most magical things, this needs to be used wisely and not excessively. I am bordering on excessive in this piece partly because I am using the reverb tails as a background wash for the main themes, AND the lead instruments are providing both the main theme and the harmonic wash. A plucked samisen and vibes are the lead instruments and they mirror at times and interact at times. This morning I used some EQ techniques suggested by Bob Katz. I used an EQ 8 on the strings because they are providing most of the background wash. I ended up using a spectrum to identify the main fundamental tones in my high end material (bells, tambourine, shaker) and then dipped these tones out of the plucked samisen. This seems to have worked in that I still have plenty of reverb wash, but it isn’t constantly overwhelming the spectrum. The high end parts were accelerating some harmonics in the main instruments; now they have their own space and the harmonics are backgrounded more.
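Outside of EQ Eight, the same “dip the fundamentals” move can be sketched with a few narrow notch filters. This is only an illustration: the file name and the three frequencies below are made up, standing in for whatever the spectrum actually shows for the bells, tambourine and shaker.

```python
# Notch a few offending fundamentals (found with a spectrum analyzer)
# out of the samisen track so the high-end parts get their own space.
import numpy as np
from scipy.io import wavfile
from scipy.signal import iirnotch, filtfilt

sr, samisen = wavfile.read("samisen_track.wav")   # placeholder file name
samisen = samisen.astype(float)

for freq in [2093.0, 3136.0, 4186.0]:             # made-up fundamentals to dip
    b, a = iirnotch(freq, Q=8.0, fs=sr)           # narrow dip centered on freq
    samisen = filtfilt(b, a, samisen, axis=0)

wavfile.write("samisen_dipped.wav", sr, samisen.astype(np.int16))
```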

I accidentally discovered a commonly used mixing technique when I mixed two different Ableton renderings of the same track, with slightly different animation, together in Audacity. What had sounded weak and tepid now has presence when mixed this way. This is called “double tracking” and is a common practice when mixing tracks of vocals or guitar on band mixes. Part of the fun of my work is that I have a lot of resources about mixing and mastering audio, and I have to figure out how to apply these concepts within the virtual realm in which I work.
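Here is the same happy accident sketched in code instead of Audacity: blend two renderings of the same material at equal level, with the second take nudged by a few milliseconds so the copies do not sit exactly on top of each other. The file names are placeholders for two exported renders.

```python
# Double tracking in miniature: sum two renders of the same track,
# one of them offset by ~12 ms, then normalize with a little headroom.
import numpy as np
from scipy.io import wavfile

sr, take_a = wavfile.read("render_a.wav")
_, take_b = wavfile.read("render_b.wav")
take_a = take_a.astype(float)
take_b = take_b.astype(float)

offset = int(0.012 * sr)                     # nudge the second take by ~12 ms
take_b = np.roll(take_b, offset, axis=0)

n = min(len(take_a), len(take_b))
mix = 0.5 * take_a[:n] + 0.5 * take_b[:n]    # equal blend of the two takes
mix = mix / np.max(np.abs(mix)) * 0.9        # normalize, leaving headroom

wavfile.write("double_tracked.wav", sr, (mix * 32767).astype(np.int16))
```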

Mixing Phrygia: Hera’s Saga down to a thirty minute hard copy took many weeks and required many breaks to rest my ears. This piece has a bright sound, especially initially, and while I like this sound, I am aware that it can wear ears out, especially through headphones. When I listen to the entire 29 minute piece, I hear a frequency movement that begins low mid range, then moves high and then ends with a growling, rumbling bass taking a main theme at the end. While I have a full recording that mixes the whole thing as one piece, I ended up putting the piece into two tracks on my Bandcamp site. I have this as an album, and I may add some other tracks I have been working on over the course of this year. The main thing is that this is for Sarah Sage and all that she gave to me. I am so thrilled that she has emerged from her medical trial by fire with so much strength. I am not surprised as I know very intimately the healing capacity of the great love she carries. My constant prayer is that she will allow herself that healing and not just look to her tribe and their experts for how to proceed on her path. This prayer is sent forth in the music that comes from remembrance.

Phrygia (Hera’s Saga): A new soundscape by the idiosyncratic beats of dejacusse

My dream is to co-create musical soundscapes for dance, theatre, yoga classes and art galleries. I am living this dream as I speak it. Since retiring, I have had the opportunity to create soundscapes for dance and art galleries. My next art gallery soundscape will be performed on August 15th at The Makery in conjunction with photographer Allie Mullin’s show Svadhyaya: Discovering Self Through Asana. I feel very connected to this idea as I have experienced shifts in my physical/emotional/spiritual body from doing yoga asanas.

I began the soundscape as I usually do by ear searching through the Ableton library for some basic sounds for the current project. Percussion and plucked strings came to the forefront, and I began laying down ideas. Several ambient synths made their way in to fill out the opening sonic palette. Then tempo became a powerful consideration. I began with a languid, trance-like rhythm, perfect for the grounded still place from which asanas are approached. Then there was a need to energetically engage. The beginning tempo was 120 bpm, so I played around with increasing tempos and layering in more parts. For the grooves, I focused on a broad drum kit that contained pretty much every percussion hit one could ask for, from samba whistles to four different floor toms to cymbals of various diameters and tonal qualities. Then I added a drum rack that was as small as the first one was large, containing maracas, cymbals, tambourines and agogo bells. These two racks allowed me to work out some lovely groove varieties that can be pulled in at whatever tempo at any given moment.

I got stuck mid-week, caught up in melodic figures feeling too facile, not enough depth for my ear. I am working in E Phrygian mode, which makes E the tonic of the primary scale for the piece. In terms of chakra tones the E is related to the heart chakra, which feels very fitting given the theme of Allie’s exhibit. While E Phrygian is a natural minor mode, it can be shifted to a dominant mode by raising the third degree of the scale. So I played around with that for a while. Ableton allows me to play parts into a clip using a midi keyboard, or I can insert a clip and draw in the notes where I want them. I can move notes around and change the grid to accommodate note lengths as short as a 1/32nd. I can adjust rhythmic relationships and even build in a “live” feel by adjusting quantize settings to less than 100%. I once told a friend that Ableton allows me to manipulate the molecules of music!
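For anyone curious what a less-than-100% quantize actually does to the numbers, here is a toy sketch – not Ableton’s algorithm, just the general idea: each note start gets pulled only part of the way toward the nearest grid line, so some of the played-in feel survives.

```python
# Partial quantize: move each note start toward the nearest grid line
# by a strength between 0 (leave as played) and 1 (snap fully to grid).
def partial_quantize(starts_in_beats, grid=0.25, strength=0.6):
    """Pull note starts toward the nearest grid position by `strength`."""
    quantized = []
    for start in starts_in_beats:
        nearest = round(start / grid) * grid
        quantized.append(start + strength * (nearest - start))
    return quantized

# Notes played slightly off a sixteenth-note grid, then 60% quantized.
played = [0.02, 0.27, 0.49, 0.78, 1.03]
print(partial_quantize(played))   # 0.02 -> 0.008, 0.78 -> 0.762, and so on
```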


Here is a screenshot of my Ableton template so far. The columns on the left are tracks that contain clips. Each track houses a particular instrument voice. Each clip is a phrase that can loop or play once or repeat two, three, however many times I choose. I can set the loop to play for a certain number of measures and then trigger a new behavior. The column on the right is the Master fader and trigger for each scene. The lines across are referred to as “scenes” which are full of melodic/rhythmic statements. The entire piece is divided into 5 sections that get increasingly faster with more complex layers. Sorry the picture isn’t clearer, but it gives you an idea of what I am talking about with using Ableton.
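If it helps to picture the grid without the screenshot, here is a toy model in code. This is not how Live stores anything internally – it just mirrors the description above: tracks are columns holding clips, scenes are rows, and launching a scene triggers one clip per track.

```python
# A toy model of Session View: tracks hold clips keyed by scene row,
# and launching a scene reports what each track would play.
from dataclasses import dataclass, field

@dataclass
class Clip:
    name: str
    loop_bars: int      # how many bars before the clip loops or retriggers

@dataclass
class Track:
    instrument: str
    clips: dict = field(default_factory=dict)   # scene index -> Clip

def launch_scene(tracks, scene):
    """Show which clip each track plays when a scene row is triggered."""
    for track in tracks:
        clip = track.clips.get(scene)
        print(track.instrument, "->", clip.name if clip else "(stop)")

samisen = Track("plucked samisen", {0: Clip("theme A", 4), 1: Clip("theme B", 8)})
vibes = Track("vibes", {1: Clip("answer", 8)})
launch_scene([samisen, vibes], 1)
```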

“In C” is influencing my approach to the work as I develop patterns that can be played in unison, or overlapped in counterpoint and still have sonic integrity. This is where things get fun. The melodic instruments I am using are a plucked samisen (a three-stringed Japanese musical instrument), a bass, something called New Age Strings, and, of course, vibes. I LOVE the sound of vibes and I doubt I will ever create a piece without them. I frequently end up crafting a long, conversational melodic line with them; no hook, just a stream of consciousness flow of intervals. I will someday challenge myself to solo for as many measures as I can. For now, the final scene, at 300 bpm, will be the space for the vibe conversation. It will be my Pattern 35.

I am spending this second week of work finding the organizational flow for performing the soundscape. How will I move from one scene to the next? How do the clips overlap rhythmically and sonically as the tempo rises? Today I color coded clips by scene and instrument type. I named some of the rhythmic clips so I would have an idea of the feel of each one. Some of the big drum kit grooves may need some tweaking. I am thinking about moving forward and then backward through the scenes. I want to add in some acoustic sounds like vocalized Sanskrit words and some rattles and bells.

This afternoon I played the piece forward through four tempo changes and then back three. I am really happy with the way the clips all hang together through all the tempo changes. I have some momentary off-the-beat grooves on high bells that really give a kick at the right moments. The piece ran 37 minutes – I was laughing with Trudie that my soundscapes always seem to come in at about a half hour – the length of my attention span! (Not bad) Anyway, I listened to the whole thing again and got this idea to take a half a dozen hand percussion instruments and invite the folks at the party to “talk” to the soundscape. Anyone who wants can carry one around and just talk back when they notice something in the sound as it unfolds. I think this would be cool.

Here is a sampling of the opening as it is at the moment:

So now I have a satisfactory backup recording to load onto the iPad – I always like to be ready in case my main computer malfunctions. (Jody Cassell has ingrained in me the need for having backups. It is a smart practice.) And I am feeling very good about this piece being able to extend over a long period of time. The first section ran 8 minutes and it could easily go 20 maybe 30 minutes. The fastest section is short and then I start moving backward through the piece bringing the tempo down. I discovered (for myself; you probably knew this already) that raising the tempo abruptly works most of the time, but lowering it abruptly, not so much. So I will map the tempo adjustment to a knob on my interface so that I can turn it down slowly. This will also allow for a lengthening of the piece.
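The knob mapping itself happens in Ableton, but the idea behind the slow turn-down can be sketched in a few lines: instead of jumping straight from the fastest section back down, step the tempo a little on every bar. The numbers below are illustrative, not the actual tempo map of the piece.

```python
# Ease the tempo from one value to another, one small step per bar,
# rather than dropping it abruptly.
def tempo_ramp(start_bpm, end_bpm, bars):
    """Return one tempo value per bar, moving linearly from start to end."""
    step = (end_bpm - start_bpm) / max(bars - 1, 1)
    return [round(start_bpm + i * step, 1) for i in range(bars)]

print(tempo_ramp(300, 120, 8))
# [300.0, 274.3, 248.6, 222.9, 197.1, 171.4, 145.7, 120.0]
```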

The name came to me as I sat drinking a spicy tea which warmed me into a lucid dream state. “Hera’s Saga” is an anagram of a special name for someone with whom I have a deep heart connection. Plus Hera was the Goddess of Marriage (particularly fitting in this case) and the reigning female deity of Mount Olympus, the home of the Greek Gods and Goddesses. Sagas are, of course, stories. “Phrygia” refers to the E Phrygian modality the piece is rooted in. I was looking for a Sanskrit name, but this one seems right and good to me. Reminds me of younger days when I thought I had finally found my religion in Wiccan/Goddess Spirituality. So powerful to move from a lifetime of God as old white guy to the vast, suppressed history of female deities.

Isis, Astarte, Diana, Hecate, Demeter, Kali, Inanna. One of my first chants.

I digress. If you live anywhere near Durham, NC and are up for seeing some wonderful photos and hearing some awesome grooves, please do come!


Introducing the Orchestral Sextet

In the quest for interesting and varied voicings for “In C” within the Ableton Library of midi instruments, I offer for your consideration – The Orchestral Sextet. Four string voices ranging from a synth viola to a full string ensemble pizzicato-ing, spiccato-ing and staccato-ing – each available as a separate voicing in the ensemble. The synth viola provides a lush and laggy underpinning for the stabs and plucks of the string ensemble voices. Then, layered over this are the woodwinds in staccato and full voiced modes. The staccato voice is softer and slightly breathy, while the full ensemble produces long, high, rich tones and a lovely midrange. For this post, the Orchestral Sextet will perform a slice of “In C” that runs from Pattern 21 to 26. This is a slice that I love because of the dotted quarter triplets swaying together and in counterpoint to each other.


(Here you can see two of the patterns – 22 & 25 – as an example)

Patterns 22 – 26 are dotted quarter note steps from E to B with the raised F that was introduced in Pattern 14. The raised F in “In C” creates a tritone against the tonic. When the F# is sounded for the first time in Pattern 14, an ominous tension emerges in the piece. It is particularly unsettling in contrast to the sweet C dyads and triads that begin the piece. By the time we get to Patterns 22 – 26, the tonic C has been dropped for a while, so the F# is heard in the harmonic context of an E Dorian modal movement (whole-half-whole-whole). Now it sounds sweet, if a bit melancholy.

In addition to the tonal content and the waltzy feel, these five patterns vary in length from 19 pulses to 25 pulses. So the lag accumulation effect is compounded here, with multiple iterations of each pattern in lag with itself and in lag with the other patterns. The softer attacks of some string techniques cause the beat to feel laggier still, creating even more density. One of the voices, the synth viola, has that “behind the beat” feel. For this rendition, I decided to put that instrument on Pattern 21, which is a 6 pulse sustained F#. This is what swells up and overpowers at the very end of the recording.
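A quick back-of-the-envelope look at why that accumulation keeps shifting: two looping patterns of different pulse lengths land in a new relationship on every cycle, and only line up again after the least common multiple of their lengths. The sketch below uses 25 and 19 pulses as an example pair.

```python
# Where does pattern B's downbeat fall inside pattern A on each cycle?
from math import lcm          # Python 3.9+

def phase_offsets(len_a, len_b, cycles=6):
    """Offset (in pulses) of pattern B's start within pattern A, cycle by cycle."""
    return [(i * len_b) % len_a for i in range(cycles)]

print(phase_offsets(25, 19))   # [0, 19, 13, 7, 1, 20] - a new relation each time
print(lcm(25, 19), "pulses until the two patterns realign")   # 475
```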

A little over half way through the recording you will hear that sustained F# tone fall way back in the mix. You have to REALLY listen for it, but it is there. That quiet drone of the F# tone creates a rich backwash of sound in which the other voices play. I hear it as a kind of “chorus effect” that impacts the overall sound. Buzzing that tonal center low and quiet in the background seems to amplify and integrate the overall sound of the mix. This is more evident through headphones.

Also of note (tingtingtingting) – no pulse! Since the midi instruments can play together very precisely, I want to exploit the opportunity to ditch the pulse when it feels cumbersome. Hearing those dotted quarter note phrases sustained in full with no pulse is a beautiful sound. That section makes my heart waltz. Good name for it – Heart Waltz.

Here is another, quite different framing of Patterns 22 – 26 and beyond, using this same ensemble of voices. I was experimenting with combinations of patterns with syncopated rhythms. This songset begins with Patterns 12 and 18 with the long tone of 19 thrown in for drone effect. In this version, the laggy viola got to play the dotted quarter patterns, so you can feel the drag effect it has on the tempo flow. I love it! It locks into a groove with soft edges. So this recording is longer but just as interesting and beautiful as the one before. Here we move beyond Patterns 22 – 26 and through a bit of the “rogue” Pattern 35 and land finally on another pair of patterns that I love together – 44 and 45. These two patterns are 6 pulse phrases and thus have the waltzy feel again.

Be prepared for some longer repetitive sections. If you feel agitated with the repetition, breathe and listen more deeply or more gently, lightly. Then, when you can do both at once – voila! – a new layer of clarity.

 

The Pulse: Is it Necessary?

As I listened to the recording from the April 15th “In C” playshop at Motorco, the high C pulse that plays throughout became an unpleasant interference. The tone seemed to create an aural haze through which I had to p-ear to hear the underlying song of the patterns. Granted, I played the pulse too loudly in places; even so, the idea of ditching the pulse altogether is now up for consideration.

Most every recording of “In C” starts with that high shiny eighth note pulse. But this sound was not part of the original composition, nor is it included as a pattern in the score. The story of how the pulse came to be starts with the origins of “In C” itself. I mentioned in an earlier post that Terry Riley was using tape recorded loops to make collages of sound. He found the technique when a French sound engineer hooked two tape recorders together. As one tape recorder plays a recorded tune, the other tape recorder records the tune. The tape is stretched across the heads of both recorders so that the newly recorded tape is fed back into the original playing recorder. The result is an accumulation of the original tune in different phase relationships to itself. Riley called this technique the “time-lag accumulator.” He used the technique in performance for years, which made him an early pioneer of the sampling and looping used by electronic musicians today. Because his ear brain is so curious, Terry started composing a piece that would create the same type of phase relationships in real time with an instrumental ensemble. Then “In C” got on the bus with him. When the musicians gathered to rehearse the patterns of “In C” in a time lagged manner, each keeping their own pace, it didn’t quite work out.
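A digital caricature of that two-recorder setup is simply a delay line with feedback: each pass of the signal comes back a little later and a little quieter, piling up in new phase relationships with the original. A minimal sketch, with made-up settings:

```python
# Simulate the time-lag accumulator as repeated, decaying echoes of a signal.
import numpy as np

def time_lag_accumulator(signal, delay_samples, feedback=0.6, repeats=5):
    """Return the signal plus `repeats` echoes spaced `delay_samples` apart."""
    out = np.zeros(len(signal) + delay_samples * repeats)
    gain = 1.0
    for i in range(repeats + 1):
        start = i * delay_samples
        out[start:start + len(signal)] += gain * signal
        gain *= feedback                  # each pass comes back a bit quieter
    return out

sr = 44100
tone = np.sin(2 * np.pi * 261.6 * np.arange(0, 1.0, 1 / sr))   # one second of C4
echoed = time_lag_accumulator(tone, delay_samples=sr // 3)     # ~333 ms between passes
```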

From Robert Carl’s Terry Riley’s ‘In C’:

“Pauline Oliveros remembers that Riley assumed the work would be easy, but he quickly found out that it was more difficult than he imagined. The major stumbling block was rhythm; as soon as the divergence of modules began, it became difficult to maintain a common tempo or metric reference point, and the work fell apart. At this point, Reich made a suggestion:

Well, it was in rehearsal, and the piece moves along pretty quick. And he (Riley)…wants everybody together, and they’re playing whatever pattern their playing but they’re locked into the same eighth note. And that did not always work. There were often at least ten people playing, and the room was fairly reverberent, and so sometimes people were slipping and sliding around the eighth note unintentionally, as a mistake. So, once a drummer always a drummer, I said we kind of need a drummer here, but since drums would be inappropriate, what about use the piano, so Jeane played some high Cs just to keep us together, and Terry said “Lets give it a try” or something like that, and we tried it and ‘voila’ everyone was together.

And so the Pulse was born.”

The nature of the human hearing mechanism, the phasing of reverberant acoustics, and each individual musician’s placement in the space make playing “In C” accurately and consistently a daunting task. For live musicians the pulse IS necessary for keeping the group in “time-lag” together. Understandable! However, the ensembles in Ableton are not subject to the constraints of the human body in performance. Once the pattern has been notated in the clip slot, the midi instrument will play it exactly the same and exactly in time. I can build in a little swing or have them play more “loosely,” but there is no slippage in relation to the tempo. I feel that the steady underpinning of the Ableton ensembles could provide the necessary grounding that the acoustic musicians need. Fifty years later, with electronic voices playing along with acoustic musicians, might the pulse be redundant?

Robert Carl argues that because the pulse has been present from the very first performance and in most subsequent performances and recordings of “In C,” it has become an integral part of the text of the piece. As in other types of oral traditions, all of the “retellings” of “In C” over the past fifty years have sealed the place of the pulse. He calls the pulse “one of the most important defining features of the work.” He goes on to explain:

“…the pulse is a steady, unvarying eighth-note texture which provides a clear rhythmic anchor… It is thus a sort of neutral ‘grid’ backdrop against which…the modules may unfold.”

The Ableton instruments provide a similar rhythmic grid, albeit not a neutral one, and I am still feeling the pulse could be replaced by the Ableton ensembles. Carl goes on:

“…because of its pitch, not only does it give the work its title, but it references every resultant harmonic combination, always including C. One cannot ignore the harmonic content of the pulse, no matter how subliminal it may become.”

OK- As you can see, Robert Carl is an eloquent spokesperson for the pulse. It feels true that the pulse not only shapes the harmonic content of “In C”, but also is an important element in the oral tradition that comes from fifty years of playing and listening to it. The pulse is the beginning of most every recording and performance of “In C” we have ever heard. To the person who has heard the piece on numerous occasions, starting the performance with Pattern 1 would not sound like “In C” at all. All of this has given me pause… for the idea of eliminating the pulse completely. While I still plan to experiment with playing “In C” without the pulse, I will make the decision in each performance situation based on considerations of harmonics and on input from participating musicians as to the need for an additional rhythmic anchor.

Reference:

Robert Carl, Terry Riley’s In C, Oxford University Press, 2009.

Meditating with Xopher

This week the Universe said to me, “Jude, you need to get your ears out of Ableton and out into the world.” So the User Profile Service service failed at log in and my computer is in the shop. Alright, then, no playing with “In C” for a while. On to other things I want to spend time with. I am so immersed in my own sculpting of this piece that I am in danger of losing perspective by getting too close in.

So, after dropping the computer off at Intrex, rehearsing with Jody Cassell for our ADF School Target Grant Program, and shampooing the carpets, I headed over to Durham Central Park to meditate with Xopher. Christopher “Xopher” Thurston has been a great inspiration to me both musically and spiritually since we met playing with the Triangle Soundpainting Orchestra. Xopher has shown me the basics of sound reinforcement, counseled me about playing live and was, thankfully, my sound man when I first played original tunes through a sound system at The Pinhook a few years ago. In addition to being a sound engineer, Xopher is an in-demand bass man and a Buddhist teacher in the Dharma Punx tradition. He has been gathering a group of us together in the leaf shelter at Durham Central Park for meditation since summer 2013.

Tonight four of us sat under the starry sky with Jupiter and the moon shining brightly above. Xopher led us through a body scan and then we settled in and opened up to the huge space we inhabit both within and without. I enjoy meditating outside because the environment is so distracting – just like life. It is such a great practice to observe the movement of awareness from breath to perception to story to waking and back to breath. Xopher rings a bell and gives appreciation for our time and attention. We stretch and move on into our respective evenings.

Xopher tells me he cued up “In C” following a local punk show and the rapid eighth note pulse that begins the recording turned a lot of heads in the bar. We talked about how “In C” moves and breathes like an organism. I would love to do an attunement at Motorco one Sunday afternoon. X thinks that would be possible. We also talked about how he goes about tuning speakers in the venues where he works and how he has met some sound engineers who can listen to speakers and tell you which frequency to adjust on a parametric equalizer, just by ear.

Then Xopher told me how he came to play bass and that he played in symphonic orchestras in college. He attended a Land Grant school in Georgia, which meant the arts departments did a lot of community outreach. One night, the orchestra had a gig in Rome, GA, pop. approximately 30,000. The orchestra would play in the local armory, which – as it turned out – had been skillfully treated acoustically. As the orchestra played, they could hear perfect sevenths and ninths popping up in the room. These harmonics were not part of anyone’s score; they were being elicited in the room itself by the composition and voices. Xopher said he knew this was possible, but this was his only experience with this phenomenon.

This story reminded me of an experience I had last year that has been shaping my ideas about my sound practice. We were in Griffith Theatre at Duke where Alexander McCall Smith was accepting the Duke LEAF award. The theatre was packed and abuzz with people chatting excitedly. A man was on stage playing the kora (a 21 string harp-lute from West Africa) to honor McCall Smith and the light he has shone on Africa. Our seats were about 3/4 of the way up sort of in the middle. I sat and listened to the wash of human voices with the kora tones floating over them. The sounds merged together in my ear body and I started humming low and slow to myself. I reached a pitch that resonated more powerfully than the other pitches I had hummed to this point. Then moving beyond that pitch, the resonance dropped. So I went back to the resonant pitch and hummed it over and over to myself. It vibrated deep into my chest and I wondered what could happen if I had amplification. This tone seemed to be the resonant frequency of this room, these people talking, and the tones of the kora all meeting together.

Since I had this experience, I have played two soundscapes in rooms full of talking people. The first performance suffered from unusual room acoustics and poor speaker placement. The second one was more successful with improved speaker placement, a rectangular room and the addition of Steve Cowle’s sax and flute. I heard it myself, and I heard from others who were present, that harmonics were singing in the room. Some heard chanting, some heard sweeping high tones. Now I am interested in orchestrating this type of aural experience with greater intention.

My plan is to couple room analysis with frequency spectrums to heighten the resonance amongst “In C”, the musicians, the listeners and the room being played.

This should be fun and a challenge!

“In C” and Ableton Live

In the performance instructions for “In C,” Terry Riley lays out a fluid foundation to guide the players. The directions read like suggestions and gentle admonitions: “The tempo is left to the discretion of the performers. Extremely fast is discouraged.” “It is important not to hurry from pattern to pattern…” “The ensemble can be aided by the means of an eighth note pulse played on the high C’s of a piano or mallet instrument.” In addition, Riley’s instructions allow for improvised percussion, amplification and electronic instruments. The tone of the text invites and encourages (me, hee hee) us to dive into the mix and try some things on!

This work is usually played by an ensemble of musicians live in an acoustic space. When he talks about “In C”, Riley emphasizes ensemble playing and the integrity of the ensemble. His instructions encourage freedom and deep listening as the means for creating ensemble. But what does ensemble mean when the voices are a group of digital instruments in Ableton Live?

An ensemble is made up of strong, distinct individual voices that join together in a common creation. When I listen to an ensemble, I want to hear each voice AND I want to hear the “voice” of the common creation. Unlike an orchestra or chorus, the ensemble isn’t working toward a blended single voice. Especially in a piece like “In C,” the choice of voice and timbre that brings in each new phrase will shape the melodic and rhythmic movement of the work in performance. Attention must be paid to each phrase and how the entrance of each voice affects the whole of the work.

With this in mind, I spent several months auditioning voicings in Ableton. Ableton Live is an amazing digital audio workstation that allows me to call upon any instrument/sound/synth as a voice in my ensemble. Ableton Live was developed by Ableton AG, a Berlin-based music software company founded in 1999, as a platform for creating, recording AND performing music using instruments, audio and midi effects. It has gone through 9 upgrades since its inception. I have been working with Ableton Live 8 for three years creating music and soundscapes for performance and installation. (To hear samples of my work, go to Soundcloud and look for DeJacusse.) For “In C,” I knew I wanted the voices to cover the sonic spectrum from 60 Hz to 18 kHz. (I will explain the reason for this in a moment.) Using the spectrum analyzer (one of the many audio effects tools in the Ableton toolbox), I assessed each voice for its presence on the sonic spectrum, and listened for a pleasing blend of timbre when all the voices played an individual phrase together.

Over several months, thirteen voices emerged as the current ensemble for the piece. Two percussion voices – one a more traditional drum kit and the other a world percussion kit – will emphasize the rhythmically interesting patterns. A grunge electric bass and an ABS electric bass cover the 70 Hz to 130 Hz range. The grunge bass has a buzzy sustain that adds an interesting texture in the low range. Some pizzicato strings, staccato strings and a ceramic plate EP round out the voices in the percussive pool with strong attacks and weaker sustains. For the longer tones, I chose woodwinds, a jazz organ, brass, mallets, ascension choir and harpsichord. The harpsichord has a high end buzzy finish that complements the grunge electric bass low end buzz. The spectrum analyzer indicates that these voices give full coverage of the sonic spectrum. And they sound pleasing to me as I play with the overlapping patterns. The voices may change in the future, but I am happy with what I have right now.
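For anyone who wants to run a similar coverage check outside of Live’s Spectrum device, here is a hedged sketch: render each voice to its own WAV file (the file names below are stand-ins), see which rough octave bands each one actually occupies, and flag any band between 60 Hz and 18 kHz that no voice covers.

```python
# Which octave-ish bands does each rendered voice occupy, and is any band
# between 60 Hz and 18 kHz left uncovered by the whole ensemble?
import numpy as np
from scipy.io import wavfile

bands = [(60, 125), (125, 250), (250, 500), (500, 1000),
         (1000, 2000), (2000, 4000), (4000, 8000), (8000, 18000)]

def occupied_bands(path, threshold=0.05):
    sr, data = wavfile.read(path)
    if data.ndim > 1:
        data = data.mean(axis=1)                 # fold stereo to mono
    spectrum = np.abs(np.fft.rfft(data.astype(float)))
    freqs = np.fft.rfftfreq(len(data), 1 / sr)
    spectrum /= spectrum.max()
    return {b for b in bands
            if spectrum[(freqs >= b[0]) & (freqs < b[1])].max(initial=0) > threshold}

voices = ["grunge_bass.wav", "harpsichord.wav", "woodwinds.wav", "mallets.wav"]
covered = set().union(*(occupied_bands(v) for v in voices))
print("uncovered bands:", [b for b in bands if b not in covered])
```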

I am paying close attention to the sonic spectrum of the voices for several reasons. Since I plan to play this piece with other musicians this year, I want to be able to back out the voices in Ableton that would sonically interfere with and muddy the contributions of the live instruments. In addition, I am studying acoustics and psychoacoustics in order to explore the rich sonority that will emerge when a variety of voices in a variety of acoustic spaces play this piece.

Here is a short sampling of the voices in Ableton that I have chosen thus far. In this recording you will hear each voice individually and then hear them layered together as they play Pattern 17.