COMPEL Omeka Dev

Browse Items (868 total)

  • A string quartet amidst an ambisonic sea of sonified rat neurons.

    Pathways, bursting was written for the Spektral Quartet with support from a NewMusicUSA Project Grant and the Logan Center for the Arts at the University of Chicago.  It is part of a long-term, multi-work creative project that has grown out of my collaboration with Tahra Eissa, a neuroscience graduate student at the University. Tahra’s lab puts rat brains on tiny electrode arrays, stimulates them and studies their behavior, with the goal of better understanding epilepsy in humans. More information is available here.

    I took an interest in her research because I have epilepsy myself (thankfully it’s under control), and I’ve wanted to creatively engage with it for quite some time. All of the electronic sounds in pathways, bursting bear some relationship – from straightforward to complex – to the neuron data. The pulses of white noise, for instance, come from direct sonification, while fluttering sine tones come from using it to manipulate pitch. More complex procedures are used in the realm of ambisonic spatialization, where sounds vibrate erratically in 3D space. I’ve assembled these diverse sounds into textures that often become harrowingly dense, even when the electronics are not particularly loud. This certainly is part of my intention: after all, this project is about overloads of electrical activity in the brain. Portions of the electronic track are uncomfortably loud, overwhelming, and even violent. But part of my motivation for this project has always been to communicate aspects of my own experience with the condition, as it has been quite harrowing at certain points in my life. I’m also motivated to communicate this on behalf of others with the condition. So instead of mediating the experience of the electronics, I’ve set up the quartet as a lyrical foil, particularly in the latter portion of the piece.

    When the electronics reach their loudest, most explosive point, the quartet reenters following over 5 minutes of silence, struggling against the overwhelming electronics. The quartet continues to push back, in fits and starts, as the electronics subside. Their jagged, erratic polyrhythms slowly become more regular, and they eventually achieve a much more peaceful space, one that I think realistically counterbalances the violence of the electronics. But in this final passage, there’s a slightly brightened consonance that bolsters the quartet’s role as a relieving counterweight to the harrowing electronics, one that may even provide an affirmative message in the end – even as it resolves to the justly tuned odd partials of B flat (5/7/9/11/13).

    The Spektral Quartet premiered pathways, bursting at the University of Chicago on May 5, 2017, with a repeat performance on May 12 at Constellation Chicago.
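    The data-to-sound mappings described above (spikes rendered as white-noise pulses, firing rate bending sine-tone pitch) can be sketched in a few lines of Python. This is a minimal illustration of the two techniques, not the composer's actual patch; the spike times, firing rates, and mapping constants are all hypothetical.

```python
import numpy as np

SR = 44100  # sample rate in Hz

def sonify_spikes(spike_times, dur=2.0, pulse_ms=20):
    """Direct sonification: each spike event becomes a short white-noise pulse."""
    out = np.zeros(int(SR * dur))
    pulse_len = int(SR * pulse_ms / 1000)
    env = np.hanning(pulse_len)  # soft attack/decay so pulses don't click
    for t in spike_times:
        start = int(t * SR)
        if start + pulse_len <= len(out):
            out[start:start + pulse_len] += env * np.random.uniform(-1, 1, pulse_len)
    return out

def rates_to_sine(rates, base_hz=220.0, dur=2.0):
    """Parameter mapping: instantaneous firing rate bends a sine tone's pitch."""
    n = int(SR * dur)
    r = np.interp(np.linspace(0, 1, n), np.linspace(0, 1, len(rates)), rates)
    freq = base_hz * (1.0 + 0.5 * r / r.max())  # higher rate, higher pitch
    phase = 2 * np.pi * np.cumsum(freq) / SR
    return np.sin(phase)

# hypothetical spike data, stand-ins for the lab recordings
spike_times = [0.10, 0.12, 0.15, 0.50, 0.52, 1.30]
firing_rates = np.array([1.0, 5.0, 2.0, 8.0, 3.0])
audio = sonify_spikes(spike_times) + 0.5 * rates_to_sine(firing_rates)
```

The first function is the "straightforward" end of the mapping spectrum the note describes; the second is one step toward the more complex data-driven control of parameters such as spatial position.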
  • A cappella choral work constructed in the computer from fragments.

    This piece is included on the Six Projects CD/LP released on the Innova Recordings label in 2015.

    Noopiming is a single movement a cappella choral piece.  The title of the work is also the text.  Noopiming is an Ojibwe word, which translates as “in the North, inland, in the woods”. All of the vocalizations in the piece are created using various elements of this single word.

    The piece has as its primary aesthetic underpinning some of my own personal impressions of the Boundary Waters Canoe Area Wilderness.  I have been doing canoe trips in the BWCA my entire life and have often felt a sense of connection with the natural world there.  It’s a feeling of being connected to something ancient and primordial – something darkly beautiful that seems to draw me in, while at the same time, if I’m not mindful, could swallow me whole, leaving no trace.

    Noopiming was created using my fragment-based compositional process.  I started by recording a group of eight singers performing various musical gestures and textures.  The recording was done at the St. Paul Seminary Chapel.  This material was then edited down into a palette of hundreds of short audio recordings, which I then layered, combined and endlessly manipulated to create the finished work.  There was no actual score for the piece.  Instead, I created two lists of verbal instructions for the singers.  One was for non-specifically pitched material and the other for specifically pitched material, using only three chords, which could interlock in ways that I liked.

    Adding the visual component came after the music was completed.  I searched for a photographer who had a significant body of work focusing on the BWCA, and whose work had the right aesthetic to match the music.  I came upon the work of Dale Robert Klous and felt it had the right kind of primordial nature vibe about it.  I approached Dale about allowing me to use some of his images, and not only did he agree, but he even went out and shot some additional material for the project.  I think his work is a great fit for my piece and I can’t thank him enough for his collaboration on the project.  Once I had the images, I synchronized them with the music in a way that reinforces the overall emotional/aesthetic impact of the work.
  • This piece is included on the Six Projects CD/LP released on the Innova Recordings label in 2015.

    De Novo was created in 2013 for multimedia artist, Lynn Fellman. Lynn strives to communicate discoveries in human evolution and genomic science through art and narrative. The title of this composition comes from Lynn’s work associated with research being done on the Neanderthal genome. De Novo literally means “something new” and refers to genetic mutations that all humans and their extinct cousins, the Neanderthals, are born with. The overall form of this composition was strongly influenced by input Lynn provided regarding our current understanding of the human genome and how it has developed over time.

    This is another of my fragment-based compositions, where all of the performances were recorded separately and then heavily edited and used as source material for the final compositional construction in the computer. It was my very great pleasure to work with the phenomenal Dave King on this project. He is an exceptionally gifted drummer, who showed up to the session in the middle of a snowstorm at night (God bless him). Heather is of course a true stalwart, who I’ve been fortunate enough to work with on a number of occasions. She is a first-call Contemporary Concert music percussion guru and a local treasure here in the Twin Cities.

  • Lizamander was written for Elizabeth McNutt. It is the second in a series of works for solo instruments and Max/MSP, the first of which was called Gerrymander, written for the clarinetist, F. Gerard Errante. The focus of both of these works is on interactivity and live audio processing. The computer captures material played by the solo instrument during the performance and uses that material (as well as some pre-recorded sounds) to generate a syncopated rhythmic accompaniment, while adding various effects to the sound of the flute. Since the computer is constantly “listening” to the flute, the tempo is somewhat flexible, which allows the performer considerable interpretive freedom.  Lizamander relies heavily on pitch tracking throughout the piece, not only for score following, but also for sample triggering, contrapuntal harmonization, and other “intelligent” effects. It relies even more heavily (as does Gerrymander) on having an extraordinary performer!
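    The Max/MSP patch itself isn't reproduced here, but the core idea described above, pitch tracking driving "intelligent" triggers, can be sketched with a naive autocorrelation estimator. The function names, frame size, and trigger rule below are illustrative assumptions, not the actual Lizamander patch.

```python
import numpy as np

def track_pitch(frame, sr=44100, fmin=80.0, fmax=1000.0):
    """Naive autocorrelation pitch estimate for one audio frame."""
    frame = frame - frame.mean()
    # full autocorrelation, keep non-negative lags only
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(sr / fmax), int(sr / fmin)  # lag range for fmin..fmax
    lag = lo + np.argmax(ac[lo:hi])
    return sr / lag

def maybe_trigger(freq, target_hz, tol=0.03):
    """Fire a sample when the tracked pitch lands near an expected note,
    roughly the kind of rule a score follower might apply."""
    return abs(freq - target_hz) / target_hz < tol

# a synthetic 440 Hz test frame standing in for live flute input
sr = 44100
t = np.arange(2048) / sr
frame = np.sin(2 * np.pi * 440.0 * t)
f0 = track_pitch(frame, sr)
```

A real score follower would smooth estimates over several frames and handle octave errors; this sketch only shows the single-frame idea.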
  • TaleSpin was commissioned by the Montague/Mead Duo (Philip Mead, Piano & Stephen Montague, Electronics). It is a short musical fantasy, written in a quasi-romantic style. It has something of a program, too, whose subject may be evident from some of the section titles: Telltale, Hot Topic, Blissful Ignorance, Morning After Songs, Still Spinning, and Picking up the Pieces. Many of the electronic sounds are processed sounds recorded inside the piano, including stopped and bowed notes, plucked and struck notes, prepared notes, etc. In the outer fast sections, it is similar to a piano 4 hands piece, with the computer responsible for the middle two “hands.” The computer part is relatively simple and accompanimental, however, while that played by the performer – the outer two hands – is soloistic and quite virtuosic.
  • Don’t Look Now for String Quartet and Electronic Sounds was commissioned by the Stony Brook Contemporary Chamber Players, who gave the world premiere at Merkin Concert Hall in New York City on April 28, 1991.  The piece has since received numerous performances, including at the International Computer Music Conference in Montreal and the SEAMUS Conference in Urbana, Illinois, both in 1991.  It was subsequently performed throughout Europe and South America by the Smith Quartet, who made this recording at the Electroacoustic Music and Recording Studios of the Royal College of Music in London.

    Composers have long been fascinated by the “special effects” obtainable on traditional instruments, but have tended to use them sparingly, in part because many of them are very soft and/or difficult to control and produce reliably.  In this piece, I have used the electronic medium to amplify and extend some of the effects which can be produced on stringed instruments, such as col legno battuto, tremolando sul ponticello, snap pizzicato, left hand pizzicato, harmonic glissandi, etc.  In most cases, the effects are introduced first in the acoustic ensemble, then developed further in the electronic part.  Because of this, and also because the sounds on the tape are almost exclusively derived from recordings of real stringed instruments, it should not always be apparent to the listener whether a sound is coming from the quartet or from the speakers, and hence the title, Don’t Look Now.

    Many of the sounds in the electronic part were originally recorded by the cellist, Barry Sills, of Austin, Texas.  I then digitally processed these sounds at The University of Texas Electronic Music Studios in a variety of ways, using MIT’s CSound, Mark Dolson’s Phase Vocoder, and some of my own software.  The sounds were then loaded into Ensoniq EPS and Kurzweil K2000 samplers for real-time performance.
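    A phase vocoder proper is beyond a short sketch, but the general idea of time-stretching recorded string sounds can be illustrated with a simpler granular overlap-add approach. This is a stand-in for the CSound and Phase Vocoder processing described above, not the composer's actual software; the grain and hop sizes are hypothetical.

```python
import numpy as np

def granular_stretch(x, factor, grain=1024, hop=256):
    """Naive granular time-stretch: read grains from the input at hop/factor,
    write them to the output at hop, and normalize the window overlap."""
    win = np.hanning(grain)
    out_len = int(len(x) * factor) + grain
    out = np.zeros(out_len)
    norm = np.zeros(out_len)
    n_grains = (out_len - grain) // hop
    for i in range(n_grains):
        src = int(i * hop / factor)  # read position advances slower/faster
        if src + grain > len(x):
            break
        out[i * hop:i * hop + grain] += x[src:src + grain] * win
        norm[i * hop:i * hop + grain] += win
    norm[norm == 0] = 1  # avoid dividing by zero where no grain landed
    return out / norm

# stretch a test tone to twice its duration
x = np.sin(2 * np.pi * 440 * np.arange(8000) / 44100)
y = granular_stretch(x, 2.0)
```

Unlike a true phase vocoder, this approach does not align grain phases, so it can introduce amplitude modulation artifacts; it is shown only to convey the read-slow/write-fast principle.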
  • Hardware-based analog modular synthesizer music; a combination of vintage and contemporary analog modular synthesizers.

    Breathing Voltages is a purely electronic piece of music, which was created in 2014.  It uses as its source material sound generated on a combination of old and new analogue modular synthesizer components.  I chose this title because there is a kind of “breathing” character to the music, which is generated through the application of continually varying control voltages articulating long amplitude and filter envelopes.  It is also somewhat evocative of wave action.

    The piece employs my fragment-based compositional process, wherein discrete musical gestures and textures are recorded and then used as source material for the creation of the finished work in the computer, through the use of extensive audio editing and signal processing.  One can hear shades of minimalism in the piece, and it also makes fairly extensive use of chance operations and what one could call controlled randomness, though always refined in the crucible of my own relentless drive to create aesthetically satisfying musical experiences.  It is structured in three clear sections, which segue into one another.

    This is a piece which celebrates its electronic character, and in particular the sound of analog (as opposed to digital) synthesizer timbres.  It never tries to evoke the timbres of traditional acoustic instruments.  In addition, I would consider this piece to be more on the beautiful side, though my aesthetic dark side does make its presence felt from time to time.  I was striving to remain somewhat more tonal, at least with most of the primary musical elements.  For example, there is a decidedly tonal pentatonic pitch set that is presented as randomly generated melodic material at the heart of the second section.
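    The randomly generated pentatonic melodic material mentioned above was produced in hardware, presumably by constraining random voltages to a fixed pitch set. A software sketch of that kind of controlled randomness follows; the pitch set and note counts are illustrative, as the actual set used in the piece is not specified here.

```python
import random

# An illustrative pentatonic pitch set (MIDI note numbers, C major pentatonic);
# the set actually used in Breathing Voltages is not documented here.
PENTATONIC = [60, 62, 64, 67, 69]

def random_melody(n_notes, octaves=2, seed=None):
    """Controlled randomness: free random choices, but constrained to a
    tonal scale, so the result reads as melodic rather than chaotic."""
    rng = random.Random(seed)
    pool = [p + 12 * o for o in range(octaves) for p in PENTATONIC]
    return [rng.choice(pool) for _ in range(n_notes)]

melody = random_melody(16, seed=1)
```

Because every choice is drawn from the same scale, any two runs differ in contour but share the same tonal color, which is the effect the passage above describes.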
  • At times minimalist and pattern driven. At times warm and buzzy. Hardware-based electronic music.

    Implied Movement is an electronic music piece which I completed work on in February of 2015. It is included on the Six Projects album, which is available on Innova Recordings. The piece was created using a combination of vintage and contemporary analogue modular synthesizers and a vintage Minimoog D. All of the material was recorded into a computer, where the final composition was assembled using my fragment-based compositional process. The piece has as its primary organizational underpinning a series of short repeating ostinatos, which are constantly evolving in one way or another. This is significant, as it is a bit of a departure for me. I tend to avoid loops like the plague. I don’t even like using repeat signs in my traditionally notated scores. I’ve done my share of copying and pasting within MIDI sequenced projects over the years, but even in that environment, I tend to try and play all the way through on each part most of the time. Not only does this encourage improvisational “comping”, but it also has the added benefit of infusing the individually performed parts with a lot of variation in (MIDI) velocity and pressure, which results in constant slight variations in volume and timbre. That’s what I’ve done a lot of in the past in my MIDI-sequenced pieces, but the sequencing in this piece is accomplished using hardware-based sequencers. A different world entirely.

    Using short repeating patterns that evolve also lends itself quite naturally to minimalism, the influence of which is clearly evident in the piece. There are also some chance operations which crop up in the form of the application of random voltage. This is particularly evident near the beginning of the piece at about 0:45, when the first quick notes are heard.

    There is a lead synthesizer melodic part that makes an obvious entrance at about 3:30, which was created using the Minimoog, played through a Big Muff distortion box. The listener might also notice sustain-y, distorted electric guitar-like gestures in this piece, the first of which shows up at about 3:15. These were performed on my Moog Model 12 modular synthesizer using the Big Muff and a device called a Talk Box. The Talk Box is a small metal box with a speaker in it that sends the sound up a flexible plastic tube. The tube is placed in the mouth, which is positioned in front of a microphone. The sound comes through the tube into the mouth, where it is shaped in real time and picked up by the mic. I didn’t use this device to make the synth “talk”, but rather to shape and filter the sound with my mouth. Both the Big Muff and the Talk Box are traditional electric guitar effects boxes, which is why my Moog playing comes off as being at least evocative of the electric guitar.

    The short repeating patterns were a lot of fun to work with, perhaps because I had so assiduously avoided their use in the past. The end result reminds me in places of 1970s vintage Tangerine Dream. Actually, the whole piece has a kind of “old-fashioned” feel about it. But then again, I’m no spring chicken. I really love the warm old buzzy analogue sound of this piece. Even though it makes use of strictly repeating machine-like sequences being generated by electronic instruments, it still retains a human, and in my opinion, “musical” feel.
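    The constantly evolving ostinatos described above were generated by hardware sequencers nudged by random voltages. A rough software analogue of that idea can be sketched as follows; the pattern data, mutation probability, and drift amounts are all hypothetical.

```python
import random

def evolving_ostinato(pattern, n_cycles, mutate_prob=0.2, seed=None):
    """Repeat a short pattern, occasionally mutating one step per cycle --
    a software analogue of a sequencer whose steps drift under random voltage."""
    rng = random.Random(seed)
    pattern = list(pattern)
    out = []
    for _ in range(n_cycles):
        if rng.random() < mutate_prob:
            i = rng.randrange(len(pattern))
            pattern[i] += rng.choice([-2, -1, 1, 2])  # small pitch drift
        out.append(list(pattern))  # snapshot this cycle's version
    return out

# a hypothetical four-step pattern (MIDI note numbers), cycled eight times
cycles = evolving_ostinato([60, 63, 67, 70], n_cycles=8, seed=7)
```

Because mutations accumulate, later cycles stay recognizably related to the opening pattern while never strictly looping, which is the quality of "constant evolution" the note describes.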
  • Live/studio hybrid composition using hardware-based electronic instruments.

    I completed work on this project in the Winter of 2016. It’s a fragment piece, which uses as its source material a recording of the live electronic music performance I did at my Six Projects record release party, and also some recordings of the rehearsals for that performance. I deliberately chose to NOT use a computer in this live performance, which was something new for me. I’m calling the piece “North Loop” for several reasons: 1.) The live performance took place in a part of downtown Minneapolis called the North Loop, 2.) though the piece is not particularly loop-intensive, it does make use of some loops, and 3.) I live and work in Minneapolis – a Northern city.

    The attached video is of the live performance. The music you’re hearing is the actual finished piece, which was created later in a computer, using audio from the performance and from rehearsals for that performance. There are multiple layers of audio, which have been highly edited and mixed to create the finished product. I edited the video to more or less line things up to match as best I could, but it’s really just an approximation. There was a lot of improvisation involved in the performance, and the finished music actually has more simultaneous layers than I would have been able to pull off live as a solo performance. Still, it’s nice to see the video with the music and I also like that we have some documentation of Paul Christian’s visual projection work. He was using the Processing software environment to create the imagery. I was feeding him about eight separate audio lines that he was working with live.
  • Experimental electronic drone piece created from material generated on an ARP 2500 modular synthesizer.

    Chamber of Mechanisms is a fragment-based electronic music piece, which uses audio generated on a vintage ARP 2500. This is a very rare analog modular synthesizer. They didn’t make very many of them and only a very small number still exist. The particular instrument I’m using for this project is installed in the electronic music lab at the University of Minnesota, Minneapolis. The instrument was there back in the early ’80s when I was a composition student at the U. of M., but it was completely out of commission. I’m so glad it didn’t wind up on the scrap heap, as has happened to so many of these exotic old instruments. It is now functioning and usable, and I was very excited to be able to spend some quality time with this beauty. Many thanks to Michael Duffy who heads up the electronic music lab at the U. of M. for allowing me access to it. I spent an afternoon working with the instrument and recording a bunch of material that I then took back to my studio. There I transferred the audio into my computer and began work on constructing the finished piece. It sounds somewhat different than anything else I’ve ever done. This is no doubt due to the fact that the instrument itself has its own very particular sonic characteristics.