Author's note: This is the version of this article that appeared in Sound On Sound magazine, which, for space reasons, was much shorter than the author's original.
Musicians & What They'll Be Playing
Controllers Of The Future
Published in SOS January 2006
The New Interfaces for Musical Expression conference has been running for five years, and is a great place to see and discuss new ideas that may provide the musical controllers of the future. SOS was in Vancouver to learn more...
by Paul D Lehrman
Where are the musical instruments of tomorrow going to come from? Surely, now that every computer and game console has more musical capabilities than an entire synth studio of 10 years ago, there should be a flood of new electronic instruments coming from factories worldwide. But making new instruments is risky: you have to design them, build them, and perhaps hardest of all, teach people how to play them, and not lose your shirt in the process.
Fortunately, there are plenty of people in universities and research facilities who are employing new, cheaper technologies, and are hard at work thinking up and building the next several generations of electronic music controllers. How do you find these people? One way is to go to their conference: New Interfaces for Musical Expression, or NIME. This year's NIME, held at the University of British Columbia in Vancouver, was the fifth, and it offered plenty to see for the musicians and scientists who showed up from all over the Americas, Europe, and Asia. There are some very creative people out there—and some of them are really out there.
The tone of the conference was set by Bill Buxton, a Canadian who has been one of the pioneering forces in both computer music and video for the last 25 years, and who was one of the first to propose the concept of 'gesture controllers' as musical instruments at another conference in Vancouver over 20 years ago (see the 'Digicon 83' box below). At the start of his keynote speech he threw up a slide of an old Revox reel-to-reel tape deck and proclaimed, "This is the enemy."
The problem with electronic music concerts historically, he said, is that someone would walk onto a stage, push a button on a tape deck, and walk off. Today, people do the same thing with laptops. "If you're sitting at a computer and typing, why should I care?" he lamented. If we're going to a live performance, he opined, we want to see the performer doing something interesting to create the music. "The goal of a performance system," he stated, "should be to make your audience understand the correlation of gesture and sound."
Another keynote speaker was Don Buchla, who has been actively developing new electronic instruments for four decades. He presented a comprehensive history of electronic performance instruments, including many less well-known devices like the Sal-Mar Construction. Built in the late 1960s by composer Salvatore Martirano, it resembled a giant switchboard surrounded by 24 ear-level speakers, and had almost 300 switches so sensitive they could be operated by brushes.
Buchla also discussed a number of his own designs, some of which made it to market and some didn't. One of these was an EEG-to-control-voltage converter that responded to alpha waves. "I did a performance in Copenhagen," he recalled, "but while I was up on stage, I found I couldn't generate any alpha waves, so I didn't make a sound for 15 minutes. It was the longest silence I've ever heard."
The three-day conference was packed with stuff to do and see: there were three dozen papers and reports, five large interactive sound installations, and four demo rooms showing a wealth of gadgetry. There were also jam sessions, which bore more than a passing resemblance to the cantina scene in Star Wars: Episode IV, and each night there was a concert.
The presentations were primarily about new one-of-a-kind instruments and performance systems, as well as new ways of thinking about performance. Some of the instruments were variations on conventional instruments. For example, Dan Overholt of the University of California Santa Barbara showed his Overtone Violin, a six-string solid-body violin with optical pickups, several assignable knobs on the body, and a keypad, two sliders, and a joystick where the tuning pegs usually go. In addition, an ultrasonic sensor keeps track of the instrument's position in space. It's played with a normal violin bow, but players wear a glove containing ultrasonic and acceleration sensors which allow them to make sounds without ever touching the strings. The whole thing is connected to a wireless USB system so performers don't have to worry about tripping over cables.
Then there was the Self-Contained Unified Bass Augmenter, or SCUBA, built by researchers at Stanford University. It starts with a tuba, adding buttons and pressure sensors to the valves. The sound of the instrument is picked up by a mic inside the bell, and sent through various signal processors like filters, distortion, and vibrato, which are user-configurable and controllable. The processed sound is then piped through four bell-mounted speakers and a subwoofer under the player's chair.
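The idea of a user-configurable chain of processors, as in SCUBA, can be illustrated with a minimal sketch. This is not the Stanford team's code; the one-pole filter, tanh distortion, and parameter values are illustrative assumptions, and a real system would work on a live audio stream rather than a list of samples.

```python
import math

def lowpass(samples, alpha=0.2):
    """One-pole low-pass filter: each output leans toward the new input."""
    out, y = [], 0.0
    for x in samples:
        y += alpha * (x - y)
        out.append(y)
    return out

def distort(samples, drive=4.0):
    """Soft-clip distortion via tanh waveshaping."""
    return [math.tanh(drive * x) for x in samples]

def chain(samples, effects):
    """Run the mic signal through a user-chosen sequence of effects."""
    for fx in effects:
        samples = fx(samples)
    return samples

# A unit impulse through low-pass then distortion
processed = chain([1.0] + [0.0] * 7, [lowpass, distort])
```

Because each effect is just a function from samples to samples, reordering or dropping a processor is a matter of editing the list, which is roughly the kind of configurability the SCUBA researchers describe.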
A hotbed of development for new musical toys has been the Media Lab at Massachusetts Institute of Technology. One of the better-known of these is the Beatbug, a hand-held gadget with piezo sensors that respond to tapping. A player can record a rhythmic pattern and then change it while it plays using two bendable 'ears'. An internal speaker makes players feel they are actually playing an instrument. But Beatbugs work best in packs, as Israeli-born Gil Weinberg, who is an MIT graduate and now director of a new music-technology program at Georgia Tech, demonstrated in a new system called 'iltur'. In this system, multiple Beatbugs are linked to a computer through Max/MSP software, and produce various sounds from modules in Propellerhead Reason. The Max program allows interaction and transformation of the input patterns, encouraging the players to bounce musical ideas off each other.
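The kind of pattern transformation iltur performs can be sketched in miniature. The real system runs in Max/MSP and is far richer; this toy stand-in just rotates a tapped rhythm and occasionally mutates a step (loosely echoing what bending an 'ear' might do), and both operations are assumptions for illustration.

```python
import random

def transform(pattern, bend=0.5, rng=None):
    """Return a varied copy of a binary rhythm pattern (1 = hit).

    Rotates the pattern one step, then with probability `bend`
    flips a single step. A stand-in for iltur's Max/MSP logic,
    not a reimplementation of it.
    """
    rng = rng or random.Random(0)
    out = pattern[-1:] + pattern[:-1]          # rotate one step
    if rng.random() < bend:
        i = rng.randrange(len(out))
        out[i] = 1 - out[i]                    # mutate one hit
    return out
```

Feeding each player's pattern through a transformation like this before it reaches the next player is one simple way to make the system 'answer back' rather than merely echo.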
Some of the presentations used interfaces borrowed from completely different fields to produce musical sounds. Golan Levin, also recently of MIT and now at Carnegie Mellon University in Pittsburgh, Pennsylvania, described in his keynote speech his famous 'Dialtones: A Telesymphony'. In this project, some 200 members of the audience have their cell phones programmed with specific ringtones and are told where in the hall to sit. A computer performs the piece by dialling the phones' numbers in a pre-programmed order.
Describing himself as 'an artist whose medium is musical instruments' while admitting that he can't read or write a note of music, Levin told the audience, "Music engages you in a creative activity that tells you something about yourself, and is 'sticky'—you want to stay with it. So the best musical instrument is one that is easy to learn and takes a lifetime to master." He also made the point that, "The computer mouse is about the narrowest straw you can suck all human expression through."
As an example of a means of performing music that many non-musicians can master, Levin showed his 'Manual Input Sessions' project. Using an old-fashioned overhead projector and a video projector, the player of this system makes hand shadows on a screen, while a video camera analyses the image. When the fingers define a closed area, a bright rock-like object filling the area is projected, and when the fingers open, the rock 'drops' to the bottom of the screen. As it hits, it creates a musical sound whose pitch and timbre are proportional to the size and the speed of the falling rock. It's fascinating to watch.
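The mapping Levin describes, from a falling rock's size and speed to a musical event, can be sketched as a simple MIDI-style mapping. The note ranges, pixel thresholds, and the choice that bigger rocks sound lower are all illustrative guesses, not Levin's actual parameters.

```python
def rock_to_note(area_px, speed_px_s,
                 min_area=100, max_area=10000,
                 low_note=36, high_note=84):
    """Map a shadow-rock's area to a MIDI pitch (bigger = lower)
    and its impact speed to a velocity. All ranges are assumptions."""
    area = min(max(area_px, min_area), max_area)
    frac = (area - min_area) / (max_area - min_area)
    pitch = round(high_note - frac * (high_note - low_note))
    velocity = min(127, max(1, int(speed_px_s / 8)))
    return pitch, velocity
```

A tiny pebble hitting fast would produce a high, loud note; a large boulder drifting down, a low, quiet one, which matches the intuition of the piece even if the real mapping differs.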
Music By Any Other Name
Some of the systems shown at NIME didn't require the 'player' to use any hardware at all. A group from ATR Intelligent Robotics in Kyoto, Japan, showed a system for creating music by changing one's facial expressions. A camera image of the face is divided into seven zones, and a computer continuously tracks changes in the image, triggering different MIDI notes in response.
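The zone-tracking idea behind the ATR system can be sketched as follows. The seven-zone vertical split, the brightness-difference threshold, and the note assignments are assumptions for illustration; the actual system's image analysis is certainly more sophisticated.

```python
def zone_triggers(prev_frame, frame, threshold=12.0, base_note=60):
    """Split a grayscale frame (list of pixel rows) into seven vertical
    zones, compare each zone's total brightness with the previous frame,
    and return MIDI note numbers for zones that changed enough.
    Zone count, threshold, and note mapping are illustrative."""
    notes = []
    width = len(frame[0])
    for z in range(7):
        lo, hi = z * width // 7, (z + 1) * width // 7
        cur = sum(px for row in frame for px in row[lo:hi])
        old = sum(px for row in prev_frame for px in row[lo:hi])
        n = len(frame) * (hi - lo)
        if abs(cur - old) / n > threshold:
            notes.append(base_note + z)
    return notes
```

Run once per video frame, this turns a raised eyebrow or a grin into a handful of note-on events, one per region of the face that moved.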
And then there was 'Bangarama: Creating Music With Headbanging', from Germany. This extremely low-budget project uses a guitar-shaped plywood controller. Along the neck are 26 small rectangles of aluminum arranged in pairs, which are used to select from a group of pre-recorded samples, most of them guitar power chords. Players wrap aluminum foil around their fingers so that moving them along the neck closes one of the circuits. The headbanging part consists of a coin mounted on a metal seesaw-like contraption, which is attached to a Velcro strip on top of a baseball cap. When players swing their heads forward, the metal piece under the coin makes contact with another metal piece in front of it, closing a circuit, which triggers the sample. Moving your head back up breaks the circuit, and ends the note.
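The headbanging switch amounts to simple edge detection: start the sample when the coin contact closes, stop it when it opens. A minimal sketch of that logic, with the 'on'/'off' events standing in for whatever note or sample messages the real Bangarama software sends:

```python
class HeadbangTrigger:
    """Edge-detect the head-mounted switch: report 'on' when the
    circuit closes and 'off' when it opens, ignoring repeats."""

    def __init__(self):
        self.closed = False

    def update(self, contact):
        """Feed one sensor reading (truthy = circuit closed);
        return 'on'/'off' on a state change, else None."""
        if contact and not self.closed:
            self.closed = True
            return 'on'
        if not contact and self.closed:
            self.closed = False
            return 'off'
        return None
```

Feeding a stream of readings like 0, 0, 1, 1, 0 yields events only at the two transitions, so a sustained forward nod holds the power chord rather than retriggering it.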
There was much, much more at NIME, but the last item I have room to mention here is McBlare, a MIDI-controlled bagpipe designed by Roger Dannenberg of Carnegie Mellon University as a humorous side-project. Based around a genuine set of bagpipes, McBlare incorporates a computer-controllable air compressor which will precisely match the breathing and arm pressure of a human piper, and a set of electrically controlled mechanical keys on the 'chanter' pipe. Under the control of an old Yamaha QY10 sequencer, McBlare could not only do a convincing imitation of a real piper, but could create frenetic, complex riffs no human could (or would ever want to) play. At the end of Dannenberg's performance, a wag in the audience (OK, it was me) asked him, "Why are bagpipers always walking?" and straight away, he responded correctly: "To get away from the sound!"
The NIME conference wasn't the biggest, or the longest, but it was certainly among the most informative conferences I've ever attended. As happens at any good meeting of like-minded creative types, I made new friends, heard new theories and concepts, saw some amazing performances, and most importantly, left with my head buzzing with things to try out in my own work. The NIME conference is not for everybody—but all of us who work with music and electronics will be hearing from the people who were there in years to come.
Sound On Sound, Media House, Trafalgar Way, Bar Hill, Cambridge CB3 8SQ, UK.
Email: email@example.com | Telephone: +44 (0)1954 789888 | Fax: +44 (0)1954 789895
Copyright 2006 © Paul D. Lehrman and SOS Publications Group. All rights reserved.