Author's note: This is the original manuscript of this report. It was written for Sound on Sound magazine, but for space reasons they had to publish a much shorter version. You can read the Sound on Sound article here.


Photos (L-R): Dan Overholt, Bernardo Escalona Espinosa, Gil Weinberg, Bernardo Escalona Espinosa, Juan Pablo Cáceres
Tomorrow's Virtuosi & What They'll Be Playing
SIDEBARS:
The Tools that Make This Possible
What Do You Do With All This Stuff?
The Last Time I Saw Vancouver: Digicon 83
The Concerts at NIME
A report from the fifth New Interfaces for Musical Expression conference, in Vancouver, Canada, May 2005

by Paul D. Lehrman
Photos, except as noted, by Bernardo Escalona Espinosa

Where are the musical instruments of tomorrow going to come from? Now that every computer and game console has more musical capabilities than an entire average synthesizer studio of ten years ago, it seems that there should be a flood of exciting new electronic musical instruments coming out of the factories of America, Europe, and Asia. But making new instruments is a risky business: you have to design them, manufacture them, bring them to market, and perhaps hardest of all, teach people how to play them—and not lose your shirt in the process. That’s why the number of "alternative" musical controllers from major companies that have stayed in production for more than a couple of years can be counted on one hand, leaving enough fingers left over to play "Mary Had a Little Lamb," and from the companies’ standpoint, not much has changed.

Fortunately, there are plenty of inventive people in universities and research facilities who are taking advantage of new and cheaper technologies, and are hard at work thinking up, building, and performing with the next several generations of electronic music controllers. How do you find these people? One way is to go to their conference: "New Interfaces for Musical Expression," or "NIME." This year's NIME was the fifth, and it was held on the gorgeous seaside campus of the University of British Columbia in Vancouver. It wasn't a large conference—about 180 participants—and it wasn't cheap: registration cost up to CDN$475 per person, which didn't include accommodations or airfare. But the musicians and scientists from all over the Americas, Europe, and Asia who did show up got an eyeful and an earful. There are some very creative people out there—and some of them really out there.

Music and video artist Bill Buxton on stage at NIME, 'then' (above) and 'now' (below).
Keynotes from History

The tone of the conference was set by Bill Buxton, a Canadian who has been one of the pioneering forces in both computer music and video for the last two and a half decades, and who was one of the first to propose the concept of "gesture controllers" as musical instruments at another conference in Vancouver over 20 years ago (see Sidebar 3: The Last Time I Saw Vancouver). At the start of his keynote speech he threw up a slide of an old Revox reel-to-reel tape deck and proclaimed, "This is the enemy."

The problem with electronic music concerts historically, he said, is that someone would walk onto a stage, push a button on a tape deck, and walk off. Today, people do the same thing with laptops. "If you’re sitting at a computer and typing, why should I care?" he lamented. "With digital control, we have our machines taking over our performances. But that’s throwing out the baby with the bathwater." If we’re going to a live performance, he opined, we want to see the performer doing something interesting to create the music. "The goal of a performance system," he stated, "should be to make your audience understand the correlation of gesture and sound."

Another keynote speaker was Don Buchla, who has been actively developing new electronic instruments for four decades, and whose original analog synthesis systems are still cherished by many composers. He presented a comprehensive history of electronic performance instruments, starting with the Musical Telegraph designed in 1876 by Elisha Gray, who he said may have been the real inventor of the telephone, "but Alexander Graham Bell beat him to the patent office." Among the less-familiar devices he showed was the SalMar Construction, built in the late 1960s by composer Salvatore Martirano, which looked like a giant switchboard with almost 300 switches so sensitive they could be operated by brushes, and surrounded by 24 ear-level speakers.

"The purpose of a controller is to transfer a performer’s intent into sound," he said. "What we as designers are concerned with is linkage. Acoustic instruments have no linkage, but keyboards, even acoustic ones, do. MIDI allowed us to get a gesture controller from one manufacturer, and link it with a sound vocabulary from another."

The Tools That Make This Possible
As you may gather, there’s a lot going on at the college level in the world of new electronic musical instruments. One of the driving forces behind this movement is that not only do computers continue to become exponentially more powerful and less expensive, but also the tools for building custom performance systems have gotten cheaper, more plentiful, and easier to use.

On the hardware side, a host of different sensors that respond to various physical actions is now readily available; developed for high-tech industrial, security, and medical applications, they are easily adaptable to musical ones. Among these are force-sensing resistors (FSRs) such as you would find in MIDI drum pads; flex sensors that change resistance as you bend them, like the fingers in a Nintendo Power Glove; piezoelectric elements, which send a voltage when struck, flicked, or vibrated; tilt switches; magnetic-sensing reed switches; airflow sensors; accelerometers; strain gauges; and infrared and ultrasonic distance sensors. Slightly more expensive, but within the budgets of many education and research labs, are electromyography (EMG) sensors, which detect muscular tension. Meanwhile, the op-amps needed to make these sensors generate meaningful data are dirt-cheap and easy to find, and diagrams and tutorials for hooking these devices together are published on the Internet by their manufacturers.
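To give a concrete sense of what "making these sensors generate meaningful data" involves on the software end, here is a minimal Python sketch of the usual chain: smooth a noisy raw reading, then scale it to a 0-127 MIDI controller value. The calibration numbers and the simulated readings are invented for illustration; a real system would read the values from a microcontroller's analog inputs.

```python
def smooth(previous, raw, alpha=0.2):
    """One-pole low-pass filter: tames the jitter in a raw sensor reading."""
    return previous + alpha * (raw - previous)

def to_midi_cc(value, lo=30.0, hi=990.0):
    """Scale a calibrated sensor range (here, imaginary 10-bit ADC counts) onto 0-127."""
    value = min(max(value, lo), hi)
    return round((value - lo) / (hi - lo) * 127)

# Simulated force-sensing-resistor readings, as a microcontroller might report them
readings = [32, 180, 400, 720, 950, 610, 300, 45]
level = readings[0]
for raw in readings:
    level = smooth(level, raw)
    print(f"raw={raw:4d}  smoothed={level:6.1f}  CC value={to_midi_cc(level)}")
```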

On the software side, most of the presenters used one of two musical "toolkits," both of which are the brainchildren of an American mathematician and musician named Miller Puckette. Max, for Macintosh, is the older and more sophisticated of the two, and was written originally in the 1980s when Puckette was at the Paris research institute IRCAM. Max, now called Max/MSP, is a commercial program, frequently updated by and available from the San Francisco company Cycling '74 (one of the sponsors of this year's NIME). Since the mid-'90s Puckette has been on the faculty of the University of California San Diego, and a few years ago he released a freeware program for Mac, Windows, Linux, and IRIX called Pure Data, or "Pd," which is effectively a simplified version of Max, without any fancy user-interface tools or formal tech support—but at a price even university students can easily afford.

Buchla discussed a number of his own instrument designs, some of which made it to market and some didn’t. There was Lightning, a pair of batons with infrared sensors that could be mapped to a wide variety of musical gestures; Thunder, a touch-surface that measured independent finger-position and pressure; Wind, a disc that looked something like an electric turtle, with a breath sensor that responded to both positive and negative (sucking) pressure, a tongue sensor, eight holes for fingers, and sensors for rotational and absolute position, which sadly didn’t make it past the prototype stage; and Rain, based on an African rainstick, which Buchla said was designed to make "sonic clouds and densities, faster than MIDI" but which he never finished. He also described an EEG-to-control voltage converter that he designed some years ago, which responded to alpha waves. "I did a performance in Copenhagen," he recalled, "but while I was up on stage, I found I couldn’t generate any alpha waves, so I didn’t make a sound for 15 minutes. It was the longest silence I’ve ever heard."

Oddly enough, only one of Buchla's instruments was actually at the conference: a Marimba Lumina, which was given a fine demonstration by Joel Davel. It turns out, as Davel told me later, that all the gear Buchla was planning to bring up from his home in San Francisco was confiscated at the border. His car, full of musical instruments and carrying three other men, aroused the suspicions of a customs officer. "The customs guy didn't believe we were coming to a conference," Davel said. "He figured we must be a band, and we were going to play some gigs, and we didn't have a work permit for that. So to make sure we didn't play in a bar somewhere, he held onto all our stuff." The Marimba Lumina was supplied by the Audities Foundation, in Calgary, Alberta, on the right side of the border.

Old-new and New-new Instruments

The three-day conference was packed with stuff to do and see: there were three dozen papers and reports, along with 18 informal "poster" presentations, five large interactive sound installations, and four demo rooms showing a wealth of gadgetry and software. There were also jam sessions, which bore more than a passing resemblance to the cantina scene in Star Wars: Episode IV, and each night there was a concert in the University's recital hall (see Sidebar 4: The Concerts at NIME).

The presentations were primarily about new one-of-a-kind instruments and performance systems, as well as new ways of thinking about performance. Some of the instruments were variations on conventional instruments. For example, Dan Overholt of the University of California Santa Barbara showed his "Overtone Violin," a six-string solid-body violin with optical pickups, several assignable knobs on the body, and a keypad, two sliders, and a joystick underneath the scroll where the tuning pegs usually go. In addition, a miniature video camera is mounted at the top of the neck, and an ultrasonic sensor keeps track of the instrument’s position in space. It’s played with a normal violin bow, but the player wears a glove containing ultrasonic and acceleration sensors which allow him to make sounds without ever touching the strings. The whole thing is connected to a wireless USB system so the performer doesn’t have to worry about tripping over cables.

Then there was the "Self-Contained Unified Bass Augmenter," or "SCUBA," built by researchers at Stanford University. It starts with a tuba, adding buttons and pressure sensors to the valves. The sound of the instrument is picked up by a microphone inside the bell, and sent through various signal processors like filters, distortion, and vibrato, which are user-configurable and -controllable. The processed sound is then piped through four bell-mounted speakers and a sub-woofer under the player’s chair.
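As a rough illustration of the kind of user-controllable processing chain described above (my own sketch, not the SCUBA team's code), the following Python/NumPy snippet applies a soft-clipping "distortion" and a simple vibrato, implemented as a slowly modulated delay, to a synthetic stand-in for the bell-microphone signal. All parameter values are invented.

```python
import numpy as np

SR = 44100
t = np.arange(SR) / SR
signal = 0.5 * np.sin(2 * np.pi * 110 * t)   # stand-in for the bell-microphone feed

def distort(x, drive=8.0):
    """Soft clipping: tanh saturation, rescaled so full-scale input stays full-scale."""
    return np.tanh(drive * x) / np.tanh(drive)

def vibrato(x, sr=SR, depth_ms=3.0, rate_hz=5.0):
    """Read the signal through a delay line whose length wobbles sinusoidally."""
    n = np.arange(len(x))
    depth = depth_ms / 1000.0 * sr
    delay = depth * (1.0 + np.sin(2 * np.pi * rate_hz * n / sr))   # 0..2*depth samples
    idx = np.clip(n - delay, 0, len(x) - 1).astype(int)
    return x[idx]

processed = vibrato(distort(signal))
print(len(processed), float(processed.min()), float(processed.max()))
```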

Q: What Do You Do With All This Stuff?
A: Controllers & Games
The market for new musical instruments is not an easy one to break into. Very few of the instruments shown at NIME will ever make it into commercial production. But there is a strong market for the ideas being presented: "An opportunity for everyone in this room," as one speaker explained.

The speaker was Tina "Bean" Blaine, a percussionist, vocalist, inventor, "musical interactivist" (at Interval Research, the exciting but short-lived Silicon Valley company founded by Microsoft billionaire Paul Allen), and student of African drumming, who is now on the faculty of the Entertainment Technology program at Carnegie Mellon University.

Blaine's presentation was called "The Convergence of Alternate Controllers and Musical Interfaces in Interactive Entertainment," which translates to "Everything you're doing, the game industry can use." There are a number of categories of games that have interactive musical components, she explained, and they are growing all the time. The most common are beat-matching games, starting with the late-'90s arcade game BeatMania, and evolving into Konami's Dance Dance Revolution and Namco's Donkey Konga. But there are also games that require one of a variety of specific actions by the player in response to musical cues, like Sega's Samba de Amigo and Hasbro's ZingIt, and those in which the player pretends to play a musical instrument, like Konami's Guitar Freaks and Koei's Gitaroo-Man, or sings, like Harmonix Music Systems' Karaoke Revolution. And there are other games that make music when the player moves his body or hands in free space, like Sony's Groove and Discovery's Motion Music Maker.

Blaine went on in some depth about what makes a game successful, describing a "regime of competence" which allows the player to simultaneously experience both frustration and pleasure, and motivates him to move on to progressively more difficult levels. She also discussed the positives and negatives of multiplayer games, those that require physical immersion, and those that are played out in front of an audience—which can create an experience similar to performing music.

But her most salient point was that none of the game manufacturers develop their own controllers: all of their technology is licensed from someone else. And who better to develop that technology than the musicians and scientists who are tinkering away in their labs trying to build the next generation of electronic musical instruments?

Perry Cook, of Princeton University, who started his career as a singer and then got his doctorate in Electrical Engineering, was in one of the demo rooms showing his outrageous controllers for synthesizing the human voice, many of which also started as conventional instruments. There were two "Squeezevoxen": Lisa, based on an accordion, and Maggie, based on a concertina. ("Bart" and "Santa's Little Helper" were the names of earlier models, showing how much The Simpsons has infiltrated high-tech culture.) Lisa's piano-style keyboard has a pressure sensor next to it to control pitch and overtones, and the 64 buttons on the opposite side trigger phonemes, words, and phrases. Four bend sensors control formant filters, and a small trackpad simulates vowels by changing filter resonance. The bellows, of course, are the equivalent of the lungs and control volume, and a vent button can be used to stop the vocalization and generate breath noises. Maggie is quite a bit simpler, but also impressive, and it has internal speakers, which helps contribute to the illusion that the instrument is really singing.

Another keyboard-based instrument of Cook’s was the attractively-named VOMID, or Voice-Oriented Melodica Interface Device, in which a modified Korg MicroKontrol keyboard, with 37 notes, 16 buttons, 16 potentiometers, a joystick, a ribbon controller, an extra touchpad, and an accelerometer, is attached to the two-way breath sensor of an old-fashioned Melodica.

A hotbed of development for new musical toys has been the Media Lab at Massachusetts Institute of Technology. One of the more well-known of these is the Beatbug, a hand-held gadget with piezo sensors that respond to tapping. A player can record a rhythmic pattern, watch it displayed on internal LEDs, and then change it while it plays using two bendable "ears." An internal speaker makes the player feel he or she is actually playing an instrument. But Beatbugs work best in packs, as Israeli-born Gil Weinberg, who is a graduate of the MIT program and now director of a new music technology program at Georgia Tech, showed in a new system called "iltur." In this system, multiple Beatbugs are connected to a computer through Max/MSP software (see Sidebar 1: The Tools That Make This Possible), and produce a variety of sounds from Reason modules. The Max program allows a high degree of interaction and transformation of the input patterns, which encourages the players to bounce musical ideas off each other, and it is designed to let both novices and experienced musicians improvise together with equal comfort levels.

If you've ever wanted a set of MIDI wind chimes, you'd be interested in the "Swayaway" from a group at New York University. Ambient sound from the room is continuously picked up by a microphone, analyzed and processed by Max/MSP, and converted to MIDI, which is sent to a module containing a number of pre-programmed sounds. Seventeen vertical plastic stalks, each with a wooden weight at the top, are attached to a base containing flex sensors. The action of the sensors is translated into MIDI controllers which affect the sound, and also cause green LEDs in the base to brighten in proportion to the amount of bend of each rod. Played by waving one's hands among the stalks, it's a soothing and involving experience.
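The per-stalk mapping might look something like the following sketch (my own guess, not the NYU group's code): each stalk's flex-sensor reading becomes a MIDI continuous-controller value, and the same normalized bend amount sets the brightness of that stalk's LED. The controller numbers and sensor range are invented.

```python
def stalk_to_midi_and_led(bend_raw, stalk_index, raw_min=50, raw_max=900):
    """One stalk: raw flex-sensor reading -> (CC number, CC value, LED brightness 0..1)."""
    bend = min(max(bend_raw, raw_min), raw_max)
    norm = (bend - raw_min) / (raw_max - raw_min)
    cc_number = 20 + stalk_index          # one controller per stalk: CC 20, 21, 22, ...
    cc_value = round(norm * 127)
    led_duty = norm                       # LED brightens in proportion to the bend
    return cc_number, cc_value, led_duty

for i, raw in enumerate([60, 400, 880]):  # three stalks bent by different amounts
    print(stalk_to_midi_and_led(raw, i))
```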

Lest it be said that English researchers in electronic musical instruments are without humor, John Bowers and Phil Archer, of the University of East Anglia, presented their concept of "Not Hyper, Not Meta, Not Cyber but Infra-Instruments:" reductionist musical devices that could fit into one of several dubious categories. For example, using the philosophy "Take an Instrument and Make it Less," they showed how one can use a smashed violin to make music: scrape the strings lengthwise or make them "thunk," and use the wooden shards as scrapers or beaters. Under the "Take Something Non-Instrumental and Find the Instrument Within" banner, they showed the "Victorian Synthesizer," which uses a battery, a bare wire, and a small speaker, set up so that scraping the wire along a surface causes the cone of the speaker to fluctuate in step with the contour of the surface. To enhance the effect, a number of metal washers are put inside the speaker to rattle.

Other categories they described included "Take Materials Partway to Instrumenthood" and "Build an Instrument But Include Obvious Mistakes." Other infra-projects they showed included the Strandline guitar, comprising strings, a pickup, a whammy bar, and a bunch of odd objects found on a beach at low tide; and the Percussive Printer, in which ball-point pens are mounted on motors removed from an inkjet printer so that they act like small drumsticks, and the motors are sent text information as if they were still inside the printer, but instead they wave the pens around and produce various bangs, rattles, and scratches.

Golan Levin of Carnegie Mellon University demonstrating his projector-based Manual Input Sessions project on stage.
Music By Any Other Name

Some of the presentations used interfaces borrowed from completely different fields to produce musical sounds. Golan Levin, also recently of MIT and now at Carnegie Mellon University in Pittsburgh, Pennsylvania, described in his keynote speech his famous "Dialtones: A Telesymphony," which had its first performance in 2001 at the Ars Electronica Festival in Linz, Austria. In this project, some 200 members of the audience have their cell phones programmed with specific ringtones and are told where in the hall to sit. A computer performs the piece by dialing the phones' numbers in a pre-programmed order. In addition, spotlights in the ceiling point to a seat location when a phone there is ringing. "There's an accuracy of plus-or-minus about a half second," Levin noted.

Describing himself as "an artist whose medium is musical instruments," while admitting he "can’t read or write a note of music," Levin told the audience, "Music engages you in a creative activity that tells you something about yourself, and is ‘sticky’—you want to stay with it. So the best musical instrument is one that is easy to learn and takes a lifetime to master." He also made the point, which had attendees nodding in agreement, that, "The computer mouse is about the narrowest straw you can suck all human expression through."

He next showed his "Manual Input Sessions" project. Using an old-fashioned overhead projector and a video projector, the player of this system makes hand shadows on a screen, while a video camera analyzes the image. When the fingers define a closed area, a bright rock-like object filling the area is projected, and when the fingers open, the rock "drops" to the bottom of the screen. As it hits, it creates a musical sound, whose pitch and timbre are proportional to the size and the speed of the falling rock. It’s fascinating to watch, and no doubt easy to learn.
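One plausible way to express that size-and-speed mapping in code (a sketch of the idea only, not Levin's implementation) is shown below; the constants, and the choice to send larger rocks to lower pitches, are my own assumptions.

```python
def rock_to_note(area_px, fall_speed_px_per_s,
                 min_area=200, max_area=40000, low_note=36, high_note=84):
    """Map the enclosed shadow's area and fall speed to a MIDI pitch and velocity."""
    area = min(max(area_px, min_area), max_area)
    size = (area - min_area) / (max_area - min_area)          # 0 = tiny, 1 = huge
    pitch = round(high_note - size * (high_note - low_note))  # bigger shape -> lower note
    velocity = max(1, min(127, round(fall_speed_px_per_s / 10)))
    return pitch, velocity

print(rock_to_note(1500, 600))    # small, fast rock: high pitch, loud
print(rock_to_note(35000, 150))   # large, slow rock: low pitch, quieter
```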

The Last Time I Saw Vancouver: Digicon 83
Bob Moog prepares to demonstrate the revolutionary new technology 'MIDI' in Vancouver, 1983.
Photo: Paul Lehrman
Vancouver, British Columbia, Canada's third-largest metropolis, may be the most beautiful city in North America. It is stunningly situated on flatlands between the Strait of Georgia, with its myriad islands large and small, and the Coast Mountains, some of whose peaks approach 5,000 feet.

It's one of the most international cities on the continent as well. It has always had a large Asian population, and the size of its Chinatown rivals San Francisco's. But in the 1990s the Asian population experienced a huge growth, fueled largely by the imminent handover of Hong Kong to China. Capital flowed in with the people: in 1996, 80% of all homes in the area worth more than $1 million were bought by Asian immigrants. Today the majority of the residents are of Asian descent.

In the same years, the film industry in Vancouver blossomed, in large part due to the weakness of the Canadian dollar against the US dollar, and also thanks to lower wages and less-strict work rules than in Hollywood. Not to mention the weather: when the temperature in Vancouver hits 27° C, which it did every day of the NIME conference, it’s considered a heat wave.

For me personally, Vancouver has long been a symbol of dramatic change in the music industry. I was there once before, in 1983, at a meeting billed as the First International Conference on the Digital Arts, or "Digicon," which was one of the most important events of its time. Before computers had taken over NAMM and MusikMesse, before movies had more digital actors than human ones, even before Compact Discs had replaced LPs in record stores, Digicon showcased many of the technologies that would revolutionize every part of our industry.

Artist Alvy Ray Smith showed and described the technology behind the first long-form CGI sequence ever included in a commercial film: the "Genesis Project" video from the recent Star Trek II: The Wrath of Khan. He described a machine his group at Lucasfilm was in the process of building for digital film editing and compositing, to be called "Pixar." The machine was never finished, but its principles were used to create Avid and every other non-linear video editing platform.

Andy Moorer, also of Lucasfilm, described the SoundDroid, a digital multichannel editing and mixing system that would make tape obsolete. Capable of recording almost two hours of eight-track audio, its price tag of $700,000 seemed like a bargain. The SoundDroid also never came out, of course, but Moorer went on to found Sonic Solutions, makers of the first high-end digital audio editing and mastering system, and thanks to Pro Tools and similar systems, the technology his group developed soon utterly changed audio production.
Who else was there? Bob Moog, Herbie Hancock, Bill Buxton (who first articulated the concept of "gesture controllers" at Digicon), Todd Rundgren, and pioneering computer composers Barry Truax and Herbert Brün.

And what else was there? The first Fairlight CMI with hard-disk recording, PPG’s Wave, Roland’s CompuMusic 800R six-voice hardware sequencer/synth, and pre-release models of the Yamaha DX7 and DX9. Oh yes: and a demonstration by Moog, which caused jaws to drop in astonishment and delight all through the lecture hall, of a brand-new technology called "MIDI."


You can read a full report on Digicon 83 at http://paul-lehrman.com/digicon.

Another borrowing, after a fashion, from the world of visual art was the "Scrubber" from a team at the Media Lab Europe in Dublin. Described as "a general controller for friction-induced sound," the device is based around an ordinary white-board eraser fitted with two tiny microphones and a force-sensing resistor. The system analyzes the sound created by the eraser as it rubs against a surface, and applies it to a wavetable or sample, playing it in granular fashion. The sound controls the sample's speed, volume, or other parameters, and the system can even detect whether the eraser is going backwards or forwards. One convincing demo used a garage door closing: it was cool to see its speed and direction being controlled in real time. Sadly, the Dublin Media Lab closed down the day after the authors submitted this paper, a victim of funding problems.
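A simplified version of that analysis, sketched in Python with NumPy (my own assumptions, not the Scrubber authors' code), follows the energy of the friction signal block by block and turns it into playback-speed and volume parameters for a granular player; the scaling constants are invented.

```python
import numpy as np

def block_rms(x, block=512):
    """RMS energy of the signal in consecutive blocks of samples."""
    pad = (-len(x)) % block
    frames = np.pad(x, (0, pad)).reshape(-1, block)
    return np.sqrt((frames ** 2).mean(axis=1))

def friction_to_params(rms, max_rms=0.3):
    """Rub harder or faster -> higher energy -> faster, louder grain playback."""
    level = np.clip(rms / max_rms, 0.0, 1.0)
    speed = 0.25 + 1.75 * level       # grain playback rate, 0.25x to 2x
    volume = level ** 0.5             # gentler curve so quiet rubbing is still audible
    return speed, volume

# Fake "friction" input: noise whose intensity swells and fades like one rubbing gesture
sr = 44100
friction = 0.3 * np.hanning(sr) * np.random.randn(sr)
speed, volume = friction_to_params(block_rms(friction))
print(speed[:4], volume[:4])
```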

Some of the systems shown didn’t require the "player" to use any hardware at all. A group from ATR Intelligent Robotics in Kyoto, Japan, showed a system for creating music by changing one’s facial expressions. A camera image of the face is divided into seven zones, and a computer continuously tracks changes in the image, so that any change in any of the zones results in a particular MIDI note being triggered. "The displacement in a region," said the presenter, "controls the flow of a particular synth parameter."
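A stripped-down sketch of the seven-zone idea might look like this (my own illustration, not ATR's code): split each grayscale frame into vertical strips, compare them with the previous frame, and report a MIDI note for any strip whose average change exceeds a threshold. The note assignments and threshold are invented.

```python
import numpy as np

ZONE_NOTES = [48, 50, 52, 55, 57, 60, 62]   # one (invented) note per zone

def changed_zones(prev_frame, frame, zones=7, threshold=6.0):
    """Split frames into vertical strips and report which strips moved."""
    strips_prev = np.array_split(prev_frame, zones, axis=1)
    strips_now = np.array_split(frame, zones, axis=1)
    triggered = []
    for i, (a, b) in enumerate(zip(strips_prev, strips_now)):
        if np.abs(b.astype(float) - a.astype(float)).mean() > threshold:
            triggered.append(ZONE_NOTES[i])
    return triggered

# Two fake 120x160 frames; the second one changes only on the right side of the face
prev = np.zeros((120, 160), dtype=np.uint8)
curr = prev.copy()
curr[:, 120:] = 40
print(changed_zones(prev, curr))   # -> notes for the zones that moved
```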

Niels Böttcher and his team from Aalborg University in Copenhagen described a work in progress in which they are attempting to use interactive music to connect strangers at a train station. "People waiting for trains in Denmark usually don't communicate with each other," said Böttcher, "and this has the potential to make them aware of the people across the tracks." Two video motion-tracking systems are focused on a 2- by 3-meter rectangular area on each platform, and movement detected in those areas is fed to a computer which produces the sound, which is then piped through speakers on both platforms. But Böttcher and his group found that the choice of sounds and controller mappings that would best engage people, and make them aware of each other, isn't an obvious one. They've experimented with having movement on the two platforms affect a single sound, but in different ways, as well as with having them produce sounds independently, and also with linking the two inputs so that movement on one platform by itself does nothing until there is also movement on the other. "You don't want to require people to make weird gestures to get interesting sounds," Böttcher said, "but at the same time you don't want to make the system so simple that they would get bored and just walk away."

Hands and Heads

Another work in progress is the HandySinger from ATR Labs and Nagoya University in Japan, which uses a hand puppet to control the tone, and more importantly, the emotional content of a sampled singing voice. A woman was recorded singing a Japanese nursery rhyme using four different types of voice character, which the researchers called "normal," "dark," "whisper," and "wet." They wrote software that could morph in real time from one recording to another. Then they constructed a hand puppet that had flex sensors on both sides of the thumb, forefinger, and middle finger, plus one on the back of the wrist, and two touch sensors on the tips of the thumb and middle finger that would sense when the puppet "clapped his hands." By mapping different combinations of sensor readings to different voice qualities, they were able to create a fairly convincing correspondence between the body language of the puppet and the emotional tone of the voice.
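The morphing itself can be thought of as a weighted mix of the four recordings, with the weights derived from the puppet's sensor readings. The sketch below (my own simplification, not the HandySinger code) normalizes the weights and sums the scaled samples; the sine waves merely stand in for the recorded voices.

```python
import numpy as np

def morph(voices, weights):
    """voices: dict name -> same-length sample array; weights: dict name -> float."""
    total = sum(weights.values()) or 1.0
    mix = np.zeros_like(next(iter(voices.values())), dtype=float)
    for name, samples in voices.items():
        mix += (weights.get(name, 0.0) / total) * samples
    return mix

# Fake one-second recordings standing in for the four voice characters
sr = 8000
t = np.arange(sr) / sr
voices = {name: np.sin(2 * np.pi * f * t)
          for name, f in [("normal", 220), ("dark", 110), ("whisper", 440), ("wet", 330)]}

# Puppet posture leaning toward "dark," with a touch of "whisper"
print(morph(voices, {"normal": 0.1, "dark": 0.7, "whisper": 0.2, "wet": 0.0})[:5])
```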

Bangarama

And then there was "Bangarama: Creating Music With Headbanging," from Aachen University in Germany. This extremely low-budget project uses a more-or-less guitar-shaped controller made out of thin plywood. Along the neck are 26 small rectangles of aluminum arranged in pairs, which are used to select from a group of pre-recorded samples, most of them guitar power chords. The player wraps aluminum foil around his fingers so that moving his fingers along the neck closes one of the circuits. The headbanging part consists of a 50-pfennig coin mounted on a metal seesaw-like contraption, which is attached to a Velcro strip on top of a baseball cap. When the player swings his head forward, the metal piece under the coin makes contact with another metal piece in front of it, closing a circuit, which triggers the sample. When the head goes back up, the note ends. One problem the designers had was that the mechanism would sometimes bounce on the downswing, creating false triggers, so they put in a timer that ignores any new note arriving less than 250 ms after the previous one. "We predicted that typical users would not headbang faster than 240 beats per minute," they said—and who could argue with that?
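That 250 ms bounce filter is easy to express in code. The sketch below is just the logic, in Python rather than whatever the Bangarama itself ran: a new head-bang is ignored if it follows the last accepted one by less than the minimum interval (250 ms, i.e. a 240 bpm ceiling).

```python
class BounceFilter:
    def __init__(self, min_interval_s=0.25):        # 250 ms ~= 240 bpm ceiling
        self.min_interval = min_interval_s
        self.last_trigger = None

    def accept(self, now_s):
        """Return True if a contact closure at time now_s counts as a real head-bang."""
        if self.last_trigger is not None and now_s - self.last_trigger < self.min_interval:
            return False                             # too soon: treat as switch bounce
        self.last_trigger = now_s
        return True

f = BounceFilter()
for t in [0.0, 0.1, 0.3, 0.52, 0.6]:
    print(t, f.accept(t))   # 0.1 and 0.52 are rejected as bounces
```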

NIME is not a trade show, and so there were few commercial products on display, but there was one that drew a lot of attention: Lemur, a marvelously versatile combination touch surface and LCD screen made by a French company called JazzMutant. Users can set up buttons, faders, knobs, and moving colored balls on the 12-inch surface, all of which can respond to multiple fingers simultaneously. It communicates with a host computer over Ethernet, and is compatible with any software that conforms to the Open Sound Control protocol. (MIDI users can use Max to translate between the two protocols.) Lemur comes with its own graphic editing software, which is especially important since it has no non-volatile memory. It looks like it could be a brilliant addition to any studio, although the price of $2500 will no doubt limit its market a bit. It should be shipping by the time you read this.
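For the curious, here is a rough sketch of an OSC-to-MIDI bridge like the one just mentioned, written in Python instead of Max and assuming the third-party python-osc and mido libraries (mido needs a backend such as python-rtmidi). The fader addresses, UDP port, and controller numbers are invented; a real Lemur's address layout depends on the template the user builds.

```python
import mido
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

midi_out = mido.open_output()                 # default MIDI output port

def fader_to_cc(address, *args):
    """e.g. OSC message '/fader1 0.42' -> MIDI control change 1, value 53."""
    number = int(''.join(ch for ch in address if ch.isdigit()) or 0)
    value = max(0, min(127, round(float(args[0]) * 127)))
    midi_out.send(mido.Message('control_change', control=number, value=value))

dispatcher = Dispatcher()
for i in range(1, 9):                         # eight hypothetical faders on the touch surface
    dispatcher.map(f"/fader{i}", fader_to_cc)

server = BlockingOSCUDPServer(("0.0.0.0", 8000), dispatcher)
server.serve_forever()                        # translate incoming OSC to MIDI until stopped
```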

No Humans at All

Finally, as a counterpoint to the human-played instruments, several presenters showed instruments that played automatically. One of them, although it had no relationship to the French product, was also called "LEMUR," but in this case the word is an acronym for "League of Electronic Musical Urban Robots." Eric Singer spoke for the New York City-based group who make musical robots equipped with small motors, cams, and various striking and spinning devices that vibrate, bang, scrape, pluck, slide, and spin, all under MIDI control. Most of the robots are small, but the centerpiece of the family is the ForestBot, which consists of 25 ten-foot vertical stalks with egg-shaped rattles at the end, which are slowly spun in eccentric orbits by motors in their bases. The rattles sway and knock into each other, while the rods bounce and hum. Singer had a couple of the robots present, but more impressive was a kinetic video of the whole family playing together at an installation built earlier this year at a college in Southern California.

And last but not least, thanks to Roger Dannenberg of Carnegie Mellon University, we were presented with "McBlare," a MIDI-controlled bagpipe. This was certainly a fun project for Dannenberg, who has been involved in the more serious sides of computer music for over 20 years. "The founder of the university, Andrew Carnegie, was born in Scotland," he explained, "and the school has its own tartan." In fact, The Tartan is the name of the school newspaper. So using the formidable resources of the university's School of Computer Science and Robotics Institute to make a robotic bagpipe player seemed eminently logical.

McBlare starts with a genuine set of Highland bagpipes. The team faced two mechanical tasks: building an air compressor that would precisely match the breathing and arm pressure of a human piper, and outfitting the "chanter" pipe with electrically controlled mechanical keys. The result looks like a hospital respirator hooked up to a soprano saxophone with solenoids. The chanter's keys had to respond at least as fast as a player's fingers, and in fact Dannenberg's team were able to make it perform even better than that: it can execute about 25 notes per second. Just as crucial a task was analyzing how a professional piper plays the thing—and the group did a great job with this too. Their robot, under the control of an old Yamaha QY10 portable sequencer, could not only do a convincing imitation of a real piper, but could then go several steps further into frenetic, complex riffs no human could ever—or would ever want to—play.
Dannenberg said that McBlare's first public performance was at the graduation exercises of the School of Computer Science. At the end of his demonstration some wag in the audience (okay, it was me) asked, "Why is it that bagpipers are always walking?" and Dannenberg immediately responded with the correct answer: "To get away from the sound."

The NIME conference wasn't the biggest or the longest I've ever attended, but it was certainly among the most intense and informative. With so many things going on, you didn't want to miss anything. As happens at any good meeting of like-minded creative types, I made new friends, heard many new theories and concepts, saw some amazing performances, and most importantly left with my head buzzing with things to try out in my own work and to pass along to my students. The NIME conference is not for everybody—but all of us who work with music and electronics will be hearing from the people who were there in years to come. •

All of the papers presented at NIME05, complete with illustrations,
as well as concert programs, videos, and other information,
can be found at hct.ece.ubc.ca/nime/2005.

NIME06 will be held at IRCAM in Paris, June 5-7, 2006. For details click here.

The Concerts at NIME

Performance events at computer-music conclaves are too often deadly boring affairs, as high-brow programmers and theorists show off their latest algorithmic compositions by pushing buttons on laptops and projecting lines of code. But at NIME, since the emphasis was on interactivity, the three evening concerts were anything but boring. By Bill Buxton’s standards regarding the correspondence between gesture and sound, all of the 18 works presented over three nights were at least interesting, and many had the crowd roaring their approval.

The first concert opened with an improvisation by six people, several thousand miles apart from each other. In the recital hall American composer Scott Gresham-Lancaster played a custom MIDI guitar while Japanese-American dancer Tomie Hahn (who had met her partner just the day before) interpreted. Meanwhile, on a video screen was the image of Pauline Oliveros, playing an accordion and shaking shells in a studio on the other side of the continent in Troy, New York, which she shared with dancer Olivia Robinson, who was wearing a decidedly odd-looking headdress made out of huge multicolored rubber strips. On another screen was an image from Marseilles, France: Jean-Marc Montera played a Corsican string instrument called the cittern and the back side of a stripped-down upright piano, while an unidentified dancer jumped and swooned around the room. Gresham-Lancaster later complained to me that the alignment of the video screens on the stage was not ideal for the dancers to see, and therefore react to, each other, and there was a 300 ms transmission delay among the locations, but it still made for some interesting sound and images.


The musical toys used by Giorgio Magnanensi (below) for Ligature

There was no uncertainty in the next piece, Ligature, by Giorgio Magnanensi, an Italian-born composer who now lives in Vancouver. Seated at the FOH console in the middle of the audience, and looking like a cross between Albert Einstein and Napoleon Dynamite, Magnanensi frantically manipulated a dozen modified electronic toys of the old "Speak & Spell" variety, creating an ungodly landscape of screaming, ripping "circuit-bent" sounds which were thrown around the hall by a 16-channel digital matrix mixer, made all the more hideous by the sounds' roots in (badly-sampled) human voices. The one glitch, when a motorized Furby with a tiny speaker in its stomach stopped moving, was adeptly overcome by the performer, who ripped open a wire-filled twist-tie with his teeth and shoved it into the critter's back, bringing it back to life. (He told me later a solder joint had failed, and he knew exactly where it was.) It was a masterful performance, and deserved its enthusiastic ovation.

The pace slowed down for The Appearance of Silence (The Invention of Perspective) by Laetitia Sonami (born in France, now resident in California), which featured a performance system she helped develop at the Amsterdam research center STEIM called the "Lady's Glove." Sonami has been working with the system, which contains a formidable variety of sensors allowing control of up to 30 simultaneous musical parameters, for 15 years, and it is now in its fifth version. Her performance, a sinuous, highly expressive mix of concrete and abstract sounds, was a shining example of what can happen when a performer has enough time to become a virtuoso on an instrument, no matter how unusual or complex—something that few of us, especially us adults, find we have the time to do.

Another collaboration between musician and dancer was performed the second night (immediately following McBlare’s brief but memorable set) by Luke DuBois, a faculty member at New York University who is also the author of Cycling ‘74’s Jitter software, and Turkish dancer Beliz Demircioglu, a graduate student at the same school. DuBois held a video camera while Demircioglu athletically moved around the stage, as a stylized image of her was projected on a video screen behind her. The music was created by a motion-capture system, with various musical parameters being controlled in real time by the dancer’s position in space and by specific physical gestures that the system was trained to recognize. It was an evocative performance, and the connection between dancer and music was, even by Buxton’s standards, very clear.

Another winner in the "gesture vs. sound" contest was Yoichi Nagashima of Hamamatsu, Japan (where last year’s NIME was held, and for which he served as chairman), a former designer for Kawai and now a researcher and educator, as well as a composer. His "Wriggle Screamer II" consists of two sensing systems. One looks like an empty picture frame, which is divided up into a grid of 13 vertical and three horizontal optical beams. As the performer put his hands and wrists through the frame, various sounds of harps, bells, strings, percussion, and orchestral hits emerged that reflected whether he was stabbing, poking, caressing, plucking, slapping, or karate-chopping the space. The second system uses muscle (EMG) sensors in his arms and legs to provide more complex control over the sounds, as he wriggled his fingers, rotated his wrists and elbows, and kicked up his heels. "I want...to express the sound that the body grates and the spirit to struggle," he wrote in the program notes for the evening—perhaps not in perfect English, but it certainly got the point across.

Finally, on the third night we heard the simplest, and in some ways the most effective piece of all: boomBox by Jamie Allen, another grad student at New York University. Allen’s only equipment was a featureless injection-molded hard travel case, like the kind you’d take your gear on the road with. But he subjected the thing to tortures you hope your gear never experiences. Inside the case are piezo, tilt, flex, and motion sensors, wired up to processing circuitry (encased in thick foam rubber) and a wireless transmitter. As he punched, kicked, slammed, rolled, flipped, and threw the case down on the ground, agonizing noise samples emerged from the speakers that made you instantly sorry for the poor box. The performance ended, to the great hilarity of the audience, with Allen kicking the thing into the second row of seats as it let out a final, exasperated grunt.

I would be remiss if I left out another NIME event: it wasn’t in a recital hall, but it could certainly fall under the heading of "performance art." After Friday night’s concert the organizers had scheduled a beach party, with drums and a bonfire and the promise of a fine sunset over the Strait of Georgia. Not far from the hall a wooden staircase led precipitously down a heavily-forested cliff to a small sand spit known as "Wreck Beach." Some of the participants descending the 200 or so stairs noticed a sign that said "Beach closed after dark," but said to themselves, "Hey, the organizers must know what they’re doing!" As they arrived, drum circles were forming, the sun was sinking, a bonfire was rising, and cold Canadian beer was emerging from bags of ice.

Not long afterwards, however, two uniformed gentlemen (not police, but Greater Vancouver Regional District Park Rangers, I found out later) appeared and politely told the assemblage to put out the bonfire. Everyone just as politely ignored them, so they tried to snuff out the fire by kicking sand on it, which didn’t work at all. (It should be mentioned that forest fires had been a serious problem in the Vancouver area last year, so the Rangers’ action wasn’t as arbitrary as some might have thought.) A few minutes later, however, a hovercraft appeared on the horizon, its powerful spotlight aimed squarely at the beach, and growing rapidly. When it arrived, its crew politely told partyers to move away from the bonfire, and doused it. I have no doubt it was the first time in history that a party at a computer music conference had to be broken up by the Coast Guard.


Copyright 2006 © Paul D. Lehrman. All rights reserved.