EAMIR Origin

EAMIR was created in 2007 by V.J. Manzo as a non-profit, open-source, non-commercial project in which anyone could download and develop interactive music systems to help facilitate musicianship. A few years later, a software development kit was built and a book was written to describe ways that non-programmers could easily develop and implement EAMIR applications for their own needs. Today, thanks to the generosity of the EAMIR community of educators, musicians, developers, and researchers, EAMIR remains a completely free, open-source, non-commercial resource that seeks to facilitate musicianship by making innovative advancements in interactive technology accessible. EAMIR, the Educational Association for Music Interaction Research, now offers grants and support to further help facilitate musicianship through technology. We are run 100% through donations, grants, and other non-commercial support.


The origin of EAMIR is described in the excerpt below from V.J.'s book Max/MSP/Jitter for Music, published by Oxford University Press:


Creating Tonal Adaptive Instruments

My background is not as a programmer, but as a musician. Before I taught at the university level, I taught K–12 music during the day and attended grad school classes at night. As a beginning teacher, one difficulty I encountered in my classroom was my lack of understanding regarding the diverse levels of musicianship my students would possess. In any given class, I would have students who had had many years of formal training sitting alongside students who had no formal music training at all. How could I address the needs of the advanced students without leaving the musically untrained students in the dark?

Another difficulty I encountered was the mainstreaming of students with disabilities, mental or physical, in my classes. How were these students going to play a scale on the keyboard, let alone understand one, if they had difficulty hitting the glockenspiel keys with a mallet? These concerns, I soon discovered, were common concerns among educators. I began to look toward technology for assistance.

It was during this time that I first thought of creating some adaptive instruments that would help bridge the gaps among the musicianship levels of my students. I wanted to separate the cognitive aspects of music-making from the physical actions of music-making, using technology to facilitate the latter. There were musical concepts like harmony, scales, and dynamics that I wanted to teach that I might never cover to the degree that I wanted if I continued to spend most of my class time explaining proper glockenspiel grip and technique.

EAMIR

The first adaptive instrument I used with my students was created in Max and used objects that tracked the location of a specific color from a video camera and generated numbers for the position of that color on the screen. The position of the color from left to right generated the numbers 0–127 from low to high, while the position of the color from top to bottom generated the numbers 0–127 from high to low. To produce a consistent color, I used a laser pointing device.
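
The original patch was built with Max's Jitter objects, but the color-to-numbers step is easy to sketch in ordinary code. Below is a minimal illustration in Python with OpenCV; this is my own approximation, not the original implementation, and the red-laser thresholds are guesses that would need tuning for a real camera:

    import cv2
    import numpy as np

    cap = cv2.VideoCapture(0)  # default webcam
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        # Rough mask for a bright red laser dot (threshold values assumed)
        mask = cv2.inRange(hsv, (0, 120, 200), (10, 255, 255))
        ys, xs = np.nonzero(mask)
        if len(xs) == 0:
            continue  # the color is not visible in this frame
        h, w = mask.shape
        x_val = int(xs.mean() / (w - 1) * 127)        # left..right -> 0..127
        y_val = 127 - int(ys.mean() / (h - 1) * 127)  # top..bottom -> 127..0
        print(x_val, y_val)  # the two control streams described above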

We will go into more depth about this patch in Chapter 16 when we discuss working with live video. In essence, the patch was largely based on taking the numbers 0–127 from the horizontal location of a moving color tracked from the camera and represented in Max, and filtering those numbers to diatonic scale degrees using the coll process outlined earlier; the patch mapped the numbers from the vertical location to determine velocity values.
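
In code terms, the coll step amounts to a lookup table that snaps every chromatic note number to a scale tone. A minimal sketch in Python, assuming C major rooted at middle C; the names here are mine, not the book's:

    # Pitch classes of C major; every out-of-scale note sits one semitone
    # above a scale tone, so snapping downward always lands in the scale.
    C_MAJOR_PCS = {0, 2, 4, 5, 7, 9, 11}
    DIATONIC = {n: n if n % 12 in C_MAJOR_PCS else n - 1 for n in range(128)}

    def track_to_note(x, y):
        """x (0-127, horizontal) -> diatonic pitch; y (0-127) -> velocity."""
        return DIATONIC[x], y

For example, track_to_note(61, 90) snaps the out-of-scale C#4 (note 61) down to C4 (note 60) and passes 90 through as the velocity.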

In my music classroom, I set up the software on a computer and connected a webcam. I had a student hold a laser pointing device in front of the webcam. As she waved the laser from side to side, the software tracked the color of the laser and played through the pitches of the C Major scale from low to high at different velocities. The student needed only to hold the pointing device for the software to track the color from the laser pointer and create diatonic pitches.

Soon I had some of the more musically proficient students providing an accompaniment on traditional instruments like drums, bass, and guitar in the same key as the “laser” patch. At that point, I was able to discuss concepts of high to low pitch as well as varying dynamics. Better still, I had my students demonstrate knowledge of these concepts by performing them: the more advanced students with their traditional instruments, and the untrained and special needs students with this accessible software instrument.

A few weeks later, I designed a few more patches based on the same pitch-filtering concepts but using different control devices. I started working with what I had around me by using the computer’s mouse and keyboard to trigger notes. Next, I started using a touch-screen computer to trigger notes; then, graphics tablets (borrowed from our art department) and Smartboards. For each new patch, I was able to reuse almost all of the Max code except the parts that received numbers from the different control devices; each patch accomplished essentially the same thing with a different control device.
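
That reuse pattern is the key design idea: a fixed note-triggering core behind thin, swappable input adapters. A hypothetical Python sketch of the structure, with all names my own invention:

    def trigger(value, velocity=100):
        # Stand-in for the shared core: filter `value` to a scale tone
        # and play it; here we just print the incoming control value.
        print(f"control value {value} at velocity {velocity}")

    # Each new device needs only a small adapter that yields 0-127.
    def from_mouse_x(x_pixels, screen_width):
        trigger(int(x_pixels / (screen_width - 1) * 127))

    def from_touch(normalized_x):  # touchscreen or tablet, 0.0-1.0
        trigger(int(normalized_x * 127))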

The idea of making tonal music from lots of different devices was intriguing to me. I began buying sensors (such as those from I-CubeX) and putting them all around my classroom. I put pressure sensors on the walls, doors, and floors; light sensors by the windows; anything that would give me some numbers that I could map to music. I started referring to my classroom as the Electro-acoustic Musically Interactive Room, or EAMIR, for short.

Soon afterward, I built these Max patches as standalone apps for Mac and Windows, something we’ll cover in Chapter 11, and put them on the Internet at www.eamir.org so my students could freely download them. Not only did the students download them, but many of their parents did as well. Using these instruments was a great way to supplement teaching the musical objectives I was trying to convey to my students, and the novelty of using technology in this way got their parents involved in the process, and advocating for our music program.

Control Interfaces

A variable is something that changes. A control is something that changes a variable. In music, there are many variables, such as pitch, dynamics, and timbre, that change as a result of the instrument’s control device, also known as a control interface.

The control interface for a violin is typically a bow. Without buttons, knobs, or sensors, the bow is capable of controlling numerous variables within a single, simple interface. For example, if you angle the bow differently as it hits the strings, the timbre will change; apply more pressure and the dynamics will change.

The Buchla 200e (see Figure A) is a modular synthesizer also capable of controlling numerous musical variables. In fact, the Buchla is capable of creating more diverse timbres than the violin. However, controlling musical variables on the Buchla, with a control interface of knobs, buttons, and patch cables, involves more gestures than the violinist needs with the bow. Accessibility is the issue!

FIGURE A

The Buchla 200e, created by Donald Buchla (Tiemann, 2006)

For the purposes of performance, some control interfaces are more accessible than others for real-time use. With a computer, you can arguably achieve any sound imaginable if you tweak the right numbers and press the right buttons. It is a well-designed control interface, however, that allows a performer to readily control musical variables in a less cumbersome way than clicking on menu items from pull-down lists and checking boxes.

Throughout history, people have created new musical instruments, and the instruments created generally reflect the technological resources available at the time. Early primitive instruments had few moving parts, if any. The Industrial Revolution made way for the modern piano to evolve using steel and iron. In the Information Age, it stands to reason that newly created instruments may largely involve computers and electronics.

New Interfaces for Musical Expression (NIME) is an international conference in which researchers and musicians share their knowledge of new instruments and interface design. Session topics include controllers for performers of any skill level as well as the pedagogical implications of using these controllers.

Tod Machover, Professor of Music and Media at MIT, and inventor of Hyperinstruments, among other projects, shared an interesting thought: “Traditional instruments are hard to play. It takes a long time to [acquire] physical skills which aren’t necessarily the essential qualities of making music. It takes years just to get good tone quality on a violin or to play in tune. If we could find a way to allow people to spend the same amount of concentration and effort on listening and thinking and evaluating the difference between things and thinking about how to communicate musical ideas to somebody else, how to make music with somebody else, it would be a great advantage. Not only would the general level of musical creativity go up, but you’d have a much more aware, educated, sensitive, listening, and participatory public” (1999).

With proper practice, an individual can control most variables of an instrument well and at very fast speeds. However, the initial performance accessibility of an instrument or control interface has definite implications for its use by individuals as a musical instrument—in particular, those individuals who lack formal musical training and those who have physical or mental impairments.

Videogame controllers are typically designed with mass accessibility in mind. Many early videogames used joysticks and only a few buttons to control game play. Game controllers today are typically comfortable to hold, with buttons and other switches positioned to allow the user to access them easily. Some controller developers, such as Nintendo, have incorporated sophisticated sensors and gyroscope technology into their controllers to provide continuous data about the controller in addition to using buttons and switches. A few EEG-based game controllers exist that aren’t held but worn on the head and controlled by measuring brain waves.

After I had created a few EAMIR patches, I began using the same basic program functionality with different types of videogame controllers. In one lesson, one of my teaching objectives was to discuss functional harmony and how certain diatonic chord progressions function.

In a classroom of individuals with mixed levels of musicianship, I didn’t want to spend the majority of my time focusing on how to play each chord in the progression on their instrument—that was something they could work on with their instrumental teacher, and some students didn’t have an instrument. I wanted to discuss the chord itself and how it functioned among other chords in a given key. I wanted them to hear and experience the concept I was trying to explain and I didn’t want them to miss it because of their lack of ability to demonstrate the concept themselves on a traditional instrument.

At that time, the game Guitar Hero was becoming extremely popular. All of my students played the game and were familiar with its unique guitar-shaped controller. Naturally, I felt that using this controller to trigger chord functions would give some of my students an accessible control interface that they already knew how to use, while allowing me to address my teaching objectives: chord functions. Plus, it’s really cool looking!

In Max, I made a patch that mapped each of the first four buttons, in combination with the two-position flipper, to the eight chord functions of a given key. Button one pressed down with the flipper held down triggered the one chord, button one held down with the flipper held up triggered the two chord, and so on. The fifth button, in combination with the flipper, triggered one of two chord voicings: one that played a full root-position “rhythm” triad and another that played only the single root note of each chord in a higher “lead” octave. The controller then had the ability to play chord functions and scales in any diatonic key starting on any tonic.
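
That mapping is compact enough to sketch. Here is a hypothetical Python version of the button/flipper logic described above; the original was a Max patch, and these names and details are illustrative:

    C_MAJOR = [0, 2, 4, 5, 7, 9, 11]  # scale steps in semitones

    def chord_for(button, flipper_up, tonic=60, scale=C_MAJOR):
        """Buttons 1-4 x flipper position -> scale degrees 1-8 as a triad."""
        degree = (button - 1) * 2 + (1 if flipper_up else 0)  # 0..7
        # Stack diatonic thirds above the root, adding an octave each
        # time a scale position wraps past the top of the scale.
        return [tonic + scale[(degree + i) % 7] + 12 * ((degree + i) // 7)
                for i in (0, 2, 4)]

    def lead_for(button, flipper_up, tonic=60, scale=C_MAJOR):
        """The 'lead' voicing: just the chord root, one octave up."""
        return [chord_for(button, flipper_up, tonic, scale)[0] + 12]

Here chord_for(1, False) returns [60, 64, 67], a root-position C major triad, and changing the tonic or scale list retunes the whole controller to another key, just as described.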

Since the controller was already familiar to the students, there was little instruction needed to explain how the instrument worked, and we were soon able to discuss chord progressions in terms of chord functions. Imagine my surprise when one of my students who couldn’t (or wouldn’t?) play a glockenspiel was able to tell me that he preferred the sound of a “I V vi IV progression in E Major” to other progressions. Others enjoyed hearing how chord functions differed in more distant tonalities like Harmonic Minor and Lydian ♭7. I even made some specialized notation for the activity using the “colored note-heads” feature in the notation software we used, in which the arrow markings indicate which position to “strum” with the flipper.

FIGURE B

Guitar EAMIR-O notation with colored noteheads and flipper position markings

Students soon began bringing in other game controllers and interfaces to see if they could make music with those as well. All the while, the patch remained largely the same as I modified it to suit these different controllers. The only part that needed to be rewritten was the part of the patch that mapped data from the buttons of whatever game controller I was using to the buttons in my patch that triggered chord playback.

Anything that Max can get data from is fair game for use as a control device. There are even applications for mobile devices that can send data to Max with relative ease. You can also use mobile apps to control your computer’s mouse and keyboard, thereby sending data to Max. Wireless mice, keyboards, and numeric keypads can be used as inexpensive controllers or modified to become footswitches. In the next chapter, we will discuss how to get data from various controllers and sources. Remember: it’s cool to say that you got your phone or your joystick to communicate with a Max patch, but it’s how you map that data that really matters.


Join the EAMIR Community today!