Introduction to MIDI and Computer Music: The MIDI Standard
MIDI is an acronym that stands for Musical Instrument Digital Interface. It’s a way to connect devices that make and control sound — such as synthesizers, samplers, and computers — so that they can communicate with each other using MIDI messages. This lets one keyboard trigger sounds on another synthesizer, and it makes it possible to record music in a form that allows for easy note editing, flexible orchestration, and song arrangement. Virtual instruments — computer programs that simulate hardware synthesizers and samplers — also communicate with computer sequencing software running on the same computer using MIDI messages.
This web page introduces you to the basics of MIDI. The interactive application you can download from the syllabus helps you to understand some specifics of MIDI messages.
MIDI evolved as a standard to enable communication between the more compact and affordable synthesizers that were available in the early 1980s, after the era of large, expensive modular analog synthesizers. MIDI was meant to allow someone to control multiple synthesizers from a single keyboard, so as to generate, for example, the massive layered sounds popular in some ’80s pop music. Formerly, such connections between instruments were not standardized, so incompatibilities were common. The MIDI standard was completed in 1983 by a consortium of musical equipment manufacturers (including Korg, Oberheim, Roland, Sequential Circuits, and Yamaha). Products featuring the standard, such as the popular Yamaha DX7, were on the market soon after.
Before long, sequencing software for personal computers could take advantage of the MIDI communications protocol to let users record, store, and edit music, as well as manage large collections of synthesizer sounds.
The most important thing to understand about MIDI is that it is based on the idea of message-passing between devices (pieces of equipment or software). Imagine a common situation: you have a keyboard synthesizer and would like to record a sequence using the sounds that are in that synthesizer. You connect the computer and synthesizer so that they can communicate using the MIDI protocol, and start recording. What happens?
When you play notes on the synthesizer, all your physical actions (except the dance moves) are transmitted as MIDI messages to the computer sequencing software, which records the messages. MIDI messages are brief numeric descriptions of an action. Keys you press, knobs you turn, the joystick you wiggle — all these actions are encoded as MIDI messages. You hear the sound you’re making, but that sound comes out of the synthesizer, directly to your speakers. The computer does not record the sound itself.
When you play your recorded sequence, the computer sends MIDI messages back to the synthesizer, which interprets them and creates audio in response. Because the music handled by the computer is in the form of encoded messages, rather than acoustic waveforms, it’s possible to change the sound of a track from a piano to a guitar after having recorded the track. That would not be possible if you were recording the sound that the synthesizer makes.
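To make the idea of “brief numeric descriptions” concrete, here is a small sketch in Python (not part of any sequencer; just an illustration) of the two most common messages. A Note On message is exactly three bytes: a status byte that combines the message type with the channel, the key number, and the velocity (how hard the key was struck).

```python
def note_on(channel, note, velocity):
    """Build the three bytes of a MIDI Note On message.
    channel is 0-15 in the bytes (shown to users as 1-16);
    note and velocity are 0-127."""
    assert 0 <= channel <= 15 and 0 <= note <= 127 and 0 <= velocity <= 127
    return bytes([0x90 | channel, note, velocity])

def note_off(channel, note, velocity=0):
    """Build the three bytes of a MIDI Note Off message."""
    return bytes([0x80 | channel, note, velocity])

# Pressing middle C (key number 60) fairly hard on channel 1:
msg = note_on(0, 60, 100)
print(msg.hex())  # -> "903c64"
```

Releasing the key sends the matching Note Off. Everything the computer records is sequences of small messages like these, with timing information — never the audio itself.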
The concept of channels is central to how most MIDI messages work. A channel is an independent path over which messages travel to their destination. Each MIDI connection carries 16 channels. A track in your sequencer program plays one instrument over a single channel. The MIDI messages in the track find their way to the instrument over that channel.
MIDI channels are a bit like channels on your TV set: each channel is independent of the others, and, on some models of TV, can even be watched simultaneously in separate boxes that appear on the screen. Just imagine that instead of a TV show, each channel features a single instrumental part — with notes, pitch bend, and other nuances acting independently of the parts on other channels that are playing at the same time.
Each channel (marked “Ch”) carries its own instrumental part, and has independent volume, panning, and other settings.
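The channel number actually rides along inside every channel message: the low four bits of the status byte hold it, which is why a MIDI connection carries exactly 16 channels. A quick illustrative sketch in Python (the function name is ours, not part of any standard API):

```python
def channel_of(status_byte):
    """Return the display channel (1-16) encoded in the low four
    bits of a MIDI channel-message status byte."""
    return (status_byte & 0x0F) + 1

print(channel_of(0x90))  # Note On on channel 1
print(channel_of(0x99))  # Note On on channel 10
```

Since four bits can hold only the values 0 through 15, sixteen channels is a hard limit of the format, not a design preference.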
Present-day software is capable of performing the sound-making function formerly available only in external hardware-based synthesizers. It’s just as likely now to see, connected to a computer, a keyboard that can’t make any sound at all. Its function is to trigger and control, via MIDI messages, sounds made by the computer. But the sound-making part of the computer software still communicates with the sequencing part using the MIDI protocol.
There are still plenty of MIDI setups that work in the traditional way, with the computer just recording and playing MIDI messages, and the sound created by an external synthesizer. These are especially useful in live setups, where the reliability and faster response of hardware synthesizers are distinct advantages. In such a system, you use MIDI cables to connect the synthesizer to a MIDI interface, which then connects to the computer with the same sort of USB cable you use to connect a printer. MIDI cables are unidirectional, carrying messages in only one direction, so you need two of them; USB, by contrast, is bidirectional. The sound made by the synthesizer goes to a mixer, which then feeds an amplifier and speakers (not shown below).
MIDI ports on the interface and synthesizer are labeled IN and OUT. You connect the MIDI OUT jack of the synthesizer to the MIDI IN jack of the interface, and vice versa.
What if you have more than one external synthesizer? Your MIDI interface might have more than one set of IN and OUT ports. Then you can connect your two synthesizers separately. But if you have a single-port interface, you must make use of the THRU port found on many synthesizers to create a “daisy-chain” (series) connection of devices.
When the computer plays a sequence, the MIDI messages go first to the keyboard synthesizer, which makes sound in response. The keyboard sends a copy of the incoming messages out its THRU port, and these enter the drum machine on its IN port. The same thing happens again between the drum machine and the rack-mount (i.e., no keyboard) synthesizer, which is the end of the chain.
This is a handy way to connect devices, but it has two drawbacks. First, all the devices must share the same 16 MIDI channels, which might not be enough to construct a full arrangement of a song with many different sounds. The bigger problem, though, is that you would have to configure each device to ignore the channels you don’t want it to respond to — fiddly work that you probably don’t want to bother with in the heat of creation.
The solution to this problem is to get a multi-port MIDI interface, such as the one below. It has 8 independent sets of IN/OUT ports, each of which can carry 16 channels, for a total of 128 channels.
Best of all, when you hook devices up to this interface, any device can control any other. For example, the keyboard controller could play sounds on the drum machine or the rack-mount synthesizer. The MIDI guitar controller could make sounds on all the other devices. The routing would usually be configured in your sequencer software.
For simpler setups, it’s more common today to find keyboards with a USB port that allows for direct connection to a computer, bypassing the MIDI interface. The keyboard in the picture below has both USB (circled) and traditional MIDI ports (to the right).
As mentioned above, a lot of the action formerly taking place in external boxes is now happening in the computer, obviating the need for complex hardware setups. For many situations, all you need is an inexpensive MIDI controller keyboard (without internal sounds), with a USB connection to the computer.
Synthesizers and samplers have large numbers of sounds (which we call patches or programs). The patches appear in banks of 128 or fewer, and your computer software selects the patches by number, even if you choose the patches from a list of names and never notice the patch numbers. Types of sounds — pianos, guitars, violins — are assigned to numbers in a way that is not compatible between different synthesizers. That means that a sequence recorded using one type of synthesizer will not sound remotely the same when played using a different type of synthesizer.
To address this problem, the MIDI standard includes the General MIDI (or GM) specification. The most important part of this is a standard assignment of instrument types to patch numbers. For example, in a General MIDI compatible sequence, a violin sound will always be patch number 41. The violins on two different keyboards will not sound exactly the same, but at least they will sound like violins.
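Selecting a patch is done with a Program Change message, which is just two bytes. One wrinkle worth knowing: GM patch lists are conventionally numbered 1–128, but the byte sent on the wire is 0–127, hence the subtraction in this illustrative Python sketch:

```python
def program_change(channel, patch_number):
    """Build a Program Change message. patch_number is 1-128 as
    listed in the General MIDI patch map; the wire value is 0-127."""
    assert 1 <= patch_number <= 128
    return bytes([0xC0 | channel, patch_number - 1])

# Select GM patch 41 (Violin) on channel 1:
print(program_change(0, 41).hex())  # -> "c028"
```

Sending this message to any GM-compatible instrument selects some kind of violin sound, which is exactly the portability the specification is after.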
A similar problem affects drum kit patches: the assignment of individual drum sounds to keys on the keyboard is not guaranteed to be compatible between different synthesizers. General MIDI specifies a map of typical drum sounds to keys. It also declares that channel 10 is the drum channel, so that a sequence can depend on finding drum sounds there.
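Putting the two GM conventions together — channel 10 for drums, and a fixed key-to-sound map — a drum hit can be sketched like this in Python (the key numbers below are a few entries from the GM percussion key map; the constant names are ours):

```python
DRUM_CHANNEL = 9       # channel 10, i.e. 9 in the message bytes

# A few entries from the General MIDI percussion key map:
BASS_DRUM_1 = 36
ACOUSTIC_SNARE = 38
CLOSED_HI_HAT = 42

def drum_hit(key, velocity=100):
    """Note On for a percussion sound on the GM drum channel."""
    return bytes([0x90 | DRUM_CHANNEL, key, velocity])

print(drum_hit(ACOUSTIC_SNARE).hex())  # -> "992664"
```

Because both the channel and the key assignments are standardized, a GM drum track plays sensible drum sounds on any GM-compatible instrument.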
For the names of patches and drum sounds, and their assignments to patch numbers and keys, see the General MIDI Instrument Patch Map and Percussion Key Map.
To enhance compatibility between different MIDI sequencing and music notation programs, even those running on different operating systems, the MIDI standard defines a specification for the Standard MIDI File. This type of file (usually having the file extension “.mid”) represents multi-track sequences, complete with patch selections, notes, pitch bend, and other controls. A wide variety of programs can read and write SMF files. The format is especially useful in conjunction with the GM patch set, to enhance portability between different systems.
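To give a feel for the format, here is a minimal, illustrative Python script that writes a valid Standard MIDI File containing a single note. The tick resolution and file name are arbitrary choices; the chunk layout (an “MThd” header chunk followed by “MTrk” track chunks, with delta times encoded as variable-length quantities) comes from the SMF specification.

```python
import struct

def vlq(n):
    """Encode a delta time as a MIDI variable-length quantity."""
    out = [n & 0x7F]
    n >>= 7
    while n:
        out.append(0x80 | (n & 0x7F))
        n >>= 7
    return bytes(reversed(out))

TICKS_PER_QUARTER = 480  # arbitrary resolution choice

# One track: select GM Violin (patch 41), play middle C for a quarter note.
events = (
    vlq(0) + bytes([0xC0, 40]) +                      # Program Change, ch. 1
    vlq(0) + bytes([0x90, 60, 100]) +                 # Note On, middle C
    vlq(TICKS_PER_QUARTER) + bytes([0x80, 60, 0]) +   # Note Off
    vlq(0) + bytes([0xFF, 0x2F, 0x00])                # End of Track meta-event
)

# Header chunk: length 6, format 0, one track, ticks-per-quarter division.
header = b"MThd" + struct.pack(">IHHH", 6, 0, 1, TICKS_PER_QUARTER)
track = b"MTrk" + struct.pack(">I", len(events)) + events

with open("example.mid", "wb") as f:
    f.write(header + track)
```

The resulting file should open in any sequencer or notation program that reads Standard MIDI Files, which is the whole point of the format.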