TI:ME (Technology Institute for Music Educators) Course 2A: Advanced Sequencing, Second Edition




Appendix D: Basic MIDI Concepts





  1. Review the definition of MIDI – MIDI is a serial digital protocol that MIDI-capable devices use to communicate with one another. In the most common usage, MIDI allows a musician to play notes on the keyboard of one MIDI instrument, while another instrument (or instruments) responds to these triggers. MIDI data communicates performance information (note on/off, MIDI pitch, velocity, patch, etc.), not sound; the synthesizer or sound module generates the signal that becomes the sound.
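As an illustration of "performance information, not sound," the sketch below shows how a single Note On event is encoded as three bytes. This is a minimal Python sketch; the `note_on` helper is hypothetical, not part of any MIDI library.

```python
# A minimal sketch of how a MIDI Note On message is encoded as bytes.
# MIDI is a byte-oriented serial protocol: a status byte (high bit set)
# followed by data bytes (high bit clear, so data values are 0-127).

def note_on(channel: int, key: int, velocity: int) -> bytes:
    """Encode a Note On: status 0x90 | channel, then key and velocity."""
    assert 0 <= channel <= 15 and 0 <= key <= 127 and 0 <= velocity <= 127
    return bytes([0x90 | channel, key, velocity])

# Middle C (key 60) at velocity 100 on channel 1
# (channel numbers are 0-15 on the wire):
msg = note_on(0, 60, 100)
print(msg.hex())  # -> "903c64"
```

Only these three bytes travel down the cable; the receiving synthesizer or sound module turns them into sound.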




  2. Review MIDI channels – Within MIDI there are sixteen separate channels that act as discrete streams of data. A MIDI device set to listen to a single channel will execute only data on that channel and will ignore MIDI data on other channels. MIDI channels play a smaller role in today's multitrack, computer-based sequencers that use software synthesizers, but they remain useful for specific tasks.
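The channel of a Channel Voice message is carried in the low four bits of its status byte, which is how a device can filter by channel. A minimal sketch, with hypothetical helper names:

```python
# Sketch: a device listening on one channel ignores messages on others.
# The channel is the low nibble (4 bits) of the status byte.

def channel_of(message: bytes) -> int:
    """Return the 0-based channel of a channel voice message."""
    return message[0] & 0x0F

def accept(message: bytes, listen_channel: int) -> bool:
    """True if a device set to listen_channel should execute this message."""
    return channel_of(message) == listen_channel

note_on_ch3 = bytes([0x92, 64, 90])   # Note On, channel 3 (0-based: 2)
print(accept(note_on_ch3, 2))   # -> True
print(accept(note_on_ch3, 0))   # -> False
```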




  3. Review the MIDI IN, OUT, and THRU ports – A MIDI device receives MIDI data on its MIDI IN port and sends MIDI data from its MIDI OUT port, while the MIDI THRU port retransmits exactly the data received at the MIDI IN port. Although each MIDI connection is one-way, modern setups typically use USB between MIDI devices and a host computer, allowing MIDI data to travel in both directions over a single USB cable. Software synthesizers use virtual MIDI ports and cables to connect to host software and to other virtual MIDI devices.




  4. Review a synthesizer's basic performance parameters that are relevant to MIDI:

    1. Voice – A synthesizer voice consists of all modules necessary to play a single note. This normally includes at least one tunable sound generator (an oscillator or sample) that establishes a basic timbre, plus sound modifiers including a filter and an amplifier, along with envelope generators to control the latter over the duration of a note. Other modifiers may also be included, such as an LFO for pitch modulation, and glide or portamento to make the sound generator slide between pitches.

    2. Polyphony – A synthesizer’s polyphony indicates how many notes can speak simultaneously and is equal to the number of voices with which the synthesizer is equipped. A guitar can be said to be a six-voice polyphonic instrument. Modern software synthesizers often have unlimited polyphony, subject only to the processing power of the computer on which they run.

    3. Multi-timbral – A synthesizer capable of playing multiple sounds of differing timbres (programs) simultaneously is said to be multi-timbral. This is not necessarily related to polyphony; a synthesizer with limited polyphony can still be multi-timbral. Programs are usually assigned within the synthesizer to receive data on different MIDI channels.
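Dynamic voice allocation on a limited-polyphony instrument can be sketched as follows. The oldest-note "stealing" strategy shown is one common choice, not the only one, and the `VoicePool` class is purely illustrative:

```python
# Sketch of dynamic voice allocation on a polyphonic synth: each Note On
# claims a voice; when every voice is busy, the oldest note is "stolen"
# (one common strategy among several).

class VoicePool:
    def __init__(self, polyphony: int):
        self.polyphony = polyphony
        self.active = []  # keys in the order their notes started

    def note_on(self, key: int):
        if len(self.active) >= self.polyphony:
            self.active.pop(0)          # steal the oldest voice
        self.active.append(key)

    def note_off(self, key: int):
        if key in self.active:
            self.active.remove(key)

pool = VoicePool(polyphony=3)
for key in (60, 64, 67, 72):   # four notes on a 3-voice synth
    pool.note_on(key)
print(pool.active)  # -> [64, 67, 72]; note 60 was stolen
```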




  5. Review the most commonly used MIDI Channel Voice messages:

    1. Note On – This causes a MIDI device to speak. A Note On message consists of the On command plus the MIDI channel number, the Key number (determines pitch; values range from 0-127, where middle C is key 60, labeled C3 or C4 depending on the manufacturer), and the Velocity (normally determines loudness or brightness; values range from 1-127). Note: a MIDI Note Off command exists in the specification but is seldom used; a Note On with Velocity = 0 accomplishes the same thing. Note Off also includes a Key number and a release Velocity value; the latter is sometimes visible in sequencers but seldom used.

    2. Pitch bend – This message "bends" a note up or down, typically via a wheel with a center detent. Unlike the controllers below, pitch bend is a dedicated Channel Voice message rather than a Control Change (CC). The default range is typically ±2 semitones, and many instruments allow ranges up to ±12 semitones. Pitch bend is a 14-bit number for accuracy and smoothness, providing a total of 16,384 steps.

    3. Modulation (CC#1) – This is normally used to add vibrato via an LFO, with a control range from 0 to 127.

    4. Volume (CC#7) – The control range of Volume is 0 to 127. Because Volume is a 7-bit number (a total of 128 steps), sweeping it can create an audible "stepping" (zipper) effect, which makes it poorly suited to smooth real-time volume changes.

    5. Pan (CC#10) – Short for “pan pot” or “panoramic potentiometer”, this controller positions a sound within a stereo field (left to right). Values go from 0-127, where 0 is hard left, 64 is center, and 127 is hard right.

    6. Aftertouch – Some keyboards allow the player to apply pressure to fully depressed keys, which generates Aftertouch information ranging from 0-127. This is generally used to increase brightness or add vibrato.

    7. Sustain (CC#64) – This is a switch controller: values 0-63 are treated as off and 64-127 as on, though it is typically transmitted as either 0 (off) or 127 (on).
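The messages above differ mainly in their status byte and in how many data bytes follow it. As a sketch, pitch bend (14-bit, two data bytes) and a Control Change such as Sustain can be encoded like this; the helper names are hypothetical:

```python
# Sketch: pitch bend carries two 7-bit data bytes (LSB then MSB),
# giving 14 bits = 16,384 steps; 8192 is the center detent.
# A Control Change (status 0xB0 | channel) carries a controller
# number and a single 7-bit value.

def pitch_bend(channel: int, value: int) -> bytes:
    """Encode pitch bend; value 0..16383, 8192 = no bend."""
    assert 0 <= value <= 16383
    lsb = value & 0x7F          # low 7 bits
    msb = (value >> 7) & 0x7F   # high 7 bits
    return bytes([0xE0 | channel, lsb, msb])

def control_change(channel: int, controller: int, value: int) -> bytes:
    """Encode a Control Change, e.g. CC#64 (Sustain) or CC#7 (Volume)."""
    return bytes([0xB0 | channel, controller, value])

print(pitch_bend(0, 8192).hex())        # -> "e00040" (centered)
print(control_change(0, 64, 127).hex()) # -> "b0407f" (Sustain on)
```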




  6. Review information about General MIDI (GM):

    1. 24-voice polyphony

    2. 16-part multi-timbral capability with dynamic voice allocation

    3. 128 instrumental sounds and one drum kit consisting of 47 individual drum and percussion sounds

      1. MIDI channel 10 is typically reserved for drums and percussion

    4. Standardized program map (see Appendix A)
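A Program Change message selects one of the 128 standardized GM sounds. The sketch below encodes one and looks it up in a small excerpt of the GM program map; note that programs are numbered 0-127 on the wire but usually listed as 1-128 in documentation. The `program_change` helper is hypothetical.

```python
# Sketch: a Program Change (status 0xC0 | channel, one data byte)
# selects one of the 128 GM instrument sounds. Only a small excerpt
# of the standardized GM map is shown here.

GM_SUBSET = {
    0:  "Acoustic Grand Piano",
    24: "Acoustic Guitar (nylon)",
    40: "Violin",
    56: "Trumpet",
    73: "Flute",
}

def program_change(channel: int, program: int) -> bytes:
    assert 0 <= program <= 127
    return bytes([0xC0 | channel, program])

msg = program_change(0, 56)
print(msg.hex(), "->", GM_SUBSET[msg[1]])  # -> "c038 -> Trumpet"
```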




  7. Understand the Standard MIDI File (SMF) format and how to import it:

    1. The SMF format allows different sequencing programs on different computer platforms to share MIDI data.

    2. All essential MIDI data are stored, although some proprietary (brand-specific) settings may be lost.

    3. There are three types of SMF, which differ in how the MIDI data are organized:

      1. Type 0 is the original SMF format, and combines MIDI data from all tracks into a single track with multiple MIDI channels.

      2. Type 1 retains all discrete MIDI tracks that exist and is the preferred format.

      3. Type 2 stores multiple independent patterns (sequences) in a single file. Its use is limited primarily to hardware sequencers, and it is not recommended.

    4. Most sequencers allow MIDI data on different channels to be split into separate tracks for simplicity.
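The chunk structure of an SMF can be made concrete with a short Python sketch that writes a minimal Type 0 file containing a single middle C. The file name and tick resolution are arbitrary illustrative choices:

```python
import struct

# A minimal sketch of a Type 0 Standard MIDI File: a header chunk
# ("MThd") followed by a single track chunk ("MTrk"). Division here is
# 96 ticks per quarter note; the track plays middle C for one quarter.

def vlq(value: int) -> bytes:
    """Encode a delta-time as a MIDI variable-length quantity."""
    out = [value & 0x7F]
    while value > 0x7F:
        value >>= 7
        out.append((value & 0x7F) | 0x80)
    return bytes(reversed(out))

track_events = (
    vlq(0)  + bytes([0x90, 60, 100])       # Note On, middle C
    + vlq(96) + bytes([0x80, 60, 64])      # Note Off after 96 ticks
    + vlq(0)  + bytes([0xFF, 0x2F, 0x00])  # End of Track meta event
)

header = b"MThd" + struct.pack(">IHHH", 6, 0, 1, 96)  # format 0, 1 track
track  = b"MTrk" + struct.pack(">I", len(track_events)) + track_events

with open("middle_c.mid", "wb") as f:
    f.write(header + track)
```

A Type 1 file would simply use format 1 in the header and write one MTrk chunk per track.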

Appendix E: ReWire and Client Applications



Background: Host and Client Applications
Digital audio workstations (DAWs) send and receive audio data through audio interfaces. For efficiency and to preserve audio quality, the drivers that manage the relationship between the interface and the computer's OS do not allow multiple programs (especially DAW applications) to send and receive audio simultaneously. At times, however, there are reasons to use a DAW in conjunction with other audio applications. For that to work, the host DAW application must have complete control of the audio interface and act as the "master" in the relationship, while the other applications route their audio output through the host DAW and act as client or "slave" programs. This master-slave relationship is achieved through ReWire, software that acts as a plug-in in the host DAW. The ReWire plug-in routes MIDI from the DAW to the client application and returns the client app's audio to the host DAW.
Why use a client application at all?

The client app has virtual instruments that offer additional resources (Reason)



The client app has functionality that the host app doesn’t have (Melodyne, Ableton Live)
Setup: Pro Tools example


  1. Create a Pro Tools session. (Pro Tools must be opened first so that it takes control of the audio interface; if Reason is opened first as a stand-alone application, it will block Pro Tools' access to the audio driver and interface.)

  2. Create a stereo Instrument track

  3. Insert Reason as a multi-channel instrument plug-in on the Instrument track. This will open the Reason application and load the ReWire plug-in.

  4. Switch to Reason. Click the Create menu and choose a virtual instrument from the menu to create or add the instrument to the Reason session.

  5. Click the Browse Patch button (or, in some cases, the Browse Sample button) and choose a sound (patch, preset, etc.) from the menu to load it.

  6. Press the Tab key to flip the Reason rack and plug the instrument’s outputs into the Reason audio interface outputs. The first instrument added or created in Reason will automatically cable to outputs in the Reason software audio interface. Any other instruments will need to be manually routed to the Reason interface.

  7. Switch back to Pro Tools. Set the ReWire plug-in input to correspond to the Reason outputs. Note that Mix L and R correspond to Reason outputs 1 and 2.

  8. Set the Instrument track MIDI output selector to the desired Reason instrument and MIDI channel. After an instrument is created in Reason, it will appear as an available selection on the MIDI output selector of a Pro Tools Instrument or MIDI track.

[Figure: Instrument track MIDI Output selector, showing the MIDI Output Selector button and the current MIDI output menu options]


  9. Record-enable the track.

  10. Make sure that MIDI Thru is checked and the fader levels are up.

  11. You should now be able to play and hear the Reason instrument through Pro Tools.

  12. Record MIDI as necessary on the Instrument track in Pro Tools.

  13. Note: because Reason is a client application and is not part of Pro Tools, the Reason session is not saved as part of the Pro Tools session the way other plug-in settings are. The ReWire plug-in and its settings are saved with the Pro Tools session, so the next time the session is opened, the presence of the ReWire plug-in will trigger Reason to open; however, it will not automatically recall the Reason session, which must be saved as an independent file. The best policy is to save the Reason session file in the Pro Tools session folder.
