The following article is a transcript from Midi Film Scoring’s video tutorial on how to make MIDI sound real, originally titled “Getting the Most out of Your MIDI Sequencer: Part 1 of 2,” authored by Los Angeles film and television composer Dan Brown. Learn basic routing tasks for virtual instruments, how reverb is used, and how to apply MIDI continuous controllers such as expression and modulation data to make MIDI sound realistic. This video transcript is by Opal Transcription Services. Don’t forget to check out the second half of this tutorial: Make MIDI Sound Real: Creating Orchestral Mockups, Part 2!
Transcript: Much of the music that comes out of Hollywood today, particularly from the television industry, is created digitally, in the box, using pre-recorded or sampled instruments, replacing sweaty and error-prone human musicians with coldly efficient — and, more importantly, cost-effective — robots. For better or worse, more and more of the final music coming from professional, high-budget productions is being supplemented, or even completely replaced, by sampled or canned recordings.
When MIDI sequencing is done right, it can be very difficult for even professional musicians to tell the difference between a sample library and the real thing. You don’t have to own the most expensive or hippest sample library out there for your MIDI’d music to sound eerily similar to a live orchestra, as long as you know how to properly use the samples you have.
My name is Dan Brown, and I’m a film and television composer working in Los Angeles. In this two-part series on MIDI orchestration, I’m going to teach you some basic and more advanced ways of creating realistic and professional MIDI sequences. This video series assumes that you already have basic knowledge of how to work professional audio recording software and sample libraries.
You should know how to bus audio, adjust volume and panning levels, and create MIDI, audio, auxiliary, and instrument tracks in both your sequencing software and your sample library plugin. The main philosophy behind MIDI orchestration is to place yourself in the performer’s shoes.
Start by making sure your music is physically capable of being played on the instruments you’re writing for. This is a composition and orchestration problem that can really screw up your final recording before you’ve even started inputting MIDI data.
Our ears have listened to real musicians for years on movie soundtracks and in concert halls, and they have a very clear and accurate aural image of what a real orchestra should sound like. Our ears subconsciously alert our brains to the common mistakes novice MIDI composers make. One frequent error is writing brass parts that are both high-pitched and quiet, even though our ears know that the higher any brass instrument goes, the harder the performer has to blow.
[Trumpet playing a scale]
You can force your MIDI brass to do this, but your ears will realize that something doesn’t sound correct about your high-pitched, soft, and tender brass section.
Another common mistake is writing fast-moving or long, sustained sections for wind players without giving them the chance to breathe.
[Flute playing a scale]
Unlike your MIDI musicians, flesh-and-blood performers occasionally need oxygen to stay conscious. You can fix mistakes like these by giving parts that don’t work on one instrument to another that is more physically suited to it or by trading off musical phrases between instruments to give players time to breathe and relax. These sonic color changes will also make your music more interesting.
Simple writing mistakes start to pile up, and it doesn’t take very many to make the whole recording seem artificial. If you’re not sure something you’ve written can be played on its real-life instrumental counterpart, run it by someone who plays, composes, or orchestrates.
Now, let’s assume you’ve done the first step correctly and your music won’t kill any musicians who try to play it. You can now open up your favorite DAW, or digital audio workstation, like Pro Tools, Logic, or Cubase. Any professional audio software that can sequence MIDI, use third-party plugins, and mix and record audio will do.
For this example, I’m going to use Digital Performer. I’ve already written the music I’m going to be sequencing, and we can see on the page here that I’m calling for solo English horn, French horn, bass drum, and a full string section: Violins I and II, Viola, Cello, and Bass.
For you composers watching this video, not everyone writes their music before sequencing, and some never write it down as notes on a page. That’s up to you as a personal style choice. For the sake of this video, I’ve written down my music here in a transposed score so we can see the written notation and MIDI piano scroll at the same time.
I’ve already written the music and decided the instrumentation, so I’m going to create and name our MIDI tracks. I’m also going to create an instrument track to host my sample library plugin. I’m using the Vienna Symphonic Library plugin, but you can use any library made of live-recorded, non-synthesized samples.
It’s important to create individual audio tracks for each MIDI channel I have here so I can have a control stage for every instrument once the audio is out of the plugin. To do this, I need to assign every instrument channel in Vienna to a different audio output, then go into Digital Performer’s Instruments window to create audio buses from every output channel in Vienna to my audio inputs in Digital Performer. Your original instrument channel will also need to bus to a separate stereo audio track.
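If it helps to see the routing at a glance, here’s the same one-output-per-instrument plan written out as a plain Python mapping. All of the channel numbers, bus names, and track names below are illustrative placeholders, not Vienna’s or Digital Performer’s actual labels:

```python
# One output per instrument: each Vienna channel feeds its own plugin
# output, which is bused to its own audio track in the DAW, giving us
# a separate control stage for every instrument.
routing = {
    "english_horn": {"vienna_out": 1, "daw_bus": "bus 1-2",   "audio_track": "EH Audio"},
    "french_horn":  {"vienna_out": 2, "daw_bus": "bus 3-4",   "audio_track": "FH Audio"},
    "bass_drum":    {"vienna_out": 3, "daw_bus": "bus 5-6",   "audio_track": "BD Audio"},
    "violins_1":    {"vienna_out": 4, "daw_bus": "bus 7-8",   "audio_track": "Vln1 Audio"},
    "violins_2":    {"vienna_out": 5, "daw_bus": "bus 9-10",  "audio_track": "Vln2 Audio"},
    "violas":       {"vienna_out": 6, "daw_bus": "bus 11-12", "audio_track": "Vla Audio"},
    "cellos":       {"vienna_out": 7, "daw_bus": "bus 13-14", "audio_track": "Vc Audio"},
    "basses":       {"vienna_out": 8, "daw_bus": "bus 15-16", "audio_track": "Cb Audio"},
}
```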
After mapping all the audio inputs and outputs and MIDI channels correctly, I can play with the strings a little bit.
[String section playing]
You can hear they sound good, but not quite real. We’re going to jump in and sequence the strings anyway, so let’s hit record.
[String section playing]
Now we can play back what we just recorded and hear that, yes, all the notes are correct and, yes, everything happens at the right time. So why does it still sound a little off?
I’m going to jump to another session, where I’ve already played in all the parts the wrong way, so we can listen to a final product that has been sequenced poorly.
[Ensemble playing]
It’s not terrible, but there’s no fooling anybody into thinking that this is being played by a live ensemble. The biggest problem up front is the reverb — or, rather, lack of reverb.
Reverb, short for reverberation, is the sound you hear bouncing off the surfaces of the room a recording is made in. It’s often referred to as “room sound.” The Vienna sample library and this vocal recording were both created in rooms specifically designed not to reflect much sound, so they sound like the music is being played in a cardboard box.
[English horn playing]
We’re used to hearing orchestral recordings and performances in huge, beautiful concert halls with great acoustics, so we need to try to replicate that. I’m going to create an auxiliary track and insert Digital Performer’s basic reverb plugin onto it. Then I can tell each audio channel we’re using to bus to our reverb aux and decide how much or how little reverb I want per instrument.
[English horn playing]
This is part of why we need individual audio channels available for every instrument. If your samples weren’t recorded with reverb on them, then you generally want to use your favorite fancy reverb plugin rather than the default reverb inside the sample library.
If there’s a featured solo instrument, like our English horn line, I want to put a bit less reverb on it to make it sound closer to our ears. There are many choices and techniques for reverb that can greatly improve the quality of your recording, but we’re going to keep it simple for this video.
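If you’re curious how an aux send works under the hood, here’s a minimal Python and NumPy sketch. The `toy_reverb` function is a crude stand-in for a real reverb plugin (decaying noise convolved with the signal), and the per-instrument send levels are invented for illustration; note how the featured solo gets the smallest send:

```python
import numpy as np

SR = 44100  # sample rate in Hz

def toy_reverb(signal: np.ndarray, decay_s: float = 2.0) -> np.ndarray:
    """Crude reverb: convolve with exponentially decaying noise.
    A stand-in for a real plugin, just to show the signal flow."""
    t = np.arange(int(SR * decay_s)) / SR
    impulse = np.random.randn(t.size) * np.exp(-3.0 * t)
    wet = np.convolve(signal, impulse)[: signal.size]
    return wet / max(np.max(np.abs(wet)), 1e-9)

# Per-instrument send levels: the featured English horn gets less
# reverb so it sits closer to the listener. (Values are made up.)
sends = {"english_horn": 0.15, "french_horn": 0.35, "strings": 0.40}

def mix_with_reverb_aux(tracks: dict) -> np.ndarray:
    # Every channel contributes a scaled copy to ONE shared aux bus;
    # the bus runs through the reverb once and is summed back under
    # the dry mix, exactly how a DAW aux send works.
    aux = sum(sends[name] * audio for name, audio in tracks.items())
    dry = sum(tracks.values())
    return dry + toy_reverb(aux)
```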
Our music sounds much better with reverb, but not perfect. We need to be able to play our instruments more naturally in order for them to sound more human.
This is where real MIDI data comes in. Anything you can physically manipulate on a MIDI controller will send a MIDI signal to your computer. We are going to deal with only four of them right now: volume, which is controller 7; expression, which is controller 11; velocity; and modulation, which is controller 1. We can edit these parameters in Vienna’s Advanced window for each Vienna instrument.
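If you’d like to see what those four signals actually look like on the wire, here’s a quick sketch using mido, a common Python MIDI library; the values are arbitrary:

```python
import mido

# The four MIDI signals we'll use, as raw messages on channel 0.
volume     = mido.Message('control_change', control=7,  value=100)  # CC7
expression = mido.Message('control_change', control=11, value=90)   # CC11
modulation = mido.Message('control_change', control=1,  value=64)   # CC1
note       = mido.Message('note_on', note=60, velocity=96)  # velocity travels with the note

with mido.open_output() as port:  # default system MIDI output
    for msg in (volume, expression, modulation, note):
        port.send(msg)
```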
First, we need to set the overall volume of our patches. In Vienna, there’s a fader labeled “Volume,” which controls (you guessed it) the volume and responds to MIDI continuous controller 7. Controller 7 (volume) and controller 11 (expression) do the exact same thing, but they have different purposes.
We’re going to treat CC7 like the volume knob on a guitar amplifier: We’ll find an initial level for the entire recording and leave it there. Then, we will use CC11 like the volume knob on the guitar body, adjusting the volume output in real-time using our MIDI controller while we’re recording.
We usually have to change the default volume of CC7 because our patches are all playing back at maximum volume, and the larger your ensemble, the more likely you are to overload the volume gain in your recording system. Because we have a fairly small ensemble for this project, I’m only going to move CC7 down from 127 to 100 on every track except for our solo English horn.
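That one-time CC7 setup could even be scripted. A minimal sketch, again with mido, assuming eight instrument channels with the solo English horn on channel 0:

```python
import mido

SOLO_CHANNEL = 0  # English horn keeps its full default volume

with mido.open_output() as port:
    for channel in range(8):  # eight instrument channels in this session
        level = 127 if channel == SOLO_CHANNEL else 100
        port.send(mido.Message('control_change', channel=channel,
                               control=7, value=level))
```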
Now we will map this external foot pedal to controller 11, expression, by right-clicking the expression fader in Vienna, then moving our expression pedal. You can map anything on your MIDI controller to any fader function in Vienna by right-clicking the fader, then moving the desired knob on your controller. You can do this in most sample players.
Another important MIDI signal, velocity, is how hard you strike your keys on a scale from 0 to 127. Each sampled note is divided into layers on that scale, and which layer is selected for playback is by default determined by the velocity. These layers consist of different dynamic levels for the same note.
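To picture how that selection works, imagine a patch with three dynamic layers; the velocity split points below are hypothetical, since every library divides the scale differently:

```python
# Hypothetical three-layer patch: key velocity alone picks the layer.
LAYERS = [(0, 42, "piano"), (43, 84, "mezzo-forte"), (85, 127, "forte")]

def layer_for(velocity: int) -> str:
    for low, high, name in LAYERS:
        if low <= velocity <= high:
            return name
    raise ValueError("velocity must be between 0 and 127")

print(layer_for(96))  # -> "forte": a hard keystroke triggers the loud samples
```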
Dynamics are a little different from plain old volume. A better word to use would be “hardness.” Yes, the instruments get louder in volume as you play louder dynamics, but the inherent sound, or “timbre,” of the instrument also changes, regardless of the volume coming out of your speakers (which, remember, is controlled by our expression pedal right now).
Selecting the dynamic layer by key velocity works great on instruments where the note immediately decays away after you play it, like a piano, but not so much for music that sustains after you hit your keyboard, like our luscious string pad. We need to be able to control the dynamic of our strings while we’re holding down the notes.
We’re going to map our modulation wheel to the velocity crossfade output of the string patch in our plugin. Now we can select our dynamic with the mod wheel while still keeping the final volume coming out of the plugin at whatever level we need with the expression pedal. Some plugins cop out and combine the expression fader and dynamics crossfader into the same control. More control equals more options, though, so we’re going to stick with the Vienna plugin so I can command dynamics and expression level independently of each other.
In the Vienna plugin, you can see an unchecked velocity crossfade box, meaning our instrument is currently set up to let the key velocity determine the dynamic of the patch. Check that box to switch dynamic selection from velocity to the mod wheel.
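Conceptually, a velocity crossfade is just a mod-wheel-driven blend between neighboring dynamic layers of the same note. Here’s a minimal sketch assuming only two layers and an equal-power curve (Vienna’s actual crossfade shape may differ):

```python
import numpy as np

def dynamic_crossfade(soft: np.ndarray, loud: np.ndarray,
                      mod_wheel: int) -> np.ndarray:
    """Blend two dynamic layers of one sustained note by mod-wheel
    position (CC1, 0-127). The equal-power sine/cosine curve keeps
    perceived loudness steady through the middle of the fade."""
    x = (mod_wheel / 127.0) * (np.pi / 2)
    return np.cos(x) * soft + np.sin(x) * loud
```

Because CC1 streams continuously, you can sweep this blend while a note sustains, which is exactly what a one-shot key velocity can’t do.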
Now you can see I’m using my left hand to control the modulation wheel, my right hand to play the keys, and the foot pedal for expression control, so I can control both dynamics and expression data simultaneously while playing in the notes.
Doing everything at once requires a bit of practice and coordination, but ultimately sounds much more realistic because you are reacting in real time to the music and other instruments you’ve recorded — exactly the same way live musicians do. If you don’t have an expression pedal handy or your controller or plugin doesn’t support them, map both the dynamics and expression controls to your mod wheel.
To make sure your plugin detects your MIDI data, you need to insert a measure zero into your sequence. This is an empty measure without any music that sits before the first beat of your music. In this bar, record all of your MIDI parameters at the level you want them to start at in bar one so that your plugin doesn’t start playing back your first notes at whatever level you left them at when you finished recording.
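In the MIDI file itself, measure zero is nothing more than your starting controller values written at tick 0, a full bar ahead of the first note. A minimal sketch with mido; the controller values and the note are placeholders:

```python
import mido

mid = mido.MidiFile(ticks_per_beat=480)
track = mido.MidiTrack()
mid.tracks.append(track)

# Measure zero: park the starting levels at tick 0 so playback never
# begins at whatever the faders were left at after recording.
track.append(mido.Message('control_change', control=7,  value=100, time=0))
track.append(mido.Message('control_change', control=11, value=80,  time=0))
track.append(mido.Message('control_change', control=1,  value=40,  time=0))

# The first real note lands on the downbeat of bar one, 4 beats later.
track.append(mido.Message('note_on',  note=60, velocity=90, time=4 * 480))
track.append(mido.Message('note_off', note=60, velocity=0,  time=480))

mid.save('measure_zero_demo.mid')
```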
That wraps up part one of “Getting the Most out of Your MIDI Sequencer.” If you paid attention, you learned about:
- Considerations to make when composing music for orchestra
- Some basic track setup and routing for MIDI instruments
- What reverb is, and
- How to set and control MIDI data: volume (controller 7), expression (controller 11), velocity, and modulation (controller 1)
Featured image by MITO SettembreMusica (CC BY 2.0).