Regarding how we do it, I will try to keep the techno-babble to a minimum:
1) General MIDI was originally designed to offer sixteen channels on a single MIDI port. Advances in economics and technology have made more than one port a reality - in fact, MANY more than one! The number of synths (ports) is not unlimited, but in practical terms you can have lots of ports, each with 16 channels available.
2) Nomenclature - I've seen MIDI files designated for many more than 16 channels. This essentially means that in order to assign a part to each channel for proper playback, you'll need a port for every 16 channels of data. I use the following Computer/MIDI equipment:
These synths (or ports) are registered in SonarXL on the Primary machine, and are all fed to a Yamaha 01v 24-channel Digital Mixer, then to a WamiRack 24 Digital I/O, to be recorded to hard disk. Additionally, the WamiRack's 4 MIDI ports are driving all external modules, along with an EgoSys MIDITerm 4140, with its four MIDI OUT ports driving the Secondary PC. The S/PDIF outs for all ports are connected to Flying Calf DACs, and then via optical outputs (to isolate the systems) to the sound system for monitoring. The editing process takes place with headphones, along with the first mix. Subsequent mixes, down to the final, are done with studio monitors, small bookshelf speakers, and very small "personal stereo" speakers. This helps to balance the mix to sound its best in a variety of listening environments.
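The "one port per 16 channels" scheme above can be sketched in a few lines of code. This is an illustrative mapping only (real sequencers like Sonar let you assign each track to any registered port/channel by hand); the function name and zero-based indexing are my own assumptions:

```python
def assign_port_and_channel(track_index, channels_per_port=16):
    """Map a zero-based track index to a (port, channel) pair.

    Hypothetical helper: each MIDI port carries 16 channels, so any
    track count beyond 16 simply spills over onto the next port.
    """
    port = track_index // channels_per_port
    channel = track_index % channels_per_port
    return port, channel

# A 40-track orchestral file spans three ports:
for t in (0, 15, 16, 39):
    print(t, assign_port_and_channel(t))
```

Track 16 lands on channel 0 of the second port, which is exactly why a file "designated for more than 16 channels" needs multiple ports for proper playback.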
Usually I have to know the piece well, and I have to feel comfortable that the original sequencer won't mind if I vary his/her interpretation for the benefit of the music. BTW, the Bartók piece on the MP3 site marked the first time I'd ever rendered a MIDI file, having never heard a performance of the actual piece.
1) I open the file w/Sonar 1.3, examine how many tracks it uses, and start re-assigning tracks to particular resources, based on the type of resource required. Most of the time, I'll render a piece across all the resources at my disposal, which helps spread the load over a wider resource area. And since the Roland is a wavetable synth, this allows it more time to create the waveforms for the tracks it's responsible for.
In addition to the aforementioned Gig resources, I use a 115 meg custom SoundFont that I've created from sample CDs by Sonidomedia, EMU, and Sonic Implants. This SoundFont also includes two instruments that I created personally: a gloriously brash Tam-Tam, coming in at 7 seconds, and an earth-shattering Bass Drum, at about 4 seconds.
2) I listen to the file, over and over, learning the sequencer's use of tempi, dynamics, phrasing, etc. I use the score to correct wrong notes, and if it's not a work that I sequenced, I beg the original sequencer to let me change catatonic or rabbit-fast tempi to more moderate interpretations. Controller 7 (Volume) is set once and changed only when necessary; Controller 11 (Expression) is used for articulation on long or legato notes, unless the samples being used provide their own Expression. After this, I'll vary velocity settings to provide stronger or softer dynamics when needed, and to vary the timbre from note to note on samples so equipped.
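For the curious, the controller moves described above boil down to standard three-byte MIDI Control Change messages. A minimal sketch, assuming raw MIDI bytes (status 0xB0 + channel for Control Change; CC 7 is Channel Volume, CC 11 is Expression per the MIDI spec) - the helper name is hypothetical:

```python
def control_change(channel, controller, value):
    """Return the three raw bytes of a MIDI Control Change message.

    Illustrative only: a sequencer like Sonar draws these as controller
    envelopes, but this is what goes over the wire.
    """
    if not (0 <= channel <= 15 and 0 <= controller <= 127 and 0 <= value <= 127):
        raise ValueError("out of range for a MIDI CC message")
    return bytes([0xB0 | channel, controller, value])

# Set the overall level once with CC 7 (bytes 0xB0, 0x07, 0x64)...
volume = control_change(0, 7, 100)
# ...then shape a long, swelling note with a ramp of CC 11 values:
swell = [control_change(0, 11, v) for v in (40, 70, 110, 90)]
```

Velocity, by contrast, is baked into each Note On message, which is why it can change timbre note by note on samples recorded at multiple dynamic levels.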
3) When I get to the point where I think the mix sounds good, I listen to playback on speakers in addition to headphones. This hopefully helps me counteract the aural bias picked up during the first mix with headphones. I use an inverted representation of the BiPhonic equalization curve to counteract the effects of headphone mixing (see Headwize.com).
4) I control all orchestral balance from Sonar, with the Mixer providing "line" balance, and use an orchestra layout that is a modified European layout used by Fritz Reiner and the Chicago Symphony of the '50s. I then bounce to a two-track (L&R) mix, and begin work on the virtual soundstage on which the work is performed. I use Audio FX-3, from Cakewalk, to set up the acoustic environment in which the music will be heard, and then on to the WamiRack Digital I/O, to be recorded to hard disk as a wave file.
5) After all this, I listen to the piece again on big speakers, little speakers, and cheapie little speakers, as well as my headphones, to hopefully get a good balance between them all.
6) I archive the wave file, but before that, I copy it to an MP3 (128k-256k/44.1) using MP3 Producer Professional, and post it.