Apple MainStage — Patching for Performance

Roger Foxcroft
Feb 20, 2021

Apple’s MainStage product provides an elegant and sophisticated way of mapping external MIDI controllers to software synths, virtual instruments and so on. For an introduction, please see my other blog post. This article covers a number of things you could (and should) do to ensure your live performance happens reliably, and that your live rig remains performant and fit for purpose.

The fundamental difference between MainStage and a DAW like Logic Pro is its intended use in live performance. This means low latency is essential. While church organists are quite happy with a huge delay between hitting a key and hearing the sound, your average pit musician is not, and a responsive keyboard is certainly important when playing with an ensemble.

How do I know if I have problems?

In short: you’ll know! The first indicator will be sluggish response from the keyboard, and sometimes everything can be fine until you add just one more plugin and BAM! A quarter-second delay on the keyboard. Other problems, which can be harder to diagnose, include:

  • sound breaking up, or even missing sounds on some keys
  • sluggish patch changes
  • stuck (held) notes
  • notes cutting off before the tail has finished

Hardware

The first thing to consider is the basics of how computers work. Unless you’re controlling external synth hardware (which you can in MainStage) you will need some CPU grunt. How much depends on the virtual instruments you are using, so some research into the best plugins for live use is a good idea.

If you are careful you can get away with a fairly modest CPU, such as the i5 in the older Mac mini, but you do need to make sure you have enough RAM to hold samples. MainStage loads samples into memory for a fast response time, so a full theatre concert file can run at 4–8GB of RAM.

The CPU does maths and moves things around in RAM — so you need enough RAM to hold all that, and enough CPU to do all the maths. The less maths and less moving around the better, of course, and it’s not always obvious what counts as “all the maths”. For instance, Modartt’s Pianoteq is modelled, not sampled, and you’d think it was quite CPU heavy but for some reason it’s not. Whereas a really good convolution reverb will definitely give your CPU a workout.

One other important point here: as well as generating audio output from all this maths, it is the CPU’s job to move data around in order to work on it. The number one reason for apparently poor CPU performance is actually too little RAM. When your computer runs low on RAM the operating system (macOS in this case) starts to do some actually pretty clever stuff to free up memory. You can see this in Activity Monitor.

Typical view of memory pressure in macOS’s Activity Monitor tool

Here we can see that 2.84GB of RAM is “compressed”. This means that macOS thinks I’ve not used it for a while, but the application that created it is still running. So it compresses it, and if the application comes back and needs it again it will uncompress it.

In general this is fine, and you don’t even know it’s happening. But on a theatre rig that is doing nothing else you really don’t want macOS doing this, so it’s essential you have enough RAM for the things you’re doing, or you’ll be losing CPU cycles to compressing and uncompressing data rather than processing audio.
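If you’re curious, you can pull the same figure from the Terminal. Here’s a minimal Python sketch, assuming macOS, whose built-in vm_stat tool reports these counters (the line labels below are as vm_stat prints them):

    # A minimal sketch: read vm_stat's counters to see how much RAM is
    # sitting compressed, the figure Activity Monitor shows as "Compressed".
    import re
    import subprocess

    out = subprocess.run(["vm_stat"], capture_output=True, text=True).stdout

    # vm_stat's first line reports the page size, e.g. "(page size of 4096 bytes)"
    page_size = int(re.search(r"page size of (\d+) bytes", out).group(1))

    # "Pages occupied by compressor" is the compressed memory pool
    pages = int(re.search(r"Pages occupied by compressor:\s+(\d+)", out).group(1))

    print(f"Compressed memory: {pages * page_size / 1024**3:.2f} GB")

If that number is consistently large while MainStage is running, it’s a good sign you need more RAM.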

I would recommend a mid to upper range processor and at least 8GB of RAM. My theatre rig was, for years, an i5 Mac mini with 8GB of RAM, only recently upgraded to an i7 Mac mini with 16GB of RAM due to a couple of shows that had massive concert files.

Disclaimer: I am still on an Intel Mac and haven’t yet moved to the new generation of Apple Silicon, where higher performance with less RAM may reduce the hardware requirement.

Most importantly, whether you’re using a laptop, a Mac mini or a fully loaded Mac Pro, you need an external sound card. The internal sound hardware is noisy, and you really don’t want your WhatsApp notification sound played out over the theatre FoH speakers because you forgot to mute it!

The cheapest way of doing this effectively is something like the Behringer UCA222, which is around £20 from a number of retailers. It has RCA phono outs, which any decent sound technician should be able to work with. Or, if you have slightly more budget and want something with balanced outputs, I’d recommend the Scarlett Solo from Focusrite, which is more like the £110 mark.

External sound hardware takes the digital-to-analogue conversion (and some of the mixing job) away from the Mac, reduces latency, and improves sound quality. So if you invest in only one piece of equipment as part of this, make it one of these! If you need MIDI you may need to look around a bit for something that meets your needs slightly better, but the Focusrite range is excellent quality and well priced.

Plugins

Before we get into specific things you can do to improve performance in MainStage, let me first talk about third-party plugins.

Once you have enough memory to hold your samples in RAM, the next big deal is how much CPU processing you need. Some plugins require more than others, and this is largely down to their sophistication. For instance, Spitfire’s BBC Symphony Orchestra is sublime and I love it for my composition projects. But there’s no way I would use it live in the theatre. The plugin is incredibly sophisticated, analysing playing styles to dynamically adjust the samples being used (of which there are millions) to give as much realism as possible.

However, in the theatre you probably can’t afford the complexity of these plugins.

In my experience the MainStage and Logic Pro stock sounds all perform excellently, as do Garritan Personal Orchestra, Modartt’s Pianoteq and the Native Instruments Kontakt library. However, for live use you need to be careful with things like Native Instruments’ Session Strings Pro or EastWest Hollywood Strings. These are CPU-hungry plugins, and that can lead to big problems.

Also, the Alchemy virtual instrument is famously CPU intensive.

Additionally, be careful with effect plugins. Apple’s Space Designer is an excellent reverb plugin, but it is a convolution reverb rather than an algorithmic reverb, and convolution takes far more CPU cycles, thereby introducing more latency. The number one reason for latency in projects I’ve fixed is overuse of Space Designer.
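For a sense of scale, here’s some back-of-envelope arithmetic in Python. The numbers are illustrative assumptions, not Space Designer’s internals: a convolution reverb has to combine every incoming sample with an entire recorded impulse response, while an algorithmic reverb only runs a handful of delay lines and filters.

    # Back-of-envelope arithmetic (illustrative only; real plugins use
    # FFT-based partitioned convolution, far cheaper than this naive figure
    # but still heavier than an algorithmic reverb's few delay lines).
    sample_rate = 48_000    # samples per second
    ir_seconds = 3.0        # an assumed, typical hall impulse response length
    ir_samples = int(sample_rate * ir_seconds)

    # Naive direct convolution: one multiply-add per IR sample, per output sample
    macs_per_second = ir_samples * sample_rate
    print(f"IR length: {ir_samples:,} samples")
    print(f"Naive cost: {macs_per_second / 1e9:.1f} billion multiply-adds per second")

Even allowing for the clever FFT tricks real plugins use, that’s a lot of maths for a bit of ambience.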

Measuring your CPU and RAM usage

Display Preferences panel in MainStage.

To help us musicians, Apple provides a CPU and RAM usage view in MainStage. To enable it you may need to go to MainStage’s preferences and turn it on as shown here, by ticking Show toolbar CPU and memory meters.

Typical view of CPU and RAM meters in MainStage.

This will add CPU and MEMORY sections to the existing MIDI IN monitor in the top bar, and will help us to examine and diagnose problems.

By double clicking each of these we can open a sub-window, which tracks the use of CPU and RAM in our concert in real time. Usefully, these vertical stacking bars are also colour coded to help us spot any hogs on either resource. This screenshot is taken from The Full Monty, and you can see it is fairly modest, with Pianoteq taking the lion’s share of RAM and Kontakt following on. CPU is barely registering anything, but it would go up if I played the keyboard, so it’s always good to play-test your setup.

MainStage settings

It is worth at this point spending a couple of minutes talking about the system settings for MainStage.

Under settings there is an Audio tab, and this is where your audio hardware is configured. You can see that I have my MOTU 828ES (in the studio; this is not my live rig!) running at 44.1kHz.

I normally work in the studio at 96kHz, but for theatre work that is simply overkill, so I work at 48kHz. Here’s the deal: at a given buffer size, a higher sample rate means a faster response time, but more CPU load. So find a setting that works for you and your hardware, though for most purposes 44.1kHz and 48kHz will be indistinguishable to your theatre’s front of house crew in terms of sound.

If you click Advanced Settings you can fine tune the latency a little bit more.

The I/O buffer size sets how much audio the CPU prepares in one go between fetching data from RAM and pushing the finished audio to your sound card. To make a sound the CPU has to fetch some samples, do some work on them, then push them out the other side. If the buffer is too large the latency is too high, because the whole buffer is filled before it’s sent to the sound card. If the buffer is too small the CPU melts down trying to keep up. If you can achieve 128 samples then great, but 256 is actually still pretty playable.

Here are some numbers: in the example above, a 128 sample buffer leads to a reported output delay of 6.2ms. If I change that to 256 samples the delay roughly doubles to 12ms. If I then up the sample rate from 44.1kHz to 48kHz, that delay comes down to 11ms.
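The raw arithmetic behind those figures is simple, and worth having in your head when choosing a setting. A quick Python sketch (the printed values are smaller than MainStage’s reported delays, which also include device and driver buffers):

    # A minimal sketch of the buffer arithmetic. The raw figure is just
    # buffer_samples / sample_rate; MainStage's reported output delay is
    # higher because it also counts device and driver buffers.
    def buffer_ms(buffer_samples: int, sample_rate_hz: int) -> float:
        """Time to fill one I/O buffer, in milliseconds."""
        return buffer_samples / sample_rate_hz * 1000

    for samples in (128, 256, 512):
        for rate in (44_100, 48_000):
            print(f"{samples:>4} samples at {rate} Hz = {buffer_ms(samples, rate):5.2f} ms")

Doubling the buffer doubles this term, and moving from 44.1kHz to 48kHz shaves a little off, which is exactly the pattern in the numbers above.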

Simple as that. If you’re playing washy synth pads and so on you can get away with quite a large buffer, but if you’re playing piano you really want something quicker; tune to taste. I have had some gigs where I was really stretching my Mac mini, and at band call I upped the buffer to 512. It’s the single change that makes the biggest difference. Ideally, if I’d had time, I’d have gone through and removed some CPU-heavy sounds, but I didn’t, so I just lived with it.

If you can’t seem to do anything without sounds breaking up, crackling and other nonsense, first drop in here and double that buffer to see if it helps your problem.

MainStage Programming Optimisations

There are some very simple rules for creating MainStage concerts which will definitely pay dividends in terms of CPU and RAM usage. Learn them before you start, or you’ll be cursing later, when you’ve created that 350-patch Legally Blonde concert file only to realise it doesn’t work on your theatre rig!

(Oh yeah — always test it fully on your theatre rig if you’re programming it elsewhere!)

Aliases

First up, and most obvious, is the use of channel strip aliases. There are two really good reasons for using aliases: firstly, many of the settings you apply to one patch will apply to all; and secondly, all aliases share the same instances of plugins in memory.

Let me explain.

Aliasing master sounds in a separate patch

This screenshot is the SOUNDS patch from my Full Monty concert (great show, if you ever have the chance to play for it). I originally saw others, such as the legendary Brian Li, arranging their concerts this way, and it made sense. For each distinct sound in your concert you really want just one channel strip. Take the Marimba in the above concert: it has the Sampler instrument (formerly called EXS24, and used to play all the MainStage stock sounds), as well as EQ, Chorus and Tape Delay plugins.

The marimba is used 13 times in this concert file, and while I’m not sure exactly how much memory or CPU this one channel strip uses, I certainly don’t want it 13 times over! All plugins take up RAM, and some take CPU even when not being used, so we need to reuse this marimba without copying and pasting it.

The answer is aliases. The green arrow in a circle at the top of each of these channel strips indicates that the channel has been aliased. If I click this arrow it takes me to the first aliased use.

Use of an aliased marimba patch, in my Full Monty concert

The green arrow now doesn’t have that circle underneath it, telling us this channel strip is an alias. It’s effectively the same channel strip: any plugin settings, fader level, sends, pans and everything else will be the same for all aliases of this patch. That’s probably fine, because if I tweak the EQ of my marimba I probably want that change everywhere.

Creating an alias is as easy as copy and paste, but instead of selecting paste you select Paste as Alias from the Edit menu.

If you DO want different plugin settings, that’s a different patch, and you can actually convert an alias to an original using the cog icon at the top right of the Channel Strips panel, should you need to.

I arrange all my source channel strips in separate patches at the end of the concert, for convenience and to make the originals easy to find, but you don’t need to. Alternatively you could make the first use of a channel strip the original and alias it from there, but if you later need to delete that first channel strip you have a problem, as doing so deletes all its aliases. Fortunately MainStage will warn you about this, but it’s still better to keep the originals separate.

A last word on aliases: channel strips are aliased, but the mappings for a channel strip are NOT. Those still live at the patch level. So if, for instance, you have that marimba on only part of the keyboard, you can change the key range each time it’s used, along with transposition and what the on-screen controls do to that channel strip. This can actually be a little tiresome, because a global change to what a fader or pot on your keyboard does has to be made to each aliased channel strip separately; that mapping is in the patch, not the channel strip. Make sense?

The easiest way to make changes like this is to fix the mapping on one alias, then copy that corrected alias and paste it over the others. You can copy and paste an alias with impunity, and the result is still an alias of the same original you created at the beginning.

Effects at Concert Level

Some effects, particularly convolution reverbs like Space Designer, take a load of CPU. If you can use an algorithmic reverb instead then I’d recommend it, but in general all reverbs should live at concert level. This is also how the standard templates from Apple are arranged. I would also recommend removing any concert-level reverbs you are not using, just to free up RAM.

Use the CPU and RAM meters to see the effect that your reverbs are having in your concert. You’ll be amazed!

Minimise Bussing

In general, bussing feels like a good idea. In the studio we use millions of them: to group instruments, to send to effects, to pull out separate parts of the mix in mastering. They’re essential. The problem is they are quite a CPU drain.

The most flexible way of adding reverb is to add it to each channel strip, but that’s deadly for CPU and should be avoided, as described earlier. “OK,” you say, “then I’ll create a reverb at concert level and send to it from my channel strips.”

This is better, but sending to busses is quite CPU intensive. In a theatre environment, even though you might have tens (or hundreds) of different sounds, you’re not playing them all at once, so you don’t need to mix them; you just need to check the overall balance is about right within your ensemble. So we may not need bussing at all, especially if the system is struggling with it a bit.

If we’re just looking for some warmth across the board we might get away with an inline reverb on the Output 1–2 channel strip at concert level. That’s the least flexible option, but it’s a start.

I have a Focusrite 18i20 in my theatre rig, which has discrete built-in mixers, so if I need different reverb levels I often use different outputs from MainStage and have the Focusrite do the heavy lifting of mixing them down to the same physical outputs on the back of the rig. That way I keep my flexibility, though this may not work for you. For example, I might put a general ambience on Output 1–2 but use Output 3–4 for something I need completely dry, with the sound card sending both to the same outputs for the FoH engineer in the theatre.

In general, though, as my dad used to say, “jettison the superfluous”: if you don’t need the flexibility, don’t add it in. It just takes CPU cycles, and because MainStage lets you apply a send to all channels in one hit it’s easy to fix later on if you do need to change something.

As a side note, if your instrument plugins have their own reverb you may want to remove that too, and instead run everything through your concert-wide reverb. It depends a bit: the default reverb on Pianoteq just gives the piano some ambience, and the piano doesn’t sound quite right without it, so you’ll have to play with your sounds and see what works.

Summary

Using MainStage efficiently makes a massive difference, but at the end of the day you’re still asking a computer to do some fairly hefty work, so sometimes the answer will simply be to upgrade. But if you use a decent external sound card, make efficient use of memory and CPU in your concert files, use aliasing, and choose sensible audio settings, you’ll get the best you can out of your rig.

One final word: in the theatre you usually have an interval, so if the problem is solved by splitting the concert in two, go for it. I always create one concert file first, while I’m still working out sounds and so on, and then save Act 1 and Act 2 separately. This is really easy if you used aliases, as all the original channel strips are at the end of the concert, so each concert gets its own set. As a further optimisation, once you’ve removed Act 2 from your Act 1 file you may find some channel strips are no longer used at all and can be deleted.

I hope my experience and the advice here is useful.


Roger Foxcroft

I've been into music and computing in equal measure my whole life, and now compose, arrange and perform music semi-professionally.