I’ve been fortunate enough to have some hands-on time with the new FMOD Studio alpha build, and I recently had the chance to talk in depth with Raymond Biggs, Lead Tools Developer at Firelight Technologies, about their exciting new toolset.
Firstly, thanks for taking the time out from your busy schedule to chat. Could you give us a brief overview of FMOD Studio and where it sits in the lineage of FMOD?
My pleasure! FMOD Studio is our next-generation sound design, composition and production tool for games, and it’s the successor to FMOD Designer. It’s a completely new tool that draws its inspiration from DAW workflow but is tailored to games. Hopefully this means that if you’ve used a DAW before, you’ll immediately be comfortable working with FMOD Studio.
Reducing the learning curve was one of our major goals for Studio. We wanted newcomers to get up and running quickly, but also make features progress logically so you can easily discover how to do something without resorting to the manual. A major part of this is using familiar terms and concepts.
I can say that already, from playing with the alpha build, the accessibility is totally there; after just a few minutes of playing around you find yourself saying ‘I get it!’ Where did the fundamental ideas and philosophy behind the concept come from?
Actually the core idea goes back to the inception of FMOD Designer and the early days of FMOD, when the company was just Brett Paterson (CEO) and Andrew Scott (Development Manager). Firelight had just released the low-level FMOD sound engine and API for games. Back then it was common for programmers to hard-code audio file paths directly into the game code. So they wanted to create a high-level “data driven” tool for sound designers - a tool that would let sound guys do their thing without needing programmers involved.
There were a couple of tools available, such as ISACT by Creative Labs and XACT by Microsoft, that improved the asset-handling side of things, but Brett and Andrew had specific ideas for creative features - such as automation, blending sounds based on game parameters, and multi-layered sounds. There was a copy of Vegas by Sonic Foundry kicking around the office, and they saw the potential of applying a multi-track UI to game sounds. Hence the first few versions of Designer were heavily influenced by the Vegas UI.
That was the core idea - to take the UI of DAWs and apply it to games. However, over the years and through iterations of development, Designer strayed further and further from that concept. With FMOD Studio we wanted to return to that original idea, and we reworked all the features of Designer from scratch with that philosophy in mind.
I’m very excited by the hardware control surface integration. Having played around with the alpha build for a while now (with a Mackie Control), I can say it feels very natural and intuitive, and honestly makes these tools feel like a huge evolutionary leap from the fiddly, mouse-centric workflow that game audio tools have been hindered by. In fact, it isn’t until you have those hardware controls available that you realize just how fiddly the mouse workflow is for game audio. Could you talk about how the integration came about and what you felt it needed to do for the user experience?
Control surface support has been on the cards for a while and we toyed with the idea for Designer. However, making it work nicely with the Designer UI was a big problem. For example, controlling a screen full of text boxes with hardware faders would have been very odd. Ultimately, the UI just wasn’t suited for it. Because Studio’s UI is so closely aligned with DAWs it makes control surface integration very natural.
In fact SSL played a big role in making the integration feel right. We’d been using the SSL Nucleus as our lead hardware surface and they flew one of their guys over to help us make the user side of things as natural and intuitive as possible. Together we looked at how the Nucleus worked with a number of DAWs and decided to align our integration with Logic, because we liked the way Logic interacts with the Nucleus and we thought it would be the best fit for Studio.
In terms of workflow there are obvious benefits to controlling the mixer in Studio with physical faders instead of a mouse. What came as a surprise to me was how nice it was to use a control surface with the multi-track. Being able to physically press transport buttons and move faders to simulate game parameters - it’s hard to describe but it feels like you have direct control of the sound, it’s much more tactile and immediate.
As a consequence, hardware control plays a much bigger role when we’re designing new features for Studio, so much so that for the design of the mixer snapshots UI we focused on how we’d like the control surface to work first.
The game parameter hook-ups make so much sense for testing content and iterating with smooth control over transitions. The mixer window itself is a fundamental new aspect of FMOD Studio - could you talk a little about the kinds of control over the mix we can expect to see (e.g. state-based snapshots, side-chains, auto-ducking, etc.)?
Mixer snapshots will definitely have a big part to play in controlling the mix. Studio will have a priority-based snapshot system with per-property (e.g. bus volume and effect properties) scoping and blend settings. You’ll also be able to blend snapshots based on game parameters - to blend between different environments, for example. We think we’ve struck the right balance here between flexibility and ease of use.
Also, Studio will have side-chaining built right into the mixer. It’ll work like an insert meter, and you’ll be able to control any property, not just volume. Because it’s a feature of the mixer rather than of individual effect modules, side-chaining can drive a property anywhere across the mixer, whether it be the cut-off of a low-pass effect, a send level, or the level of a VCA.
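For readers curious about what that means under the hood, here’s a minimal sketch of the side-chaining technique itself - not FMOD’s implementation: an envelope follower tracks the level of a source signal, and that envelope is mapped onto an arbitrary target property, in this case a low-pass cut-off.

```cpp
// Illustrative sketch of mixer-level side-chaining - not FMOD code.
// An envelope follower measures the level of the side-chain source;
// the envelope then drives any target property, not just volume.
#include <cmath>

struct EnvelopeFollower
{
    float attackCoeff;    // per-sample smoothing when the level rises
    float releaseCoeff;   // per-sample smoothing when the level falls
    float envelope = 0.0f;

    float process(float sample)
    {
        float level = std::fabs(sample);
        float coeff = (level > envelope) ? attackCoeff : releaseCoeff;
        envelope += coeff * (level - envelope);
        return envelope;
    }
};

// Map the side-chain envelope onto a low-pass cut-off: the louder the
// side-chain source, the further the cut-off is pulled down on the
// target bus - the "duck any property" idea described above.
float cutoffFromEnvelope(float envelope, float minHz, float maxHz)
{
    float amount = envelope < 1.0f ? envelope : 1.0f;
    return maxHz - amount * (maxHz - minHz);
}
```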
Could you talk a little about how these mix elements are triggered within Studio?
Snapshots will simply be a module in the multi-track and will behave much like a simple sound or a nested event in that you’ll be able to place them on a track, have them triggered by either the timeline or a game parameter, and cross-fade between them. Also, you’ll be able to apply modulators, such as an AHDSR modulator to control fade-in and fade-out.
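Snapshots can also be applied straight from game code. As a hedged sketch - this interview predates the final API, so the calls below reflect the FMOD Studio runtime API as it eventually shipped, and the snapshot name is hypothetical - snapshots are addressed just like events, using a "snapshot:/" path:

```cpp
// Minimal sketch of triggering a mixer snapshot at runtime, using the
// FMOD Studio API as it shipped (names may differ from the alpha).
#include <fmod_studio.hpp>

void duckMusicForDialogue(FMOD::Studio::System* system)
{
    FMOD::Studio::EventDescription* description = nullptr;
    system->getEvent("snapshot:/DuckMusic", &description); // hypothetical snapshot

    FMOD::Studio::EventInstance* snapshot = nullptr;
    description->createInstance(&snapshot);

    snapshot->start();                             // apply the snapshot
    // ... dialogue plays; an AHDSR modulator handles fade-in/fade-out ...
    snapshot->stop(FMOD_STUDIO_STOP_ALLOWFADEOUT); // release the snapshot
    snapshot->release();                           // free once stopped
}
```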
In terms of FX routing and runtime effect plug-ins inside the deck, can you tell us a little about how all that can be routed, as well as the kinds of effects we can expect to see supported?
The routing inside the effects deck is fully flexible. We came up with the idea of placing the fader in the deck itself, so anything to the left of the fader is pre-fader and anything to the right is post-fader, with routing going from left to right. You’re free to simply drag effects around in the deck and place them wherever you like. Sends appear as effect modules in the deck too, so you have full control over where they sit in the signal path.
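That left-to-right deck is easy to picture as a simple processing chain. Here’s an illustrative sketch - not FMOD code - in which effects, the fader, and sends are all just modules in an ordered list, so a send’s position relative to the fader is what makes it pre- or post-fader:

```cpp
// Sketch of the deck routing described above: strict left-to-right
// signal flow, with the fader as just another module in the chain.
#include <functional>
#include <vector>

using Buffer = std::vector<float>;
using Module = std::function<void(Buffer&)>;  // in-place processor

struct Deck
{
    std::vector<Module> chain;  // effects, fader, and sends, in deck order

    void process(Buffer& buffer)
    {
        for (auto& module : chain)  // left-to-right through the deck
            module(buffer);
    }
};

Module makeFader(float gain)
{
    return [gain](Buffer& buffer) {
        for (float& sample : buffer)
            sample *= gain;
    };
}

Module makeSend(Buffer& sendBus)
{
    // A send taps the signal at its position in the chain, so placing it
    // before or after the fader yields a pre- or post-fader send.
    return [&sendBus](Buffer& buffer) { sendBus = buffer; };
}
```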
We’ll have a suite of built-in effects for Studio that will cover the basics, with a focus on high performance and low memory usage. We’re also working with a number of third parties - including iZotope, McDSP, and Little Endian - to develop a suite of signature effect modules fully integrated with the Studio UI.
Are there any plans you can talk about with regard to supporting overall loudness measurement of the output using EBU R 128 and ITU-R BS.1770?
Ultimately we imagine it running as an insert meter on the master bus, but it’s a toss-up at the moment whether we develop it in-house or work with a third party to develop something more comprehensive. We’d definitely like users to have a bunch of metering options in Studio. By default Studio uses RMS + peak meters across the mixing desk, but users have the option of switching to LKFS or QPPM if that’s what they’re used to. There’s no reason why we can’t have plug-ins to add other types of metering as well - it’s something we’re considering.
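For the curious, the core of the ITU-R BS.1770 measurement is compact. The hedged sketch below assumes the input has already been K-weighted by the spec’s two pre-filter stages and skips EBU R 128’s gating: loudness in LKFS is -0.691 + 10 * log10 of the channel-weighted sum of mean squares.

```cpp
// Simplified sketch of ungated ITU-R BS.1770 loudness - not FMOD's
// meter. Assumes samples are already K-weighted per the spec.
#include <cmath>
#include <cstddef>
#include <vector>

double bs1770LoudnessLKFS(const std::vector<std::vector<float>>& channels,
                          const std::vector<double>& weights)
{
    // Sum G_i * z_i, where z_i is the mean square of channel i over the
    // measurement window and G_i is the spec's channel weight
    // (1.0 for L/R/C, 1.41 for Ls/Rs; the LFE channel is excluded).
    double weightedSum = 0.0;
    for (std::size_t i = 0; i < channels.size(); ++i)
    {
        double meanSquare = 0.0;
        for (float s : channels[i])
            meanSquare += double(s) * double(s);
        meanSquare /= double(channels[i].size());
        weightedSum += weights[i] * meanSquare;
    }
    // Loudness = -0.691 + 10 * log10(sum of G_i * z_i), in LKFS
    return -0.691 + 10.0 * std::log10(weightedSum);
}
```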
Could you go into a little more detail about the profiler and how that captures events triggered from the game and then (presumably) lets the user re-play those events and alter the mix parameters…
Essentially we wanted the profiler in Studio to work like a piano roll. The profiler taps directly into the sound engine and records when sounds are triggered, the movement of sounds in the game world, as well as any changes to game parameters. With this data the profiler is able to mimic the behaviour of the game during the recorded sequence. Because the profiler is simply replaying the input to the sound engine, you’re able to make live changes to events and the mixer, and immediately hear the effect of those changes to the output.
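That capture-and-replay idea can be sketched as a timestamped command log. The outline below is illustrative, not Firelight’s implementation (the Engine type and its apply overloads are hypothetical): because only the engine’s input is recorded, replaying it re-renders the audio, which is why live edits are heard immediately.

```cpp
// Sketch of profiler capture and replay: record every command the game
// sends the sound engine, then feed the same commands back in.
#include <string>
#include <variant>
#include <vector>

// The three kinds of engine input described above: sound triggers,
// 3D movement, and game-parameter changes.
struct TriggerEvent { std::string eventPath; };
struct Move3D       { std::string eventPath; float x, y, z; };
struct SetParameter { std::string name; float value; };

using Payload = std::variant<TriggerEvent, Move3D, SetParameter>;

struct Command
{
    double  timestamp;  // seconds since capture started
    Payload payload;
};

class Capture
{
public:
    void record(double now, Payload payload)
    {
        commands_.push_back({now, std::move(payload)});
    }

    // Replay every command falling inside the current playback step.
    // The engine re-renders the audio from this input, so live changes
    // to events or the mixer are audible straight away.
    template <typename Engine>
    void replayStep(Engine& engine, double from, double to) const
    {
        for (const Command& cmd : commands_)
            if (cmd.timestamp >= from && cmd.timestamp < to)
                std::visit([&](const auto& p) { engine.apply(p); },
                           cmd.payload);
    }

private:
    std::vector<Command> commands_;
};
```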
Run-time dynamic filtering, FX processing, and parameter manipulation as part of the overall mix are also significant tools in the creation of compelling game sound. It feels as though these aspects of game sound have been under-developed until now. Do you see the mixing side of game audio getting more deeply embedded earlier in development as the games we make become more challenging?
I definitely see it as a whole new box of creative tools that will be increasingly used right across the board. Why limit these techniques to the main mix? Once the tools are available I think we’ll see them applied at all levels of the mix, right down to within individual sounds. I think the key to dealing with the complexity is to have a well-integrated framework that supports all these techniques and works logically across all levels. This is something we’ve worked hard at with Studio.
I think that’s a really exciting element of these kinds of accessible and flexible mix and sound-manipulation tools, and I can’t wait to start integrating some of these techniques into games much earlier in development. So, I’m sure everyone reading this is wondering when FMOD Studio is going to hit the streets - do you have a date in mind at the moment?
We’re aiming for our first public release in August.
Thanks again to Raymond and the whole team over at Firelight Technologies.