Here is an awesome set of tips for anyone starting to implement using the new Wwise HDR mixing system, from our good friend Rev Dr Brad Meyer.
The mix panel from GDC 2013 is now online in the GDC Vault. (Simon Ashby, Rob Bridgett, Garry Taylor, Kris Mellroth)
There were lots of other loudness- and mix-related talks this year too; I also highly recommend checking out the following…
Game Loudness, Industry Standards
I'm also waiting for the Dead Space 3 Interactive Mixing talk to go live in the Vault; it is well worth checking out for some proprietary ‘atomic snapshot-based Lua goodness’.
The nominees for Best Audio Mix at the 11th annual GANG awards have just been announced. This is only the second year that the best mix category has been running, and last year the award was taken by Battlefield 3 within a category jam-packed with awesome examples of superbly mixed games.
This year is no exception, and there are some incredible titles here…
Far Cry 3
The Darkness II
On a personal note, it is an honest surprise to find Radical’s own Prototype 2 in there, and it brings with it both pride and sadness, as the team who put that game together are now scattered to various corners of the world. Particular mention has to go to the sound and technology teams behind the game, but also to those who commissioned, designed and built the in-house mix room facility at Radical, in which the game was mixed, including the late John Vrtacic, who sadly passed away in 2009 after completion of the studio. If you are a friend of sound, you know who you are!
Congratulations to all the nominees this year!
You can see the full list of GANG award nominees here, and the awards show takes place on Thursday, March 28th, at the Game Developers Conference in San Francisco.
… and the winner is…
FAR CRY 3!
Very well deserved! Congrats to the whole team at Ubi! The game sounds incredible and the mix is excellent.
I have been involved recently in curating a couple of really exciting panels that have their focus squarely on mixing challenges specific to video game production, while at the same time comparing techniques with wider production practices in the world of film post-production mixing. There is something for everyone here, no matter what side of the Atlantic you happen to be on…
The first is at AES next week (6th–8th February, London, UK), and promises lots of very new and exciting info from both production and technical vantage points.
Description: Mixing in video games is a huge area for potential discussion. It is also an increasingly important topic in the interactive audio landscape and is gaining much wider attention in the field. This panel has been assembled to take a step back and assess the field of game audio mixing in some new contexts, examining some of the many facets of the mix from style, philosophy, and approach, to technology, loudness, planning and implementation.
In this moderated panel discussion, several of the leading practitioners and technologists in the field of interactive mixing come together to discuss the emerging theoretical, artistic and technical frameworks for game mixing over the next few years.
The second, at GDC (Thursday, March 28th, 10am) at the Moscone Center in San Francisco, will pick up the debate again on differences in production between film and game mixes, and will attempt to figure out how deep the mix really goes in terms of storytelling and overall direction. The panel will also explore some of the various production challenges and politics involved in both film and game mixes.
DESCRIPTION: Last year was the first year that G.A.N.G. had a category for Best Mix in a video game. As both games and their audiences become more refined and fragmented, not only is the technology becoming more sophisticated, but the quality and challenges of game mixes are also increasing. This panel will consist of an entertaining and lively discussion across a variety of mix-related topics, such as loudness, dynamic range, interactive mix tools & technology, post-production planning, budgeting, mix craft, storytelling, and aesthetics.
It seems we are getting treated and indulged with lots of in-tool loudness and metering tools these days, making it much easier to keep our content consistent and under control without the additional complexity of external metering solutions. The recent Nuendo 6 updates (see my earlier post) cater beautifully for the offline production environment, but now it seems we are getting something for run-time environments too. According to a recent update from Bernard Rodrigue, Software Developer and User Experience Specialist at Audiokinetic, Wwise 2013.1 will include EBU/ITU loudness metering. Given Audiokinetic’s focus on mixing these past few months, I fully expect this addition to be intuitive and very nicely integrated with the new mixer improvements (aux bus routing, etc.) overall.
HDR coming to Wwise.
The Montreal International Games Summit was buzzing last week with news and info about a groundbreaking mix system (made famous by DICE in their Frostbite engine) coming to Wwise.
This presentation by Simon Ashby of AudioKinetic laid the groundwork…
HDR Audio Mixing: Myths, Facts and Techniques Behind HDR Audio
"Due to the non-linearity and unpredictability of game audio, much thought has been given to automatic and intelligent mixing tools over the last few years. HDR (High Dynamic Range) audio has received a lot of attention after the huge success of a well-known franchise which used HDR to improve the quality of their in-game audio mix. The lecture will explain in detail what HDR audio consists of in theory, and how game developers really use it in practice. Concrete examples will be presented for two different approaches. The first operates at the audio level using a combination of dynamic effects such as compression and limiting. The second operates at the logical level, where all volume attenuations are computed before mixing. Finally, in addition to the HDR system, other mixing techniques will be presented."
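The "logical level" approach mentioned above can be sketched roughly as a sliding loudness window: the loudest active sound sets the top of the window, everything else is attenuated relative to it, and sounds that fall below the window's bottom go virtual. This is only an illustration of the general idea, not Wwise's or Frostbite's actual implementation; the function name, the 40 dB window height, and the loudness values are all invented for the example.

```python
# Illustrative sketch of "logical level" HDR mixing: per-sound attenuation
# is computed from declared loudness values BEFORE any audio is mixed,
# rather than by compressing the mixed signal afterwards.

def hdr_attenuations(active_sounds, window_height_db=40.0):
    """active_sounds: dict of name -> declared loudness in dB (hypothetical).
    Returns per-sound gain offsets in dB relative to the window top,
    or None for sounds that fall out of the window ("virtual" sounds)."""
    if not active_sounds:
        return {}
    window_top = max(active_sounds.values())       # loudest sound sets the top
    window_bottom = window_top - window_height_db  # quieter sounds fall out
    gains = {}
    for name, loudness in active_sounds.items():
        if loudness < window_bottom:
            gains[name] = None                     # culled from the mix
        else:
            gains[name] = loudness - window_top    # attenuation below the top
    return gains

# Example: an explosion pushes the window up, so footsteps become inaudible.
gains = hdr_attenuations({"explosion": 120.0, "gunfire": 110.0, "footsteps": 60.0})
```

With these invented values, the explosion sits at the window top (0 dB offset), the gunfire is pulled down 10 dB, and the footsteps drop out of the window entirely.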
Bernard Rodrigue, software developer at Audiokinetic, tweeted the above image as a sneak peek of the feature set we can expect to see, as well as confirming that the feature is scheduled for Q1 of 2013. It is another in a long line of exciting developments from Audiokinetic; in addition to their already stellar mix feature set, and the recent addition of bus metering, aux busses and aux sends, HDR makes Wwise a fully featured, cutting-edge powerhouse and showcase of mix technology.
Special thanks to Silvain Jannot ( @sjannot ) for live tweeting from the presentation.
Nuendo 6: Loudness Under Control
Exciting updates last week from Steinberg regarding the upcoming Nuendo 6…
“Setting new standards — Nuendo 6 Loudness Lane
Nuendo 6 sets new standards in loudness measurement. While common loudness tools only show specific values in real time, Nuendo 6 writes a loudness curve on a separate track, based on short-term loudness, which dramatically helps you judge whether the mix is EBU-compliant. Loud and quiet scenes can now be mixed with unmatched precision, adding that extra finesse to your audio material that makes it well balanced.”
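As a rough illustration of what such a short-term loudness curve measures, here is a simplified sketch: mean-square energy over a sliding 3-second window, expressed in dB with the -0.691 offset from the BS.1770 loudness definition. A real EBU-compliant meter also applies the ITU-R BS.1770 K-weighting filter and per-channel weighting, both omitted here; the function name and defaults are invented for the example.

```python
import math

def short_term_loudness_curve(samples, sample_rate, window_s=3.0, hop_s=0.1):
    """Simplified short-term loudness curve: mean-square energy over a
    sliding 3 s window, in dB. Real EBU R 128 measurement also applies
    the K-weighting filter of ITU-R BS.1770, which is omitted here."""
    win = int(window_s * sample_rate)
    hop = int(hop_s * sample_rate)
    curve = []
    for start in range(0, max(1, len(samples) - win + 1), hop):
        block = samples[start:start + win]
        ms = sum(x * x for x in block) / len(block)
        # The -0.691 dB offset matches the BS.1770 loudness definition.
        curve.append(-0.691 + 10.0 * math.log10(ms) if ms > 0 else float("-inf"))
    return curve
```

Plotting one value per hop against time gives exactly the kind of track-based curve the Nuendo feature describes: loud and quiet scenes become visible at a glance rather than only as a momentary meter reading.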
More info at steinberg.net
Nuendo 6: Advanced Mixing Console
“Nuendo 6 sets new standards in mixing. The new mixing surface of Nuendo 6 provides instant access to all vital functions of the mixer thanks to a sleek single-window concept and allows for a multitude of visibility configurations and drag-and-drop functions for faster operation.
Other major improvements include the intuitive Quick Link system and Control Link groups to temporarily or permanently link entire channels or only specific parameters. The all-new View Sets store and recall any channel configuration as a preset.
Each of the up to four different MixConsoles comes with its own channel visibility management, allowing you to define which channels you want to see and how they are arranged.”
“The creative process starts and ends with professional, respectful collaboration: discussions with the director, being a part of the script writing and voice recording, all the way through production, gradually building up the soundtrack. This approach allows everyone to investigate, to try various solutions with sound and visuals, before settling on anything final. An important distinction within the creative, collaborative field of film and games is that sound production or post-production is the culmination of a collaborative team’s ongoing work together, and not an animator, designer, artist, sound effects designer or composer working alone and handing finished work over to the next person in line. Because we work in different dimensions, we can work together on shaping the same thing at the same time.

Sound is interesting (and inherently challenging) in that it relies heavily on the notion of other work being ‘finished’ before it can be ‘finished’. The collaborative team recognizes that sound is involved all the way through the shaping process, and not just at the end. When the final mix is reached, there are still a lot of critical creative decisions to be made, usually in terms of what to favour at any given moment (voice, FX or music), but mostly about ‘clarity’. These are decisions that need to work with the story, character point of view and gameplay intensity, and so again, the discussion and ideation around what will work should continue to be collaborative with the production team during this time.
Getting the right people in at the right time can be complicated, politically, geographically and temporally (these same people who need to be a part of the process, very often also need to be a part of the process elsewhere).
What has always impressed me about film sound mixes is the presence of the director and other deeply invested parties. Not only is this a welcome presence, it is absolutely a necessary one. This points to what a final mix is really all about: the honing and sharpening of the intended experience for the audience. The technical aspects of the mix, the complexities of loudness, panning, levels, mix-downs and measurements, though present and most assuredly being catered for, are (generally) not the topics for discussion with a director during a film mix.
I think that in games, finding these valuable collaborators is considerably more difficult. They should be the primary stakeholders, essentially the people who care about the totality of the final product, and I’d like to think it would be obvious by the time you reach the final mix, in whatever form that takes, who these people are. I would suggest members of the core IP team fit the bill; however, the timing of any kind of final mix on a game production is crucial, as these people are inherently ‘crazy busy’ at that time too. This notion of collaboration is something I’m always working on with every team and every project, and I think that if collaboration and trust are present from the beginning of a project, then by the end, collaboration on the mix will be less complicated. Any mix that involves multiple collaborators needs to be run in a very professional manner, almost as though it were an extended meeting: keeping things moving, keeping attention on sound, discussing important points, and making notes and action items of lesser ones. Anyone not contributing, or listening, should probably stop looking at their iPhone and leave the room. It can certainly complicate the mix if these sessions start to drag and bog down meaningful progress, so assess the contributors on a day-to-day basis. Perhaps try out a larger group to identify valuable (and interested) contributors, then whittle that group down as the days go by, or have one day per week where the group comes in to contribute.
Finally, this collaboration is about making a better game. Remember, some of the most valuable contributions to a mix come from discussions around the content: character point of view, emotion, an intention or part of the story or gameplay that may be getting lost, or may have been misinterpreted by the sound design, and can be clarified and (usually) addressed. A mix is about so much more than volume levels. It is as much a part of a long collaborative journey as it is, perhaps, its crowning moment.”
Some of the many loudness issues in video game sound get good coverage in this superb piece by Shaun Farley for Game Developer Magazine…
"It is important to remember that metering, even loudness metering, is merely a tool for mixing. It provides [objective] feedback, and helps the sound professional predict how a mix will behave on other systems. It should not dictate your artistic choices, but inform them"
I’m catching up on a few mix-related links that have gone around lately. DesigningSound has a great interview, really worth reading, with Garry Taylor of Sony Europe about the loudness standards his group is beginning to implement for first-party developers.
Huge kudos to Garry and the teams at Sony for pushing on this, it is a shrewd move and will almost certainly see other large groups studying this area more closely.
Gamasutra featured a full mix post-mortem that I put together after we shipped Prototype 2. The piece tries to cover everything from pre-production to production in as much detail as I could provide on the thought process, planning and technical approaches.
The full article can be read over at gamasutra here.
I’ve been fortunate enough to have some hands-on time with the new FMOD Studio Alpha build, and I recently had the chance to get in-depth with Raymond Biggs, Lead Tools Developer at Firelight Technologies, about their exciting new toolset.
Firstly, thanks for taking the time out from your busy schedule to chat. Could you give us a brief overview of FMOD Studio and where it sits in the lineage of FMOD?
My pleasure! FMOD Studio is our next-generation sound design, composition and production tool for games, and it’s the successor to FMOD Designer. It’s a completely new tool that draws its inspiration from DAW workflow but is tailored to games. Hopefully this means that if you’ve used a DAW before, you’ll immediately be comfortable working with FMOD Studio.
Reducing the learning curve was one of our major goals for Studio. We wanted newcomers to get up and running quickly, but also make features progress logically so you can easily discover how to do something without resorting to the manual. A major part of this is using familiar terms and concepts.
I can say that already, from playing with the alpha build, the accessibility is totally there; after just a few minutes of playing around you find yourself saying ‘I get it!’ Where did the fundamental ideas and philosophy behind the concept come from?
Actually the core idea goes back to the inception of FMOD Designer and the early days of FMOD when the company was just Brett Paterson (CEO) and Andrew Scott (Development Manager). FMOD had just released the low-level sound engine and API for games. Back then it was common for programmers to hard code audio file paths directly into the game code. So they wanted to create a high-level “data driven” tool for sound designers - a tool that would let sound guys do their thing without needing programmers involved.
There were a couple of tools available, such as ISACT by Creative Labs and XACT by Microsoft, that improved the asset handling side of things, but Brett and Andrew had specific ideas for creative features - like automation and blending sounds based on game parameters as well as multi-layered sounds. There was a copy of Vegas by Sonic Foundry kicking around the office and they saw the potential of applying a multi-track UI to game sounds. Hence the first few versions of Designer were heavily influenced by the Vegas UI.
That was the core idea: to take the UI of DAWs and apply it to games. However, over the years and through iterations of development, Designer strayed further and further from that concept. With FMOD Studio we wanted to return to that original idea, and we reworked all the features of Designer from scratch with that philosophy in mind.
I’m very excited by the hardware control surface integration, and having played around with the Alpha build for a while now (with a Mackie Control), I can say it feels very natural and intuitive, and honestly makes these tools feel like a huge evolutionary leap from the fiddly, mouse-centric workflow that game audio tools have been hindered by. In fact, it isn’t until you have those hardware controls available that you realize just how ill-suited the mouse workflow is to game audio. Could you talk about how the integration came about and what you felt it needed to do for the user experience?
Control surface support has been on the cards for a while and we toyed with the idea for Designer. However, making it work nicely with the Designer UI was a big problem. For example, controlling a screen full of text boxes with hardware faders would have been very odd. Ultimately, the UI just wasn’t suited for it. Because Studio’s UI is so closely aligned with DAWs it makes control surface integration very natural.
In fact SSL played a big role in making the integration feel right. We’d been using the SSL Nucleus as our lead hardware surface and they flew one of their guys over to help us make the user side of things as natural and intuitive as possible. Together we looked at how the Nucleus worked with a number of DAWs and decided to align our integration with Logic, because we liked the way Logic interacts with the Nucleus and we thought it would be the best fit for Studio.
In terms of workflow there are obvious benefits to controlling the mixer in Studio with physical faders instead of a mouse. What came as a surprise to me was how nice it was to use a control surface with the multi-track. Being able to physically press transport buttons and move faders to simulate game parameters - it’s hard to describe but it feels like you have direct control of the sound, it’s much more tactile and immediate.
As a consequence, hardware control plays a much bigger role when we’re designing new features for Studio, so much so that for the design of the mixer snapshots UI we focused on how we’d like the control surface to work first.
The game parameter hook-ups make so much sense for testing content and iterating with smooth control over transitions. The mixer window itself is a fundamental new aspect of FMOD Studio; could you talk a little about the kinds of control over the mix we can expect to see (e.g. state-based snapshots, side-chains, auto-ducking, etc.)?
Definitely, mixer snapshots will have a big part to play in controlling the mix. Studio will have a priority-based snapshot system with per-property (i.e. bus volume and effect properties) scoping and blend settings. You’ll also be able to blend snapshots based on game parameters, to blend between different environments, for example. We think we’ve struck the right balance here between flexibility and ease of use.
Also, Studio will have side-chaining built right into the mixer. It’ll work like an insert meter and you’ll be able to control any property, not just volume. Because it’s a feature of the mixer rather than of individual effect modules, you’ll be able to use side-chaining to control any property right across the mixer, whether it be the cut-off of a low-pass effect, a send level, or the level of a VCA.
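The mixer-level side-chaining described above can be sketched as two pieces: an envelope follower on the source bus, and a mapping from that envelope onto whatever target property is being driven. The function names, coefficients, and frequency range below are all invented for illustration; this is not FMOD's API.

```python
# Hypothetical sketch of mixer-level side-chaining: an envelope follower
# on a source bus drives ANY target property (here, a low-pass cut-off),
# not just a volume duck.

def envelope(samples, attack=0.5, release=0.05):
    """One-pole envelope follower over a block of samples (0..1 output)."""
    env, out = 0.0, []
    for x in samples:
        # Rise quickly toward loud input, fall slowly when input drops.
        coeff = attack if abs(x) > env else release
        env += coeff * (abs(x) - env)
        out.append(env)
    return out

def sidechain_property(env_value, prop_min, prop_max):
    """Map envelope level (0..1) onto a target property range: the louder
    the side-chain source, the closer the property moves to prop_min."""
    return prop_max - env_value * (prop_max - prop_min)

# Dialogue gets loud -> the low-pass cut-off on the music bus closes down.
env = envelope([0.0, 0.9, 0.9, 0.9])
cutoff_hz = sidechain_property(env[-1], prop_min=500.0, prop_max=20000.0)
```

The appeal of doing this at the mixer level, as described in the interview, is that the second function could just as easily target a send level or a VCA instead of a filter cut-off.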
Could you talk a little about how these mix elements are triggered within Studio?
Snapshots will simply be a module in the multi-track and will behave much like a simple sound or a nested event in that you’ll be able to place them on a track, have them triggered by either the timeline or a game parameter, and cross-fade between them. Also, you’ll be able to apply modulators, such as an AHDSR modulator to control fade-in and fade-out.
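An AHDSR (attack-hold-decay-sustain-release) modulator of the kind mentioned above can be sketched as a simple piecewise function of time. The parameter values and function shape here are invented for illustration, not Studio's actual modulator.

```python
# Illustrative AHDSR envelope for fading a snapshot in and out.
# All timings are in seconds; output is a level from 0.0 to 1.0.

def ahdsr(t, attack=0.5, hold=0.2, decay=0.3, sustain=0.7,
          released_at=None, release=1.0):
    """Envelope value at time t. If released_at is set, the release stage
    ramps from the sustain level down to zero."""
    if released_at is not None and t >= released_at:
        rt = t - released_at
        return max(0.0, sustain * (1.0 - rt / release))  # release ramp
    if t < attack:                        # ramp up from silence
        return t / attack
    t -= attack
    if t < hold:                          # hold at full level
        return 1.0
    t -= hold
    if t < decay:                         # fall toward the sustain level
        return 1.0 - (1.0 - sustain) * (t / decay)
    return sustain                        # sustain until release
```

Applied to a snapshot, the attack stage is the fade-in, the sustain stage is the snapshot fully (or partially) applied, and the release stage is the fade-out when the trigger region ends.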
In terms of FX routing and runtime effects plug-ins inside the deck, can you tell us a little about how all that can be routed, as well as the kinds of effects we can expect to be supported?
The routing inside the effects deck is fully flexible. We came up with the idea of placing the fader in the deck itself, so anything to the left of the fader is pre-fader and anything to the right is post-fader, with routing going from left to right. You’re free to simply drag effects around in the deck and place them wherever you like. Sends also appear as effect modules in the deck as well, so you have full control over where they sit in the signal path.
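The left-to-right deck routing described above can be sketched as an ordered list of modules with the fader sitting somewhere in the chain. This is a toy model, not FMOD's signal path; the function and the stand-in effect are invented, and a real effect would of course process buffers, not single values.

```python
# Sketch of the deck routing idea: modules to the left of the fader are
# pre-fader, modules to the right are post-fader, and dragging a module
# past the fader changes the result.

def process_deck(x, deck, fader_gain):
    """deck is an ordered list: callables are effect modules, and the
    string "fader" marks where the channel fader sits."""
    for module in deck:
        if module == "fader":
            x *= fader_gain
        else:
            x = module(x)
    return x

def add_dc(x):
    """Stand-in 'effect' whose result depends on where it sits."""
    return x + 0.2

pre = process_deck(1.0, [add_dc, "fader"], fader_gain=0.5)   # effect pre-fader
post = process_deck(1.0, ["fader", add_dc], fader_gain=0.5)  # effect post-fader
```

Because sends also appear as modules in this model, dragging a send left or right of the fader is what determines whether it taps the pre-fader or post-fader signal.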
We’ll have a suite of built-in effects for Studio that will cover the basics with a focus on high performance and low memory usage. We’re also working with a number of 3rd parties including iZotope, McDSP, Little Endian to develop a suite of signature effect modules fully integrated with the Studio UI.
Are there any plans you can talk about with regard to supporting overall loudness measurement of the output using EBU R 128 and ITU-R BS.1770?
Ultimately we imagine it running as an insert meter on the master bus, but it’s a toss-up at the moment whether we develop it in-house or work with a 3rd party to develop something more comprehensive. We’d definitely like users to have a bunch of metering options in Studio. By default Studio uses RMS + peak meters across the mixing desk, but users have the option of switching to LKFS or QPPM if that’s what they’re used to. There’s no reason why we can’t have plug-ins to add other types of metering as well - it’s something we’re considering.
Could you go into a little more detail about the profiler and how that captures events triggered from the game and then (presumably) lets the user re-play those events and alter the mix parameters…
Essentially we wanted the profiler in Studio to work like a piano roll. The profiler taps directly into the sound engine and records when sounds are triggered, the movement of sounds in the game world, as well as any changes to game parameters. With this data the profiler is able to mimic the behaviour of the game during the recorded sequence. Because the profiler is simply replaying the input to the sound engine, you’re able to make live changes to events and the mixer, and immediately hear the effect of those changes to the output.
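The capture-and-replay idea described above boils down to logging every command the game sends to the sound engine, then feeding the identical sequence back in after the mix has been changed. The classes and method names below are invented stand-ins, not FMOD's profiler API.

```python
# Minimal sketch of profiler capture-and-replay: record the input to a
# (hypothetical) sound engine, then replay the same input later so mix
# changes can be auditioned against an identical sequence of game events.

class CommandRecorder:
    def __init__(self, engine):
        self.engine = engine
        self.log = []                       # (timestamp, command, args)

    def dispatch(self, timestamp, command, *args):
        """Forward a command to the engine while recording it."""
        self.log.append((timestamp, command, args))
        getattr(self.engine, command)(*args)

    def replay(self, engine):
        """Feed the recorded input into another engine instance, which
        may have different mixer settings."""
        for timestamp, command, args in self.log:
            getattr(engine, command)(*args)

class LoggingEngine:
    """Stand-in engine that just remembers the calls it receives."""
    def __init__(self):
        self.received = []
    def play_event(self, name):
        self.received.append(("play_event", name))
    def set_parameter(self, name, value):
        self.received.append(("set_parameter", name, value))

live = LoggingEngine()
rec = CommandRecorder(live)
rec.dispatch(0.0, "play_event", "footstep")
rec.dispatch(0.5, "set_parameter", "intensity", 0.8)

tweaked = LoggingEngine()       # e.g. the same engine after a mixer tweak
rec.replay(tweaked)             # identical input, re-audition the mix
```

Because only the input is recorded, not the rendered audio, anything downstream of the commands, events, buses, or effects can be edited between capture and replay, which is exactly what makes this style of profiler useful for mixing.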
Run-time dynamic filtering, FX processing, and parameter manipulation as part of the overall mix are also significant tools in the creation of compelling game sound. It feels as though these aspects of game sound have been under-developed until now. Do you see the mixing side of game audio getting more deeply embedded earlier in development as the games we make become more challenging?
I definitely see it as a whole new box of creative tools that will be increasingly used right across the board. Why limit these techniques to the main mix? Once the tools are available I think we’ll see them applied at all levels of the mix, right down to within individual sounds. I think the key to dealing with the complexity is to have a well-integrated framework that supports all these techniques and works logically across all levels. This is something we’ve worked hard at with Studio.
I think that’s a really exciting element of these kinds of accessible and flexible mix and sound manipulation tools, and I can’t wait to start to integrate some of these techniques into games much earlier in development. So, I’m sure everyone reading this is wondering when FMOD Studio is going to be hitting the streets. Do you have a date in mind at the moment?
We’re aiming for our first public release in August.
Thanks again to Raymond and the whole team over at Firelight Technologies.
This awesome FREE Steinberg plugin allows loudness metering of the final mix in Nuendo 5.x and Cubase 6.5. Please note that SLM 128 is not officially supported by Steinberg.
Garry Taylor tipped me off to this software-based loudness meter. It seems to have everything you’d ever really need (ITU/EBU/RMS/peak/true peak, spectral/RTA, etc.), and it looks really nice (always a plus for a meter… let’s face it, staring at lovely meters is a thing of comfort, like staring into a campfire… maybe that’s just me…)