Composer and adaptive music expert Rik Nieuwdorp shares his thoughts on the state of adaptive music in games.
[epic_dropcap style="dark_ball"]A[/epic_dropcap]fter the initial and successful integration of adaptive music systems in the 90s, such as iMuse, adaptive music strangely disappeared from the scene, probably when full-quality, high-end scores were made possible through CD-ROM technology and the like. Now, adaptivity in game music is slowly seeping back into 'regular' games, and I expect that process to continue.
Basic adaptivity in game music is regaining ground.
In my experience, the idea of implementing an adaptive music system in a game still tends to originate from the audio department and the composers, who want to make the most of the game's auditory experience. It is often a battle to convince the game developer to invest a little extra programming effort in exchange for a far more immersive gaming experience. In these cases the composers usually already know how to design such an adaptive system, of course, and are familiar with the added value of adaptive music.
That said, basic adaptivity in game music is starting to regain ground even in high-profile games such as Killzone and Dead Space, thanks to technological advances and a general growth in attention to all aspects of game development.
Indie game developers are usually more open to experimentation, and the smaller scale on which they operate allows for closer cooperation between game developer and composer. In these projects it is easier for the music and other game departments to influence each other, which can yield far more intricate and previously unimaginable results. This is also why we focus on these projects rather than on more high-profile games, which have a more sluggish development process. The possibility of closer cooperation with all development parties, so you can achieve what you want, or more, is very valuable.
Regarding mobile games, I believe technology is holding us back right now. Soon this won't be a problem anymore, but at the moment it limits the possibility of an extensive adaptive music system. We are working on an iPad game as we speak, and unfortunately I'm forced to rethink the music system and cut back on its adaptivity because, for example, decompressing long, simultaneous music layers takes up too much processor power in a game that is already processor-heavy in its graphics and physics. This will probably no longer be a problem by the third or fourth iteration of the iPad, and that is when elaborate adaptive music systems will become possible for all types of games.
Right now, the emergence of adaptive music also creates opportunities for experienced composers. Everyone, game developers included, has a nephew, friend, or brother-in-law who can churn out a couple of tunes, and such people are more likely to be hired for a nickel and a smile when the developers treat audio as the last item on the to-do list (which is often the case). However, being able to compose, produce, and understand the process behind adaptive music is what sets real game composers apart from the rest.
Good for us. •