As music producers for video games, we usually discuss our art in terms of linear and non-linear music. While this discourse is both descriptive and useful when talking about a game's soundtrack, it also fosters inaccurate assumptions about how music is created and played back in video games.
When analyzing video game narratives using an MDA framework, we can see that non-linearity is primarily an emergent phenomenon of a game’s aesthetics, rather than its explicit design. Further investigation shows that all music is created through a mixture of arranged and performed elements, and that these are not inherently non-linear. The same analysis shows that video games are also created with similar properties. Thus, each sequence in a video game contains varying degrees of performability and arranged elements, which the player then perceives as non-linearity.
The study concludes that video games are an autodramatic art form, in which the players themselves act as performers through a series of sequences that game developers have designed. Producers will therefore come to a more accurate understanding of their craft by shifting the discourse from a "linear vs non-linear" to an "arranged vs performed" terminology.
Why is music designed for games?
To answer this question, we should look at how games themselves are created, because the purpose of creating a game is not necessarily the same as what motivates adding music to its soundtrack. It can be argued that video game music is subordinate to its game's design and logic, since it is primarily created for that context. However, that does not mean video game music is constrained to that context.
Jan-Olof Gullö (2019) revises SVID's definition of design, stating that it is "[…] the process of developing solutions in a conscious and innovative manner where both functional and aesthetic demands are met from the needs of the end user."
Who is the end user for video game music? It may seem straightforward to answer that it is the gamer, but as you will see, it's not that simple. Let us first look at the context of the game itself.
Because of the complexity of game development, we need to understand that most composers for games are only hired to produce a fragment of a solution – in short, "to create music which makes the game more fun". This propensity of game developers to want their game to be fun comes from the discourse within the game design community. The institutions of this field generally teach that the core motivation for a gamer is to keep playing a game that is fun, and that "fun" is a catch-all term covering several sub-motivations: sensation, fantasy, narrative, challenge, fellowship, discovery, and so on. These are considered the product of a game's aesthetics according to the MDA framework (Hunicke et al., 2004).
According to this framework, the rules of a game are at the core of its design, as visualized in figure 1. When a player interacts with those rules, several systems go into effect because of the gameplay, regardless of whether they have been deliberately designed or are unintentional. A player experiences these systems, and if they resonate with the gamer's motivations, it is considered a "fun" game.
As you can see in figure 2, when the categories of figure 1 are translated into the vocabulary of game design, rules are considered the mechanics of a game. Systems become the dynamics of gameplay that emerge from the underlying rules and mechanisms. And finally, a player perceives the sensory experience of gameplay as the aesthetics of the game.
Here we can discover the difficulties of creating art, and in our case music, for video games, because the solutions we as composers are hired to produce depend entirely on two factors: 1.) the game's intended target audience and envisioned aesthetics, and 2.) the underlying mechanics that the game designer needs to produce in order to achieve gameplay that is "fun". Take a look at figure 3. When game designers produce a game, they essentially only have control over its mechanics. And when players interact with a game, they primarily experience it through the aesthetics that emerge from the gameplay.
We can see that to create and design music that is aesthetically fulfilling for a video game, we need to understand not just how music is made and perceived, but also how it will behave within the context of the game, and how that behavior should be accomplished. I think this is the reason why composers rarely get the trust to produce the complete musical experience, from the establishment of design, through implementation in the code base, to ultimately being in charge of the sonic and musical product of the compositions. They simply lack basic insight into game development and design.
So, to bring it back to the question of who the end user is: isn't this enough to conclude that the end user is the gamer, and that the process of creating a successful soundtrack is simply very complicated? No, it is not enough.
As I mentioned earlier in this chapter, the music in a video game is not necessarily constrained to the context in which it is created and played back.
Creating music using the MDA framework
In the example from the chapter above, where mechanics and aesthetics were discussed as the main vantage points for making decisions about how music should be created for a game, we skipped the middle step in the MDA framework. A game's dynamics are not entirely under our control as designers. But that is no reason to dismiss them as unimportant: we should still look at what a game's dynamics mean for music, and at how we can use the entire MDA framework when producing music for video games.
First of all, let's draw some parallels between the MDA framework and music as its own medium. If we apply the principles of Mechanics-Dynamics-Aesthetics to music, we could claim that the structures and rules of music and its genres are underlying mechanisms that we have direct control over. The subsequent performance of those rules and mechanisms becomes the dynamics of a musical system (not to be confused with the music theory term dynamics in this context). Out of this musical system, a listener will perceive the resulting aesthetics as they resonate with the listener's core motivation for listening to the music.
We can see that applying the terminology of game design to music opens the possibility of understanding what role music has within the context of a game, and how it can be created and played back so that it fulfills, and even amplifies, its intended purpose.
As designers of a gaming experience, we can ask ourselves what types of aesthetics we want to produce for the players. For example, let's say that we want to create a sense of discovery when players explore an area for the first time. We therefore decide to play a specific kind of music when they enter that area. With this decision we have designed a type of dynamic in the game. However, if we play this piece of music every time the player enters the area, the subsequent moments of discovery will no longer match the aesthetic implication of hearing that specific music only once. This kind of incongruency between a game's mechanics and aesthetics may lead to ludonarrative dissonance, a term describing the phenomenon where game design counteracts its intended purpose. To avoid ludonarrative dissonance in this example, we can create a mechanic for the musical playback which allows the discovery piece for that area to be played only once during an entire playthrough.
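The play-once mechanic described above could be sketched as follows. This is a minimal illustration, not code from any real game or audio engine; all class, method, and cue names are hypothetical.

```python
class DiscoveryMusicTrigger:
    """Plays an area's discovery cue only the first time the player enters it."""

    def __init__(self, play_cue):
        # play_cue: engine-specific callback that actually starts audio playback
        self.play_cue = play_cue
        self.discovered_areas = set()

    def on_area_entered(self, area_id, cue_name):
        # Later visits stay silent, preserving the aesthetic weight
        # of a one-time moment of discovery.
        if area_id in self.discovered_areas:
            return False
        self.discovered_areas.add(area_id)
        self.play_cue(cue_name)
        return True
```

The design choice here is that the mechanic, not the composer's asset, carries the "only once" rule: the same piece of music could be reused in a game with a different trigger policy without rewriting a note.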
Now you may still be asking yourself “Well… if the end user of game music isn’t the gamer, then who is?”. Before we can confidently answer that question, we first have to look at what happens with game music when it is being played back.
“Non-linear” music is bogus
An established jargon among game composers is the idea around “linear vs non-linear” music. This comes from the notion that we as composers have no control over how the music will be played back. The game composer Richard Vreeland, known under the pseudonym Disasterpeace, has described it as “Instead of thinking about order, like usually in music, it’s more like thinking about proximity. Like what notes would sound best together eventually”.
However, having just learned to perform an MDA analysis, we can use that lens to examine this perceived dynamic. Let's draw a parallel to music as its own medium again. When an act like Daft Punk performs in a club, the songs streaming out of the loudspeakers arguably do not follow the patterns the audience is used to hearing. But critics don't call the music "non-linear", nor does the audience worry about it when engaging in the spectacle. Similarly, when guitarists like Eddie Van Halen stand in front of tens of thousands and let their guitars rip, they are not getting screaming ovations because of their "non-linear" performance. Music simply doesn't work like that, even if the perception in those instances could be that it has an unpredictable and "non-linear" aesthetic.
Now, the keyword here is performance, because for music to happen, we need to set up the possibility for it to happen, and whoever or whatever performs music will do so in some kind of arranged environment. Either we listen to a strictly arranged Bach fugue as it was written, or we listen to a free jazz performance where not even the performers know what's about to come. The degree of arrangement has a direct influence on how the music will be performed and ultimately perceived by its audience. This dichotomy of "arranged vs performed" is what the notion of "linear vs non-linear" music descends from.
This is the most fundamental mechanic of music for video games, and of game design as a whole. Designers arrange a game environment with rules and mechanisms in which the player performs and acts on their own motivations. The degree of arrangement determines how much performability is in the game, and the performability during gameplay is then perceived as an aesthetic of "non-linearity".
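The "arranged vs performed" spectrum can be sketched as a toy playback model: a fully arranged piece follows a fixed cue order, while a performed piece picks each next cue from a set of allowed transitions, thinking in proximity rather than order. This is only an illustrative sketch; real audio middleware works differently, and all names are made up for this example.

```python
import random

class MusicSequencer:
    """Toy model of arranged vs performed cue playback (illustrative only)."""

    def __init__(self, cues, transitions=None, rng=None):
        self.cues = cues                      # fixed order, used when fully arranged
        self.transitions = transitions or {}  # cue -> cues that may follow it
        self.rng = rng or random.Random()

    def next_cue(self, current, arranged=True):
        if arranged:
            # Arranged: follow the written order, like a score.
            i = self.cues.index(current)
            return self.cues[(i + 1) % len(self.cues)]
        # Performed: any cue judged to 'sound good' after the current
        # one may follow; the player's actions decide which fires.
        options = self.transitions.get(current, self.cues)
        return self.rng.choice(options)
```

With `arranged=True` the sequencer behaves like a linear recording; with `arranged=False` the same cue material yields gameplay-driven playback that a listener would describe as "non-linear".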
So, the answer to "Who is the end user of video game music?" becomes even more problematic. What is the end state of music that is being performed during playback? That is the problem a music designer should solve. Because, as I mentioned in the first chapter, video game music can't be constrained to the context of the game; it can also be performed outside of it. When the music resonates with players' and listeners' core motivations, there will be a strong demand to experience that music, and it is the actual end user whose need must be fulfilled. And that end user may not be a gamer; perhaps they are not even aware of the game the music was designed for.
In such a case, what used to be a design where music was performed during gameplay should perhaps transform into a meticulously arranged performance instead, where there is little room for improvisation and “non-linearity”.
By looking at video game music through the lens of a game designer, I have shown that music design for games is an unexplored area from the perspectives of both game developers and music composers. For instance, when a game project is being developed, how do we know that certain mechanisms create specific aesthetics, and vice versa? Music composers and game designers should therefore collaborate to create as much ludonarrative harmony as possible within the game they are working on.
I have also shown that video games are a medium in which players perform some degree of the art themselves. This emergent behavior is properly termed autodramatic, from the Greek auto (self) and drama (to do, act, perform).
Upcoming studies should challenge these ideas and test them in ways that push the boundaries of how much music can be designed according to game design theory, and of where the subjects of music composition and game design start trespassing on each other.
Gullö, J-O. (2019). Design – några anteckningar till 20191021 [Design – some notes, 2019-10-21].
Hunicke, R., LeBlanc, M., & Zubek, R. (2004). MDA: A Formal Approach to Game Design and Game Research. https://users.cs.northwestern.edu/~rob/publications/MDA.pdf
Vreeland, R. (2013). Philosophy of Music Design in Games – Fez [YouTube video]. https://youtu.be/Pl86ND_c5Og?t=1783
I hope you find this topic as interesting as I do 🙂 There is still a lot to unpack surrounding music in video games and I’m looking forward to 2021, and a whole year of diving even deeper into the world of music in all sorts and forms. Check out my portfolio that I’m currently updating with some selected works from my studies.
I have a few exciting projects coming up. There is one in particular I'm very hyped about: an adventure game set in a mystical and unforgiving world infected with a disease that wants you dead. I think it's a very fitting project right now, with the recent pandemic that has affected every single one of us. Hopefully I'll be able to share the details soon.
Until then, I will release an interview I had with Sebastien Najand, who is a Principal Composer at Riot Games. Sebastien and I had a great conversation in November about his alternate reality project K/DA, his work with Riot Games, and what led him there. You will be able to watch the interview on Friday the 18th.
Be healthy and stay strong everybody 🙂