Automated musical composition spans a long history and a broad range of techniques. Over the last seventy years, digital technology has consolidated the field, adding stochastic computational methods and machine learning models that multiply both efficiency and possibilities. Recently, procedural music generation has shifted from ‘possible’ to ‘commercially viable’, driving a new wave of services and products. Videogames use procedural methods to modify audiovisual conditions in real time. Yet game developers currently rely on pre-produced music/sound design audio sequences, aiming for sound quality while balancing storage space against variety. In this paradigm, real-time manipulation is limited to adaptive mixing procedures (e.g., rules for overlapping, EQ, and effects) and recombination (e.g., random or sequential containers, stretching/pitch shifting, fragmentation). These are standard options in popular sound design middleware platforms (FMOD and Audiokinetic’s Wwise), designed to allow adaptation, transformation, and multiplication of existing material according to gameplay. Nevertheless, recurring audio clips (especially music) are recognizable. In extended gaming sessions, the resulting repetition becomes tiring and, through generalized reappearance, weakens the music’s support of the storytelling.
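For concreteness, the sketch below illustrates the recombination paradigm described above: a random container that picks among pre-produced clips while avoiding immediate repeats, plus a rule-based gain mapping driven by a gameplay parameter. This is a minimal illustration only; the class and parameter names (AudioClip, RandomContainer, intensity) are assumptions for this sketch and do not mirror the actual FMOD or Wwise APIs.

```python
import random

# Illustrative sketch of middleware-style recombination; names are
# hypothetical and do not correspond to FMOD's or Wwise's real APIs.

class AudioClip:
    def __init__(self, name, duration_s):
        self.name = name
        self.duration_s = duration_s

class RandomContainer:
    """Picks a pre-produced clip at random, avoiding immediate repeats."""
    def __init__(self, clips):
        self.clips = clips
        self.last = None

    def next_clip(self):
        candidates = [c for c in self.clips if c is not self.last] or self.clips
        self.last = random.choice(candidates)
        return self.last

def adaptive_gain(intensity):
    """Rule-based mixing: map a 0..1 gameplay parameter to a clip gain in dB."""
    intensity = max(0.0, min(1.0, intensity))
    return -24.0 + 24.0 * intensity

combat_layer = RandomContainer([
    AudioClip("combat_loop_a", 32.0),
    AudioClip("combat_loop_b", 32.0),
    AudioClip("combat_loop_c", 16.0),
])

clip = combat_layer.next_clip()
print(f"play {clip.name} at {adaptive_gain(0.7):.1f} dB")
```

Even with such rules, the underlying clips are finite and eventually recognizable, which is the repetition problem motivating this study.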
Although generative adaptive music has been implemented in select cases, it is not part of current commercial game development pipelines. While algorithmic note-by-note generation can offer interactive flexibility and infinite diversity, it poses significant challenges such as achieving human-like performativity and producing a distinctive narrative progression through measurable parameters or variables.
In this study, I introduce the Progressive Adaptive Music Generator (PAMG) algorithm, which uses parameters derived from gameplay variables to produce a continuous music stream that transitions seamlessly between moods/styles and progresses across tension/complexity levels. I review methodological bases found in the literature, identify implementation challenges, sample PAMG-produced material, and present a test scenario that includes a trial videogame and a preliminary comparative perceptual experiment. The test shows that players tend to prefer PAMG over conventional music implementation in several respects, although results are not globally conclusive. It additionally suggests specific priming effects connected with the incidental perception of generative game music.
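As a rough illustration of the gameplay-to-music parameter mapping described above (a sketch under stated assumptions, not the actual PAMG implementation), the example below derives tension and mood targets from hypothetical gameplay variables and smooths the tension value so the generated stream can transition continuously rather than jump between states.

```python
from dataclasses import dataclass

# Hypothetical illustration of mapping gameplay variables to music
# parameters; the variable names (enemy_count, player_health, ...) and
# the smoothing rule are assumptions, not the published PAMG design.

@dataclass
class GameplayState:
    enemy_count: int      # nearby hostiles
    player_health: float  # 0..1
    in_dialogue: bool

@dataclass
class MusicParams:
    tension: float  # 0 = calm .. 1 = maximal tension/complexity
    mood: str       # discrete style target the generator blends toward

def map_state(state: GameplayState) -> MusicParams:
    danger = min(1.0, state.enemy_count / 5.0)
    fragility = 1.0 - state.player_health
    tension = min(1.0, 0.7 * danger + 0.3 * fragility)
    if state.in_dialogue:
        mood = "dialogue"
    elif danger > 0.4:
        mood = "combat"
    else:
        mood = "explore"
    return MusicParams(tension, mood)

def smooth(current: float, target: float, rate: float = 0.05) -> float:
    """Move gradually toward the target so musical transitions stay seamless."""
    return current + rate * (target - current)

tension = 0.0
for state in [GameplayState(0, 1.0, False), GameplayState(4, 0.6, False)]:
    params = map_state(state)
    tension = smooth(tension, params.tension)
    print(f"mood={params.mood} tension={tension:.2f}")
```

In such a scheme, the smoothed parameters would feed a note-by-note generator rather than trigger pre-recorded clips, which is what distinguishes the approach from the middleware paradigm above.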
Lastly, I assess the limitations and future directions of both the current experiment and PAMG, and present a debate about authorship in the upcoming generative-AI context.