Automating Music Production with Music Information Retrieval
Abstract
Prior research in the field of Music Information Retrieval has yielded techniques for extracting musical information from digital audio, making it possible to analyze human music production computationally. I hypothesize that a computer can be programmed to produce output similar to that of a musical artist on two production tasks performed by disc jockeys. The first, “mixing,” aims to create seamless transitions between songs in a playlist. The second, “mashup” creation, aims to overlay multiple similar tracks to create a new combined song. To automate these tasks, I first created example sets of my own mixes and mashups and looked for patterns and relationships in the audio analysis data returned by the public music analysis API provided by The Echo Nest, Inc. I then used my findings to write Python scripts that automatically perform the mixing or mashing task on any input audio files. The software was judged by a sample of individuals for its ability to produce output similar to that of a human DJ. Preliminary results support the claim that automatic music production processes can be convincing, but also show that the programs perform poorly when processing unexpected input, suggesting that such tasks are most easily replicated within specific, predefined artistic styles.
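To make the mixing task concrete, the core of a beat-matched transition can be sketched from the kind of summary features (tempo in BPM, beat timestamps) that an audio-analysis API such as The Echo Nest's returns. The function names, the 8-bar lead-out, and the synthetic beat grid below are illustrative assumptions, not the author's actual implementation:

```python
def tempo_ratio(bpm_out, bpm_in):
    """Playback-rate multiplier that matches the incoming track's
    tempo to the outgoing track's tempo."""
    return bpm_out / bpm_in

def transition_point(beats, bars_from_end=8, beats_per_bar=4):
    """Pick a beat near the end of the outgoing track, snapped to a
    bar boundary, at which to start the crossfade."""
    idx = len(beats) - bars_from_end * beats_per_bar
    idx -= idx % beats_per_bar  # snap back to the start of a bar
    return beats[max(idx, 0)]

# Example: outgoing track at 128 BPM, incoming at 120 BPM.
rate = tempo_ratio(128.0, 120.0)  # incoming track sped up ~6.7%
beats = [i * 60.0 / 128.0 for i in range(512)]  # synthetic beat grid
start = transition_point(beats)   # crossfade begins 8 bars before the end
```

A real pipeline would then time-stretch the incoming audio by `rate` and apply a crossfade envelope from `start`, which is where processing unexpected input (irregular beat grids, tempo drift) becomes the hard part.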