Automatic music generation dates back more than half a century. A prominent approach is to generate music symbolically in the form of a piano roll, which specifies the timing, pitch, velocity, and instrument of each note to be played. This has led to impressive results like producing Bach chorales, polyphonic music with multiple instruments, as well as minute-long musical pieces.

But symbolic generators have limitations: they cannot capture human voices or many of the more subtle timbres, dynamics, and expressivity that are essential to music. A different approach is to model music directly as raw audio. Generating music at the audio level is challenging since the sequences are very long. A typical 4-minute song at CD quality (44 kHz, 16-bit) has over 10 million timesteps. For comparison, GPT-2 had 1,000 timesteps and OpenAI Five took tens of thousands of timesteps per game. Thus, to learn the high-level semantics of music, a model would have to deal with extremely long-range dependencies.

One way of addressing the long-input problem is to use an autoencoder that compresses raw audio to a lower-dimensional space by discarding some of the perceptually irrelevant bits of information. We can then train a model to generate audio in this compressed space, and upsample back to the raw audio space.

We chose to work on music because we want to continue to push the boundaries of generative models. Our previous work on MuseNet explored synthesizing music based on large amounts of MIDI data.
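To make the piano-roll representation mentioned above concrete, here is a minimal sketch in Python. The array dimensions and the note placed in it are illustrative assumptions, not values from any particular system:

```python
import numpy as np

# Hypothetical dimensions, chosen only for illustration.
N_INSTRUMENTS = 4     # e.g. piano, bass, drums, strings
N_PITCHES = 128       # MIDI pitch range
N_STEPS = 16 * 4 * 8  # 16th-note grid, 4 beats per bar, 8 bars

# A piano roll is a grid: roll[instrument, pitch, step] holds the
# velocity (0 = silent) of a note sounding at that time step.
roll = np.zeros((N_INSTRUMENTS, N_PITCHES, N_STEPS), dtype=np.uint8)

# Place a middle C (MIDI pitch 60) on instrument 0, lasting 4 steps,
# at velocity 96. Timing, pitch, velocity, and instrument are all
# explicit in the indices and the stored value.
roll[0, 60, 0:4] = 96

print(roll.shape, roll.sum())
```

A symbolic generator models sequences of such grids (or equivalent note-event lists); it never sees the underlying audio waveform, which is exactly why it cannot represent voices or subtle timbre.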
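The sequence-length claim is easy to verify with back-of-the-envelope arithmetic; the snippet below simply multiplies the CD sample rate by the song duration:

```python
# CD audio is sampled at 44.1 kHz (rounded to 44 kHz in the text),
# with each timestep being one 16-bit sample per channel.
SAMPLE_RATE = 44_100   # samples per second
SONG_SECONDS = 4 * 60  # a typical 4-minute song

timesteps = SAMPLE_RATE * SONG_SECONDS
print(f"{timesteps:,} timesteps per channel")  # 10,584,000
```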
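To illustrate the compress-then-generate idea, here is a minimal sketch of a convolutional autoencoder over raw audio, assuming PyTorch. The `AudioAutoencoder` name, the layer widths, and the 64x compression ratio are all illustrative choices; real systems typically also quantize the compressed space into discrete codes, which this sketch omits for brevity:

```python
import torch
import torch.nn as nn

class AudioAutoencoder(nn.Module):
    """Compress raw audio 64x along time, then reconstruct it."""

    def __init__(self, width: int = 32):
        super().__init__()
        # Encoder: three stride-4 convolutions -> 4**3 = 64x fewer timesteps.
        self.encoder = nn.Sequential(
            nn.Conv1d(1, width, kernel_size=8, stride=4, padding=2), nn.ReLU(),
            nn.Conv1d(width, width, kernel_size=8, stride=4, padding=2), nn.ReLU(),
            nn.Conv1d(width, width, kernel_size=8, stride=4, padding=2),
        )
        # Decoder: mirror the encoder with transposed convolutions,
        # upsampling the compressed sequence back to the raw audio length.
        self.decoder = nn.Sequential(
            nn.ConvTranspose1d(width, width, kernel_size=8, stride=4, padding=2), nn.ReLU(),
            nn.ConvTranspose1d(width, width, kernel_size=8, stride=4, padding=2), nn.ReLU(),
            nn.ConvTranspose1d(width, 1, kernel_size=8, stride=4, padding=2),
        )

    def forward(self, audio: torch.Tensor) -> torch.Tensor:
        codes = self.encoder(audio)  # (batch, width, T / 64)
        return self.decoder(codes)   # (batch, 1, T)

# Roughly one second of mono audio, trimmed to a multiple of 64 samples.
x = torch.randn(1, 1, 44_096)
model = AudioAutoencoder()
print(model.encoder(x).shape, model(x).shape)  # 689 compressed steps -> 44,096
```

In this setup the autoencoder is trained with a reconstruction loss, a separate sequence model is then trained over the compressed representation, and its samples are decoded back to waveforms, so the sequence model has to handle 64x fewer timesteps than the raw audio.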