Week 1 – Generative Music – project

The project I found is called Performance RNN, by Ian Simon and Sageev Oore. It is posted on Magenta, and I came across it while reading Kyle McDonald's article Neural Nets for Generating Music.

As described by its creators, Performance RNN is "an LSTM-based recurrent neural network designed to model polyphonic music with expressive timing and dynamics."

Basically, as far as I understood the project, all the sounds (notes) are pre-made; the system itself does not create the original sounds. Instead, via a stream of MIDI events, the system generates the expressive timing and dynamics of those notes.
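To make that separation concrete for myself, here is a tiny Python sketch, my own illustration rather than anything from Magenta's code: the model only decides note events (pitch, loudness, start and end times), those get written into a MIDI file, and an external synthesizer supplies the actual piano sound. The notes below are made up, and pretty_midi is simply the library I would reach for to write the file.

import pretty_midi

# Made-up notes standing in for what the model might decide:
# (pitch, velocity, start, end) -- the loudness and timing are the "performance".
notes = [(60, 80, 0.00, 0.48),
         (64, 64, 0.45, 0.90),
         (67, 96, 0.92, 1.60)]

pm = pretty_midi.PrettyMIDI()
piano = pretty_midi.Instrument(program=0)  # General MIDI program 0: Acoustic Grand Piano
for pitch, velocity, start, end in notes:
    piano.notes.append(pretty_midi.Note(velocity=velocity, pitch=pitch,
                                        start=start, end=end))
pm.instruments.append(piano)
pm.write('performance_sketch.mid')  # any MIDI synth supplies the actual sound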

This matters because, a lot of the time, when a system creates generative music there is a lack of performance in it (“with all notes at the same volume and quantized”). That performative quality comes from manipulating the speed of a note, the space between notes, and how hard each note is struck.

Performance RNN therefore uses note-on and note-off events, together with velocity and time-shift events, to define the pitch, the loudness, and the timing of each note, and in that sense it generates music pieces that are more emotional and performative.
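As far as I understand from the Magenta write-up, the event vocabulary has 128 note-on events, 128 note-off events, 32 velocity bins and 100 time-shift steps of 10 ms each. Below is a rough sketch, in my own code rather than Magenta's, of how such an event stream could be decoded back into notes with a pitch, a loudness and a start/end time. The example stream itself is invented, not real model output, and the velocity-bin mapping is just my rough guess.

NOTE_ON, NOTE_OFF, VELOCITY, TIME_SHIFT = 'NOTE_ON', 'NOTE_OFF', 'VELOCITY', 'TIME_SHIFT'

# An invented event stream of the kind the model emits (not actual output).
events = [
    (VELOCITY, 20),    # choose a loudness bin (out of 32)
    (NOTE_ON, 60),     # start middle C
    (TIME_SHIFT, 48),  # move time forward 48 * 10 ms = 0.48 s
    (NOTE_OFF, 60),    # release middle C
    (VELOCITY, 12),
    (NOTE_ON, 64),
    (TIME_SHIFT, 50),
    (NOTE_OFF, 64),
]

def decode(events, seconds_per_step=0.01, velocity_bins=32):
    # Walk through the stream, tracking the current time and loudness,
    # and turn it into (pitch, velocity, start, end) notes.
    now, velocity, active, notes = 0.0, 64, {}, []
    for kind, value in events:
        if kind == TIME_SHIFT:
            now += value * seconds_per_step
        elif kind == VELOCITY:
            velocity = int(value * 127 / velocity_bins)  # bin index -> MIDI velocity (my rough mapping)
        elif kind == NOTE_ON:
            active[value] = (now, velocity)
        elif kind == NOTE_OFF and value in active:
            start, vel = active.pop(value)
            notes.append((value, vel, start, now))
    return notes

print(decode(events))
# -> [(60, 79, 0.0, 0.48), (64, 47, 0.48, 0.98)], up to float rounding

Those (pitch, velocity, start, end) tuples are exactly what the earlier pretty_midi sketch writes into a MIDI file, which is how the loudness and the non-quantized timing end up in the playback.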
