A Keras Model Learns to Play Piano - Training an RNN with Multiple Output Sequences

June 29, 2022 - Online

Can AI generate music? And how good would that music be? Discover with us how this can be done with a low-code approach. Recurrent Neural Networks (RNNs) are a family of neural networks that are especially powerful when working on sequential data. Music, after all, is a sequence of notes. So can RNNs be used to generate music?

In recent years, deep learning architectures have brought us solutions to previously unsolved problems. But with most tools, you still need to overcome the coding barrier. In this webinar, you'll see KNIME's Deep Learning Keras integration in action for defining, training, and deploying deep learning models, all without coding.


Join our webinar to learn about all steps involved in a music generation project and listen to the AI trained by our data scientist Kathrin:

  • Reading and preprocessing MIDI files

  • Defining, training, and applying a deep learning model that predicts a sequence of notes along with their durations and offsets

  • Converting the predicted output into a MIDI file
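The webinar itself builds these steps without code in KNIME, but the preprocessing idea behind the second step can be sketched in plain Python. The sketch below is an illustration, not the workflow shown in the webinar: it assumes each parsed MIDI event has already been reduced to a (pitch, duration, offset) triple, and shows how such events could be turned into sliding-window training examples with three separate targets, one per predicted attribute. The function name and window size are hypothetical.

```python
# Illustrative sketch (assumption): turning a parsed note sequence into
# training windows for a model with three outputs (pitch, duration, offset).
# Each event is a triple: (pitch_name, duration_in_quarters, offset_to_next).

def make_windows(events, window=4):
    """Build sliding windows: each input is `window` consecutive events,
    and the targets are the three attributes of the event that follows."""
    # Map each distinct pitch name to an integer index for the network.
    pitches = sorted({p for p, _, _ in events})
    pitch_to_int = {p: i for i, p in enumerate(pitches)}

    inputs, target_pitch, target_dur, target_off = [], [], [], []
    for i in range(len(events) - window):
        seq = events[i:i + window]
        nxt = events[i + window]
        inputs.append([(pitch_to_int[p], d, o) for p, d, o in seq])
        target_pitch.append(pitch_to_int[nxt[0]])  # categorical target
        target_dur.append(nxt[1])                  # numeric target
        target_off.append(nxt[2])                  # numeric target
    return inputs, (target_pitch, target_dur, target_off), pitch_to_int

# Tiny hypothetical example sequence.
events = [("C4", 1.0, 0.5), ("E4", 0.5, 0.5), ("G4", 0.5, 0.5),
          ("C5", 1.0, 1.0), ("G4", 0.5, 0.5), ("E4", 0.5, 0.5)]
X, (yp, yd, yo), vocab = make_windows(events, window=4)
print(len(X), yp, yd, yo)  # → 2 [3, 2] [0.5, 0.5] [0.5, 0.5]
```

Splitting the target into three parallel lists mirrors the multi-output setup described above: in Keras terms, the pitch would typically get a softmax output and the duration and offset a regression output each.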


How do I join the webinar?

You’ll receive a link with your registration confirmation. Make sure you have a stable internet connection!

Will I be able to ask questions?

Absolutely - fire away!

Where do I find the latest version of KNIME Analytics Platform?

Download the latest free, open-source version of KNIME here: knime.com/download.

What other resources will help me to get started in KNIME?