Join us on December 4, 2019 for our next meetup in Zurich.
Let’s talk about deep learning and what you can do with it! In particular, we’ll tackle LSTM units and how to use them to generate free text. The first talk, by Kathrin Melcher, will give an introduction to recurrent neural networks and LSTM units. The second talk, by Rosaria Silipo, will show a practical application: free text generation.
- 18:00 - Registration & Welcome
- 18:30 - Introduction to LSTM units in Deep Learning Architectures
- 19:15 - Yo! AI Generated Rap Songs
- 20:00 - Networking
Introduction to LSTM units in Deep Learning Architectures - Kathrin Melcher
LSTM units in deep learning architectures are the state of the art for sequence analysis. In this presentation we’ll find out what recurrent neural networks are and how they are trained, what LSTM units are, and how they can remember or forget the past. We’ll also learn the difference between many-to-one, many-to-many, and one-to-many neural architectures.
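To make the "remember or forget" idea concrete, here is a minimal sketch of a single LSTM step in plain Python. This is not the presenter's code, just an illustration: it uses scalar states and a hypothetical weight layout `w[gate] = (input weight, recurrent weight, bias)`, whereas real networks use vectors and matrices.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def lstm_step(x, h, c, w):
    """One LSTM step with scalar input x, hidden state h, cell state c.

    w maps each gate name to (input weight, recurrent weight, bias).
    """
    f = sigmoid(w["f"][0] * x + w["f"][1] * h + w["f"][2])    # forget gate: how much past to keep
    i = sigmoid(w["i"][0] * x + w["i"][1] * h + w["i"][2])    # input gate: how much new to admit
    o = sigmoid(w["o"][0] * x + w["o"][1] * h + w["o"][2])    # output gate: how much state to expose
    g = math.tanh(w["g"][0] * x + w["g"][1] * h + w["g"][2])  # candidate new content
    c_new = f * c + i * g          # cell state blends remembered past and new input
    h_new = o * math.tanh(c_new)   # hidden state passed on to the next step
    return h_new, c_new
```

Feeding a whole sequence through `lstm_step` and keeping only the final hidden state is the many-to-one setup; emitting an output at every step gives many-to-many.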
Yo! AI Generated Rap Songs - Rosaria Silipo
This post is about generating free text
with a deep learning network
particularly it is about Brick X6,
make you feel soom the way
I probably make
More money in six months,
Than what's in your papa's safe
Look like I robbed a bank …
You’d think I can rap. I cannot. The rap song above was written by my rap-trained, LSTM-based recurrent neural network. We’ll show you how to prepare the data and how to build, train, and deploy a deep learning LSTM-based recurrent neural network for free text generation. Once built and trained on an appropriate training set, the network can be used to generate other kinds of free text, for example Shakespearean text.
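The generation step of such a network is a simple loop: feed the text so far to the trained model, get a probability distribution over the next character, sample one, append it, and repeat. Here is a minimal sketch of that loop in plain Python; `predict` is a hypothetical stand-in for the trained network, and the three-character vocabulary is purely illustrative.

```python
import random

def sample_next(probs, temperature=1.0):
    """Sample an index from a probability distribution.

    Lower temperature makes sampling more conservative,
    higher temperature makes it more surprising.
    """
    scaled = [p ** (1.0 / temperature) for p in probs]
    total = sum(scaled)
    r = random.random() * total
    acc = 0.0
    for idx, p in enumerate(scaled):
        acc += p
        if acc >= r:
            return idx
    return len(probs) - 1

def generate(seed, predict, n_chars, vocab="ab ", temperature=1.0):
    """Extend `seed` one character at a time.

    `predict(text)` stands in for the trained network: it returns
    next-character probabilities over `vocab`.
    """
    text = seed
    for _ in range(n_chars):
        probs = predict(text)
        text += vocab[sample_next(probs, temperature)]
    return text
```

Swapping the training corpus (rap lyrics vs. Shakespeare) changes what `predict` learns, while this sampling loop stays exactly the same.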