ECML/PKDD 2014 - Deep Learning Tutorial

Presenters: Aaron Courville and Hugo Larochelle

Deep learning is one of the most rapidly growing areas of machine learning. It concerns the learning of multiple layers of representation that gradually transform the input into a form where a given task can be performed more effectively. Deep learning has recently been responsible for an impressive number of state-of-the-art results in a wide array of domains, including object detection and recognition, speech recognition, natural language processing, bioinformatics and reinforcement learning.

In this tutorial we will cover the foundations of deep learning: neural networks, convolutional neural networks, recurrent neural networks, autoencoders and Boltzmann machines. We will discuss why models with many layers of representation can be hard to learn and present strategies that have been developed to overcome these challenges. We will also discuss more recent innovations, including dropout training, which has proved to be an extremely effective regularization technique for neural networks. Finally, we will cover some concrete and successful applications of deep learning.
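As a small taste of the dropout technique mentioned above, here is a minimal NumPy sketch (an illustration, not code from the tutorial) of "inverted" dropout: during training each hidden unit is zeroed with probability p and the survivors are rescaled by 1/(1-p), so that at test time the network can be used unchanged.

```python
import numpy as np

def dropout_forward(h, p=0.5, train=True, rng=None):
    """Inverted dropout on a batch of activations h.

    During training, zero each unit with probability p and rescale
    the survivors by 1/(1-p) so the expected activation matches the
    test-time network; at test time, return h unchanged.
    """
    if not train:
        return h  # full network, no masking, no rescaling
    rng = np.random.default_rng() if rng is None else rng
    mask = (rng.random(h.shape) >= p) / (1.0 - p)
    return h * mask

# Example: apply dropout to a batch of hidden-layer activations
h = np.ones((4, 6))
h_train = dropout_forward(h, p=0.5, train=True)   # roughly half zeros, rest 2.0
h_test = dropout_forward(h, train=False)          # identical to h
```

The rescaling by 1/(1-p) is what keeps the expected value of each activation the same between training and test, so no separate weight-scaling step is needed at test time.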


A preliminary copy of the slides is available here.

  • Feed-forward neural networks
  • Deep neural networks
    • difficulties of training
    • dropout
    • unsupervised pre-training
    • denoising autoencoders
  • Stochastic neural networks
    • restricted Boltzmann machine
    • deep belief network
    • deep Boltzmann machine
  • Applications
    • computer vision: convolutional networks
    • speech recognition
    • natural language processing: recurrent/recursive neural networks
  • Future trends


This tutorial is partly inspired by Hugo Larochelle's online course on neural networks.

References to the papers on which this tutorial is based, and on deep learning in general, can be found on the course's website.

The website also provides a reading list on deep learning.