Deep Learning

Objectives

- Understand the design principles of neural networks;

- Understand the concept of activation function;

- Understand the backpropagation algorithm for training a neural network;

- Be able to build a neural network to solve classification tasks;

- Be able to use Keras or similar libraries to build a neural network (a minimal sketch follows this list);

- Understand the convolution operator and the idea behind convolutional neural networks;

- Understand the main principles of recurrent neural networks;

- Understand LSTMs and how they can be applied to counteract the vanishing gradient problem;

- Be able to apply one of the deep models presented to solve classification or regression tasks.
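As a hedged illustration of the Keras objective above, the minimal sketch below builds and trains a small multi-layer perceptron for a binary classification task. The synthetic data, layer sizes, and hyperparameters are assumptions made only for this example; they are not prescribed by the course.

# Minimal sketch (assumed setup, not part of the syllabus): a small
# multi-layer perceptron for binary classification, built with Keras.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Toy data: 1,000 random 20-dimensional points with a synthetic label.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20)).astype("float32")
y = (X.sum(axis=1) > 0).astype("float32")

# Two hidden layers with ReLU activations, dropout for regularization,
# and a sigmoid output unit for binary classification.
model = keras.Sequential([
    layers.Input(shape=(20,)),
    layers.Dense(64, activation="relu"),
    layers.Dropout(0.2),
    layers.Dense(32, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])

# Gradient-based optimization of the cross-entropy loss; gradients are
# computed by backpropagation.
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=5, batch_size=32, validation_split=0.2, verbose=0)
print(model.evaluate(X, y, verbose=0))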

General characterization

Code

200180

Credits

3.5

Responsible teacher

Mauro Castelli

Hours

Weekly - Available soon

Total - Available soon

Teaching language

Portuguese. If there are Erasmus students, classes will be taught in English.

Prerequisites

N/A

Bibliography

Deep Learning. Ian Goodfellow, Yoshua Bengio, Aaron Courville. MIT Press, 2016.

Teaching method

Theoretical and practical classes.

Evaluation method

First epoch: deep learning project.

Second epoch: deep learning project.

Subject matter

Introduction to deep learning

History and cognitive basis of neural computation.

The perceptron / multi-layer perceptron

The neural net as a universal approximator

Optimization by gradient descent

Backpropagation

Overfitting and regularization (Dropout)

Convolutional Neural Networks (CNNs)

Training with shared parameters: the convolutional model
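As a hedged illustration of parameter sharing in the convolutional model, the sketch below defines a small convolutional network for 28x28 grayscale images. The random input data, layer sizes, and filter counts are assumptions made only for this example.

# Minimal sketch (assumed toy data, not part of the syllabus): a small
# convolutional network for 28x28 grayscale image classification.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Random images standing in for a real dataset such as MNIST.
rng = np.random.default_rng(0)
X = rng.random((512, 28, 28, 1)).astype("float32")
y = rng.integers(0, 10, size=(512,))

# Convolutional layers share their filter weights across all spatial
# positions ("training with shared parameters"), greatly reducing the
# number of trainable weights compared with a fully connected layer.
model = keras.Sequential([
    layers.Input(shape=(28, 28, 1)),
    layers.Conv2D(16, kernel_size=3, activation="relu"),
    layers.MaxPooling2D(pool_size=2),
    layers.Conv2D(32, kernel_size=3, activation="relu"),
    layers.MaxPooling2D(pool_size=2),
    layers.Flatten(),
    layers.Dense(10, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=1, batch_size=64, verbose=0)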

Recurrent Neural Networks (RNNs)

Exploding/vanishing gradients

Long Short-Term Memory units (LSTMs)
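As a hedged illustration of the last topic, the sketch below applies a Keras LSTM to a toy sequence-classification task. The sequence length, the task itself, and the layer sizes are assumptions made only for this example.

# Minimal sketch (assumed toy task, not part of the syllabus): an LSTM
# that classifies whether a binary sequence contains more ones than zeros.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Toy data: 2,000 binary sequences of length 30, one feature per step.
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(2000, 30, 1)).astype("float32")
y = (X.sum(axis=(1, 2)) > 15).astype("float32")

# The LSTM's gating mechanism (input, forget, and output gates) helps
# mitigate vanishing gradients across the 30 time steps.
model = keras.Sequential([
    layers.Input(shape=(30, 1)),
    layers.LSTM(16),
    layers.Dense(1, activation="sigmoid"),
])

model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=3, batch_size=64, validation_split=0.2, verbose=0)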

Programs

Programs where the course is taught: