20605 - MACHINE LEARNING II
Course taught in English
Class group(s): 31
Synchronous Blended: Lessons in synchronous mode in the classroom (for a maximum of one hour per credit in remote mode)
Solid knowledge of calculus, linear algebra and probability theory. Good understanding of basic statistical and machine learning tools (e.g. simple and multiple regression, likelihood-based inference, optimization). Intermediate-level programming skills (e.g. Python, R or Matlab).
The course introduces students to frontier research topics in Statistics and Machine Learning. Students are exposed to state-of-the-art methodologies for inference and prediction and are trained to develop a principled and thoughtful approach to Machine Learning. The first part of the course deals with Bayesian Nonparametric theory: we start with foundational and theoretical issues, overview popular models, investigate their inferential implications, and showcase computational methodologies and popular application areas. The second part focuses on modern applied machine learning, presenting recent developments and breakthroughs in artificial neural networks, deep learning and reinforcement learning. An introduction to PyTorch is also provided.
- Foundations of Bayesian Nonparametrics: exchangeability and de Finetti’s representation theorem.
- Nonparametric priors: definition, distributional properties and Bayesian nonparametric models.
- Computational methodologies and sampling algorithms for Bayesian Nonparametrics.
- Bayesian unsupervised learning and computation: species sampling and mixture models, topic modeling in document analysis, probabilistic matrix factorization and tensor decomposition in networks and recommender systems.
- Supervised classification: Vapnik-Chervonenkis dimension, multi-layer perceptrons as universal approximators, regularized cross-entropy minimization as maximum a posteriori inference.
- Training: first-order methods, reverse-mode differentiation, initialization, covariate shift.
- Regularization and data augmentation: L2 penalties on weights and logits, dropout and stochastic depth, local entropy, mixup.
- Introduction to PyTorch and computational aspects.
- Specialized models and beyond: convolutional networks, recurrent networks, attention mechanisms; generative models; reinforcement learning.
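Several of the topics above (regularized cross-entropy minimization, first-order methods, reverse-mode differentiation) can be illustrated together in a few lines. The following is a minimal from-scratch sketch in NumPy rather than PyTorch, so that the backward pass is written out explicitly; the toy dataset, architecture and hyperparameters are illustrative choices, not part of the course material.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary classification data (illustrative only): XOR-like labels.
X = rng.normal(size=(200, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(float)

# One-hidden-layer perceptron: 2 -> 16 -> 1.
W1 = rng.normal(scale=0.5, size=(2, 16)); b1 = np.zeros(16)
W2 = rng.normal(scale=0.5, size=(16, 1)); b2 = np.zeros(1)

def forward(X):
    h = np.tanh(X @ W1 + b1)                # hidden activations
    p = 1 / (1 + np.exp(-(h @ W2 + b2)))    # sigmoid output probability
    return h, p.ravel()

def loss(p, y, lam):
    # Regularized cross-entropy: negative log-likelihood + L2 penalty on weights
    # (the MAP-inference view of regularization mentioned in the contents).
    ce = -np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
    return ce + lam * (np.sum(W1**2) + np.sum(W2**2))

lam, lr = 1e-4, 0.5
losses = []
for step in range(500):
    h, p = forward(X)
    losses.append(loss(p, y, lam))
    # Reverse-mode differentiation, written by hand:
    g = (p - y)[:, None] / len(y)           # dL/dlogits for sigmoid + CE
    dW2 = h.T @ g + 2 * lam * W2
    db2 = g.sum(0)
    dh = g @ W2.T * (1 - h**2)              # backprop through tanh
    dW1 = X.T @ dh + 2 * lam * W1
    db1 = dh.sum(0)
    # First-order update: plain gradient descent.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2
```

In PyTorch, the hand-written backward pass is replaced by automatic differentiation (`loss.backward()`), which is part of what the introductory PyTorch sessions cover.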
- Have an overview of cutting-edge statistical and machine learning methods from a theoretical and methodological perspective.
- Understand the assumptions and modeling implications underlying machine learning methodologies.
- Decide which method best fits a given problem.
- Understand the foundations of these methods well enough to explain their implementation step by step.
- Understand the problems and pitfalls of testing and applying these methods.
- Design modern models for a given applied problem using Bayesian nonparametrics, Monte Carlo methods, neural networks, reinforcement learning or boosting.
- Understand the results in terms of the characteristics of the chosen method.
- Face-to-face lectures
- Online lectures
Lectures are delivered both face-to-face and online.
Assessment methods: continuous assessment, partial exams, general exam.
The assessment consists of an individual presentation of a research paper (or book chapter) selected from a list provided by the instructors. The papers relate to the topics of the lectures, and understanding them requires knowledge of the models and methods covered during the course. The presentations are expected to showcase the acquired theoretical and methodological skills and to critically discuss the assumptions, modeling choices and methodologies of the selected paper.
A reading list of papers and book chapters suggested by the instructors is provided at the beginning of the course.