====== Curriculum ======
Mandatory:
  * Refresher in Statistics (APM_51438_EP,
1 course among:
  * Refresher in Computer Graphics (CSC_51438_EP,
  * Refresher in Computer Science (CSC_51440_EP,
**APM_53440_EP - Advanced unsupervised learning (24h, 2 ECTS), Pierre Latouche (UCA)** (contact: pierre.latouche@uca.fr)
> This course aims at presenting advanced models and methods from computational statistics and machine learning for unsupervised learning, through the context of directed graphical models. Both the frequentist and Bayesian frameworks will be covered. In particular, we will study clustering methods, focusing on mixture models and on the expectation-maximisation (EM) algorithm for inference. The standard model selection criteria will be derived and we will illustrate their use to estimate model complexity. After presenting the theory of directed graphical models, we will show on a series of examples how classical models from machine learning can be characterized in this framework. We will show how all the standard loss functions in machine learning can be linked to specific directed graphical models, giving rise to new ways to think about problems and to solve them. We will particularly insist on the interest of relying on directed graphical models to evaluate the complexity of the inference task for various models. The variational framework will be presented along with the variational EM and variational Bayes EM algorithms. The inference task of the stochastic block model, with a two-to-one dependency, will be described.

> The second part of the course will be about the use of deep neural networks in directed graphical models to obtain so-called deep graphical models. Variational autoencoders and GANs will be presented in this context, along with the corresponding inference strategies. We will finally show on a given example how the main model for social network analysis can be rewritten and extended with deep graphical models. We will point out the modifications in terms of directed graphical models, inference, and applications. In particular, we will describe the use of graph neural networks. Lectures will combine slides with proofs on the blackboard. In labs, we will implement and test all the approaches seen in the lectures.
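To give a flavour of the lab work, here is a minimal illustrative sketch (not part of the course materials; scikit-learn is an assumed dependency) of two topics from the abstract: EM inference for a Gaussian mixture, which runs inside ''GaussianMixture.fit'', and a model selection criterion (BIC) used to estimate the model complexity.

<code python>
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Toy data: two well-separated Gaussian clusters in 2D.
X = np.vstack([
    rng.normal(loc=-2.0, scale=0.5, size=(200, 2)),
    rng.normal(loc=2.0, scale=0.5, size=(200, 2)),
])

# Fit mixtures of increasing complexity; EM runs inside .fit().
bic = {}
for k in range(1, 6):
    gmm = GaussianMixture(n_components=k, random_state=0).fit(X)
    bic[k] = gmm.bic(X)  # lower BIC = better fit/complexity trade-off

best_k = min(bic, key=bic.get)
print(f"BIC selects {best_k} components")  # expected: 2 on this toy data
</code>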
**APM_53441_EP - From Boosting to Foundation Models: learning
> This course offers a modern overview of statistical learning with tabular data, from classical tree-based models to emerging deep and foundation models. We will review the foundations of gradient boosting methods and their scalable implementations,
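For a concrete sense of the gradient boosting foundations mentioned above, here is a minimal illustrative sketch (not course material; scikit-learn supplies the base trees as an assumed dependency): each shallow tree is fitted to the residuals of the current ensemble, which are the negative gradients of the squared loss.

<code python>
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
# Toy regression problem: noisy sine wave.
X = rng.uniform(-3.0, 3.0, size=(500, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=0.1, size=500)

learning_rate = 0.1
trees = []
pred = np.full_like(y, y.mean())  # start from the constant model
for _ in range(100):
    residuals = y - pred  # negative gradient of the squared loss
    tree = DecisionTreeRegressor(max_depth=2).fit(X, residuals)
    trees.append(tree)
    pred += learning_rate * tree.predict(X)

print("train MSE:", float(np.mean((y - pred) ** 2)))
</code>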
==== Period 3 ====
INT_54490_EP