curriculum
===== MASTER 1 (M1) =====

Mandatory:
  * Refresher in Statistics (APM_51438_EP,

1 course among:
  * Refresher in Computer Graphics (CSC_51438_EP,
  * Refresher in Computer Science (CSC_51440_EP,

All subsequent M1 courses are 36h and credit 4.5 ECTS. Note that students must choose a Deep Learning course either in Period 1 (CSC_51054_EP) or in Period 2 (APM_52183_EP).
==== Period 1 ====

Mandatory:
  * Computer Animation (CSC_51085_EP, Mathieu Desbrun and Marie-Paule Cani, EP)
  * Image Analysis and Computer Vision (CSC_51073_EP,
  * Machine Learning (MDC_51006_EP,

1 course among:
  * Deep Learning (CSC_51054_EP)
  * Digital Representation and Analysis of Shapes (Recommended option, CSC_51074_EP, Mathieu Desbrun and Pooran Memari, EP & Inria)
  * Signal Processing (Recommended option, APM_51055_EP, Rémi Flamary, EP)
  * Topological Data Analysis (CSC_51056_EP,

Mandatory non-scientific courses:
  * Introduction to Marketing and Strategy (IME_51456_EP,
  * Sport
  * Humanities
==== Period 2 ====

Mandatory:
  * Reinforcement Learning
  * Multimodal Generative AI (CSC_52002_EP, Vicky Kalogeiton, EP)

2 courses among:
  * Deep Learning (Recommended option, APM_52183_EP, Kevin Scaman)
  * Image Synthesis: Theory and Practice (Recommended option, CSC_52084_EP, Tamy Boubekeur, Telecom ParisTech)
  * Statistics in Action
  * Advanced Deep Learning
  * Graph Representation Learning (CSC_52072_EP,
  * Emerging Topics in Machine Learning

Mandatory non-scientific courses:
  * Sport
  * Humanities
==== Period 3 ====

INT_52406_EP
**APM_53440_EP - Advanced unsupervised learning (24h, 2 ECTS), Pierre Latouche (UCA)** (contact: pierre.latouche@uca.fr)

> This course aims at presenting advanced models and methods from computational statistics and machine learning for unsupervised learning, through the context of directed graphical models. Both the frequentist and Bayesian frameworks will be covered. In particular, we will study clustering methods, focusing on mixture models and on the expectation-maximisation algorithm for inference. The standard model selection criteria will be derived and we will illustrate their use to estimate model complexity. After presenting the theory of directed graphical models, we will show on a series of examples how classical models from machine learning can be characterized in this context. We will show how all the standard loss functions in machine learning can be linked to specific directed graphical models, giving rise to new ways to think about problems and to solve them. We will particularly insist on the interest of relying on directed graphical models to evaluate the complexity of the inference task for various models. The variational framework will be presented along with the variational EM and variational Bayes EM algorithms. The inference task of the stochastic block model, with a two-to-one dependency, will be described. The second part of the course will cover the use of deep neural networks in directed graphical models to obtain so-called deep graphical models. Variational autoencoders and GANs will be presented in this context, along with the corresponding inference strategies. We will finally show on a given example how the main model for social network analysis can be rewritten and extended with deep graphical models. We will point out the modifications in terms of directed graphical models, inference, and applications. In particular, we will describe the use of graph neural networks. Lectures will be given via slides and proofs on the blackboard. In labs, we will implement and test all the approaches seen in the lectures.
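To give a flavour of the mixture-model inference described in the abstract, here is a minimal sketch of the expectation-maximisation algorithm for a two-component 1D Gaussian mixture. The setup (two components, median-split initialisation, function name) is an illustrative assumption, not course material:

```python
import math
import random

def em_gmm_1d(xs, iters=50):
    """EM for a two-component 1D Gaussian mixture (toy sketch, not course code)."""
    # Crude initialisation: split the sorted data at its median.
    xs = sorted(xs)
    mid = len(xs) // 2
    mu = [sum(xs[:mid]) / mid, sum(xs[mid:]) / (len(xs) - mid)]
    var = [1.0, 1.0]
    pi = [0.5, 0.5]
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each point.
        resp = []
        for x in xs:
            p = [pi[k] / math.sqrt(2 * math.pi * var[k])
                 * math.exp(-(x - mu[k]) ** 2 / (2 * var[k])) for k in (0, 1)]
            s = p[0] + p[1]
            resp.append([p[0] / s, p[1] / s])
        # M-step: re-estimate weights, means and variances from responsibilities.
        for k in (0, 1):
            nk = sum(r[k] for r in resp)
            pi[k] = nk / len(xs)
            mu[k] = sum(r[k] * x for r, x in zip(resp, xs)) / nk
            var[k] = sum(r[k] * (x - mu[k]) ** 2 for r, x in zip(resp, xs)) / nk
            var[k] = max(var[k], 1e-6)  # guard against variance collapse
    return mu, var, pi

# Toy data: two well-separated Gaussian clusters.
random.seed(0)
data = [random.gauss(-3, 1) for _ in range(200)] + [random.gauss(3, 1) for _ in range(200)]
mu, var, pi = em_gmm_1d(data)
print(sorted(round(m, 1) for m in mu))
```

The same E-step/M-step alternation generalises to the variational EM and variational Bayes EM algorithms the course covers, where the exact posterior is replaced by a tractable approximation.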
**APM_53441_EP - From Boosting to Foundation Models: learning with tabular data**

> This course offers a modern overview of statistical learning with tabular data, from classical tree-based models to emerging deep and foundation models. We will review the foundations of gradient boosting methods and their scalable implementations,
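The gradient-boosting idea mentioned in the abstract can be sketched in a few lines: fit a sequence of weak learners (here, depth-1 regression stumps) to the residuals of the current ensemble, under squared loss. This is an illustrative toy sketch, not the scalable implementations the course reviews:

```python
def fit_stump(xs, residuals):
    """Best depth-1 regression stump (threshold + two leaf means) by squared error."""
    best = None
    for t in sorted(set(xs)):
        left = [r for x, r in zip(xs, residuals) if x <= t]
        right = [r for x, r in zip(xs, residuals) if x > t]
        if not left or not right:
            continue
        lm, rm = sum(left) / len(left), sum(right) / len(right)
        err = sum((r - lm) ** 2 for r in left) + sum((r - rm) ** 2 for r in right)
        if best is None or err < best[0]:
            best = (err, t, lm, rm)
    _, t, lm, rm = best
    return lambda x, t=t, lm=lm, rm=rm: lm if x <= t else rm

def gradient_boost(xs, ys, n_rounds=300, lr=0.1):
    """Gradient boosting for squared loss: each stump fits the current residuals."""
    pred = [0.0] * len(xs)
    stumps = []
    for _ in range(n_rounds):
        # For L2 loss, the negative gradient is simply the residual y - F(x).
        residuals = [y - p for y, p in zip(ys, pred)]
        stump = fit_stump(xs, residuals)
        stumps.append(stump)
        pred = [p + lr * stump(x) for p, x in zip(pred, xs)]
    return lambda x: sum(lr * s(x) for s in stumps)

# Toy 1D regression: fit y = x^2 on a small grid.
xs = [i / 10 for i in range(-20, 21)]
ys = [x * x for x in xs]
model = gradient_boost(xs, ys)
```

Production libraries add the ingredients that make this scale: histogram-based split finding, regularisation, shrinkage schedules, and column subsampling.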
**CSC_54445_EP - Geometric Algorithms for Point Patterns and 2D embedded Structures, Across Applications in Visual Computing (24h, 2 ECTS), Pooran Memari** (contact: pooran.memari@polytechnique.edu) - Elective course

> This course explores the key role of 2D embedded structures, namely discrete distributions or point patterns, in practical applications,
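As a flavour of the point-pattern analysis mentioned above, a common first-order statistic is the mean nearest-neighbour distance of a 2D point set, which discriminates clustered from uniform patterns. The brute-force helper and the toy clustered/uniform data below are illustrative assumptions, not course material:

```python
import math
import random

def mean_nn_distance(points):
    """Mean distance from each point to its nearest neighbour (O(n^2) brute force)."""
    total = 0.0
    for i, p in enumerate(points):
        total += min(math.dist(p, q) for j, q in enumerate(points) if j != i)
    return total / len(points)

random.seed(1)
# Uniform pattern: 200 points drawn uniformly in the unit square.
uniform = [(random.random(), random.random()) for _ in range(200)]
# Clustered pattern: points jittered around a few cluster centres.
centres = [(0.2, 0.2), (0.8, 0.3), (0.5, 0.8)]
clustered = [(cx + random.gauss(0, 0.02), cy + random.gauss(0, 0.02))
             for _ in range(66) for cx, cy in [random.choice(centres)]]
```

Clustered patterns yield markedly smaller nearest-neighbour distances than uniform ones of comparable density; geometric algorithms such as Delaunay triangulations make the same kind of query scale far beyond this quadratic sketch.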
**CSC_54442_EP - Socio-emotional embodied conversational agents (24h, 2 ECTS), Chloé Clavel (Inria Paris) and Brian Ravenet (Université Paris Saclay)** (contacts: chloe.clavel@inria.fr and brian.ravenet@lisn.upsaclay.fr)
==== Period 3 ====

INT_54490_EP
Last modified: 2025/07/17 13:36 by respai-vic