**CSC_53432_EP - Large Language Models (24h, 2 ECTS), Guokan Shang (MBZUAI)** (contact: guokan.shang@mbzuai.ac.ae)
> This course offers a deep dive into Large Language Models (LLMs), blending essential theory with hands-on labs to develop both practical skills and conceptual understanding—preparing you for roles in LLM development and deployment.
> The curriculum begins with a brief overview of key historical NLP techniques. It then transitions to the transformer architecture, focusing on its attention mechanism and tokenization—the core of modern LLMs. Pre-training objectives such as masked/denoising language modeling and causal language modeling will also be covered, forming the basis for models like BERT, GPT, and T5. The course then examines LLM post-training techniques used to refine pre-trained models, including instruction tuning (SFT), reinforcement learning from human feedback (e.g., PPO/DPO), and reinforcement learning from verifiable rewards (e.g., GRPO). Finally, the course will address LLM applications and future directions—including RAG, agents, multimodality, and alternative model architectures.
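
As a taste of the attention mechanism and causal language modeling mentioned above, here is a minimal, illustrative sketch (not course material; all names, shapes, and values are made up for the example) of scaled dot-product self-attention with a causal mask, written in plain NumPy:

<code python>
import numpy as np

def scaled_dot_product_attention(Q, K, V, causal=True):
    """Minimal single-head scaled dot-product attention (no batching)."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # (seq_len, seq_len) similarity scores
    if causal:
        # Causal mask: position i may only attend to positions j <= i,
        # the constraint used in causal (GPT-style) language modeling.
        mask = np.triu(np.ones_like(scores, dtype=bool), k=1)
        scores = np.where(mask, -np.inf, scores)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over the key positions
    return weights @ V                              # weighted sum of value vectors

# Toy usage: 4 tokens with 8-dimensional embeddings (random, for illustration only).
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
out = scaled_dot_product_attention(x, x, x)         # self-attention: Q = K = V = x
print(out.shape)                                    # (4, 8)
</code>

The causal mask is what restricts each token to attend only to earlier positions, which is the property that causal pre-training objectives rely on.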
  
**CSC_53439_EP - Deep Reinforcement Learning (24h, 2 ECTS), Jesse Read** (contact: jesse.read@polytechnique.edu)
  
**CSC_54434_EP - 3D Computer Vision (24h, 2 ECTS), Xi Wang (EP)** (contact: Xi.Wang@polytechnique.edu)
> This course presents modern 3D computer vision in a clear, step-by-step progression: we begin with classical multi-view reconstruction and structure-from-motion pipelines, then advance to neural implicit representations for novel-view synthesis (e.g., NeRF), proceed to explicit geometry rendering via 3D Gaussian Splatting (3DGS), and finally explore generative 3D models, e.g., 3D generation guided by 2D generative score distillation (DreamFusion-like), data-driven 3D/4D content generation for dynamic scenes and motion, and the latest video generation techniques.
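
To give a concrete flavour of the neural rendering topics listed above, the sketch below (illustrative only, with made-up sample values; it is not the course's code) shows the volume-rendering compositing step used in NeRF-style novel-view synthesis:

<code python>
import numpy as np

def composite_ray(densities, colors, deltas):
    """Alpha-composite samples along one camera ray (NeRF-style volume rendering).

    densities: (N,) non-negative volume densities sigma_i at the N ray samples
    colors:    (N, 3) RGB colors c_i predicted at the samples
    deltas:    (N,) distances between consecutive samples along the ray
    """
    alphas = 1.0 - np.exp(-densities * deltas)             # opacity of each ray segment
    # Transmittance T_i: fraction of light reaching sample i unoccluded,
    # i.e. the product of (1 - alpha_j) over all earlier samples j < i.
    transmittance = np.cumprod(1.0 - alphas + 1e-10)
    transmittance = np.concatenate([[1.0], transmittance[:-1]])
    weights = transmittance * alphas                        # contribution of each sample
    return (weights[:, None] * colors).sum(axis=0)          # final pixel color

# Toy usage with random samples (illustrative only).
rng = np.random.default_rng(0)
n = 64
rgb = composite_ray(rng.uniform(0, 2, n), rng.uniform(0, 1, (n, 3)), np.full(n, 0.05))
print(rgb)   # composited RGB value for this single ray
</code>

The same front-to-back alpha compositing also underlies 3D Gaussian Splatting, where per-sample opacities come from projected Gaussians rather than a density field evaluated along the ray.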
  
**CSC_54443_EP - Soft robots: Design, Modeling, Simulation and Control (24h, 2 ECTS), Christian Duriez (Inria Lille)** (contact: christian.duriez@inria.fr)