## Learning objectives

In machine learning, data are represented and manipulated chiefly through matrix operations.

Implementing, studying, and evaluating a model therefore requires linear algebra and optimisation methods.

Consequently, the objectives of this course will be:

- To understand and master the mathematical tools and methods useful for implementing machine learning models and algorithms.
- To know how to reduce data dimensions to facilitate visualization on one hand and to optimize the analysis of high-dimensional data on the other.
- To move from the theory of data analysis methods (PCA, SVD, etc.) to their implementation and programming in Python.

## Description of the programme

**Linear Algebra**

This section aims to provide a brief overview of the mathematical tools required for the manipulation, visualisation and analysis of data, as well as the identification of their most significant components. It encompasses solving linear problems and applying matrix factorisation techniques.

*Content*

- A review of the matrix operations required to solve an inverse problem (norms, inversion, diagonalisation, etc.)
- Factorisation and dimension-reduction methods useful in machine learning (LU decomposition, SVD, QR, etc.)
- Exercises
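As an illustration of these tools, here is a minimal NumPy sketch; the matrix `A` and vector `b` are arbitrary toy values, not course material. It solves a small linear system, diagonalises a symmetric matrix, and reconstructs it from its SVD.

```python
import numpy as np

# Toy symmetric positive-definite system (illustrative values).
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])

# Inverse problem: solve A x = b (prefer solve() to computing inv(A)).
x = np.linalg.solve(A, b)
print(np.allclose(A @ x, b))  # True

# Diagonalisation of the symmetric matrix A.
eigvals, eigvecs = np.linalg.eigh(A)

# Factorisation: SVD, A = U diag(s) Vt, then reconstruction.
U, s, Vt = np.linalg.svd(A)
A_rebuilt = U @ np.diag(s) @ Vt
print(np.allclose(A_rebuilt, A))  # True

# Norms: the 2-norm of the residual is numerically zero.
print(np.linalg.norm(A @ x - b) < 1e-12)  # True
```

In practice, `np.linalg.solve` is preferred over explicitly forming the inverse of `A`, both for speed and numerical stability.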

**Probability and statistics**

The aim of this section is to review the fundamental concepts of measure theory, probability theory, and statistics in order to understand the key mathematical issues involved in statistical learning.

*Content*

- The basics of measure theory
- Probability calculations, conditional probability, variance, standard deviation, covariance, correlation
- Time series, ARMA models
- The curse of dimensionality and concentration issues
- Principal component analysis (PCA)
- Maximum likelihood estimator
- Exercises
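The PCA item above can be sketched in a few lines of NumPy. The data below are a synthetic correlated point cloud (an illustrative assumption), and the decomposition diagonalises the empirical covariance matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy 2-D data with strong correlation along one direction.
X = rng.normal(size=(200, 2)) @ np.array([[3.0, 0.0], [1.0, 0.5]])

# Centre the data, then diagonalise the empirical covariance matrix.
Xc = X - X.mean(axis=0)
cov = Xc.T @ Xc / (len(Xc) - 1)
eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order

# Keep the principal axis (largest eigenvalue) and project onto it.
pc1 = eigvecs[:, -1]
scores = Xc @ pc1

# Share of the total variance explained by the first component.
explained = eigvals[-1] / eigvals.sum()
print(round(explained, 3))
```

By construction, the variance of the projected scores equals the leading eigenvalue of the covariance matrix.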

**Optimisation**

This section aims to provide a foundation in the mathematical tools used in parametric statistical optimisation, with particular emphasis on the gradient descent technique.

*Content*

- Function study: differentiability, convexity, concavity
- The gradient descent algorithm
- Implementation of the gradient descent algorithm in Python
- Exercises
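A minimal pure-Python sketch of the gradient descent algorithm, applied here to the toy convex function f(x) = (x − 3)², an illustrative choice rather than course material:

```python
def gradient_descent(grad, x0, lr=0.1, n_steps=100):
    """Plain gradient descent: repeatedly step against the gradient."""
    x = x0
    for _ in range(n_steps):
        x = x - lr * grad(x)
    return x

# Convex example: f(x) = (x - 3)**2, whose gradient is 2*(x - 3).
x_min = gradient_descent(lambda x: 2 * (x - 3), x0=0.0)
print(x_min)  # ≈ 3.0, the minimiser of f
```

Each iteration shrinks the distance to the minimiser by a constant factor (here 0.8), so the iterates converge linearly for this quadratic objective.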

**Applications and examples**

Examples applying the material from the previous sections to:

- Inverse problems
- Linear regression
- Least squares estimators
- Multivariate linear regression
- The Yule-Walker estimator for ARMA models
- Applications of the gradient descent algorithm
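As a sketch of the least squares estimator for linear regression, assuming synthetic data generated from the toy model y = 2x + 1 plus small Gaussian noise:

```python
import numpy as np

# Synthetic data from the illustrative model y = 2x + 1 (plus noise).
rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 50)
y = 2 * x + 1 + 0.01 * rng.normal(size=x.size)

# Design matrix with an intercept column; the least squares estimator
# minimises ||X beta - y||^2, computed here via np.linalg.lstsq.
X = np.column_stack([np.ones_like(x), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta)  # ≈ [1.0, 2.0]: intercept and slope recovered
```

The same estimator can be obtained by solving the normal equations XᵀX β = Xᵀy, which connects this example back to the inverse problems above.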

## Generic central skills and knowledge targeted in the discipline

With the Mathematics for AI course, students will be able to:

- study the descriptive statistics of data (mean, standard deviation, variance, correlation, covariance)

- decompose high-dimensional data and project them into a smaller space (reduce dimensions using SVD, PCA, etc.)

- minimise an objective function in order to estimate a solution to an ill-posed inverse problem (linear regression, multivariate linear regression)

- program optimisation algorithms (e.g. the gradient descent algorithm)
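The dimension-reduction skill above can be sketched with a truncated SVD; the matrix below is a synthetic, approximately rank-3 example (an illustrative assumption):

```python
import numpy as np

rng = np.random.default_rng(2)
# A 100 x 20 matrix that is approximately rank 3, plus small noise.
M = rng.normal(size=(100, 3)) @ rng.normal(size=(3, 20))
M += 1e-3 * rng.normal(size=M.shape)

# Truncated SVD: keep the k leading singular triplets.
k = 3
U, s, Vt = np.linalg.svd(M, full_matrices=False)
M_k = (U[:, :k] * s[:k]) @ Vt[:k]   # best rank-k approximation
coords = M @ Vt[:k].T               # data projected into k dimensions

# Relative reconstruction error of the rank-k approximation.
err = np.linalg.norm(M - M_k) / np.linalg.norm(M)
print(coords.shape, round(err, 6))
```

Because the data are nearly rank 3, the rank-3 approximation loses almost nothing while reducing each sample from 20 coordinates to 3.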

## How knowledge is tested

- Exercises and implementation of algorithms in Python at the end of each session.

## Bibliography

- *Algèbre Linéaire* – Mansuy, Mneimné
- *Algèbre Linéaire* – Grifone
- *Algèbre* – Gourdon
- *Modélisation stochastique et simulation* – Bercu, Chafaï
- *Probabilités 1, 2* – Ouvrard (difficult)
- *Probabilités* – Barbé, Ledoux
- *Learning Theory from First Principles*, course slides – Bach
- *Optimisation et analyse convexe* – Hiriart-Urruty

## Teaching team

- Mira SHEVCHENKO

- Total hours of teaching: 22h