grandes-ecoles

Papers (191)
2025
centrale-maths1__official 40 centrale-maths2__official 42 mines-ponts-maths1__mp 20 mines-ponts-maths1__pc 21 mines-ponts-maths1__psi 21 mines-ponts-maths2__mp 28 mines-ponts-maths2__pc 24 mines-ponts-maths2__psi 26 polytechnique-maths-a__mp 27 polytechnique-maths__fui 16 polytechnique-maths__pc 27 x-ens-maths-a__mp 18 x-ens-maths-c__mp 9 x-ens-maths-d__mp 38 x-ens-maths__pc 27 x-ens-maths__psi 38
2024
centrale-maths1__official 28 centrale-maths2__official 29 geipi-polytech__maths 9 mines-ponts-maths1__mp 25 mines-ponts-maths1__pc 20 mines-ponts-maths1__psi 19 mines-ponts-maths2__mp 23 mines-ponts-maths2__pc 21 mines-ponts-maths2__psi 21 polytechnique-maths-a__mp 44 polytechnique-maths-b__mp 37 x-ens-maths-a__mp 43 x-ens-maths-b__mp 35 x-ens-maths-c__mp 22 x-ens-maths-d__mp 45 x-ens-maths__pc 24 x-ens-maths__psi 26
2023
centrale-maths1__official 44 centrale-maths2__official 33 e3a-polytech-maths__mp 4 mines-ponts-maths1__mp 15 mines-ponts-maths1__pc 23 mines-ponts-maths1__psi 23 mines-ponts-maths2__mp 22 mines-ponts-maths2__pc 18 mines-ponts-maths2__psi 22 polytechnique-maths__fui 23 x-ens-maths-a__mp 25 x-ens-maths-b__mp 24 x-ens-maths-c__mp 20 x-ens-maths-d__mp 20 x-ens-maths__pc 18 x-ens-maths__psi 15
2022
centrale-maths1__mp 48 centrale-maths1__official 48 centrale-maths1__pc 37 centrale-maths1__psi 43 centrale-maths2__mp 32 centrale-maths2__official 32 centrale-maths2__pc 39 centrale-maths2__psi 45 mines-ponts-maths1__mp 25 mines-ponts-maths1__pc 24 mines-ponts-maths1__psi 24 mines-ponts-maths2__mp 24 mines-ponts-maths2__pc 19 mines-ponts-maths2__psi 20 x-ens-maths-a__mp 13 x-ens-maths-b__mp 40 x-ens-maths-c__mp 27 x-ens-maths-d__mp 46 x-ens-maths1__mp 13 x-ens-maths2__mp 40 x-ens-maths__pc 15 x-ens-maths__pc_cpge 15 x-ens-maths__psi 22 x-ens-maths__psi_cpge 23
2021
centrale-maths1__mp 40 centrale-maths1__official 40 centrale-maths1__pc 36 centrale-maths1__psi 29 centrale-maths2__mp 30 centrale-maths2__official 29 centrale-maths2__pc 38 centrale-maths2__psi 37 x-ens-maths2__mp 39 x-ens-maths__pc 44
2020
centrale-maths1__mp 42 centrale-maths1__official 42 centrale-maths1__pc 36 centrale-maths1__psi 40 centrale-maths2__mp 38 centrale-maths2__official 38 centrale-maths2__pc 40 centrale-maths2__psi 39 mines-ponts-maths1__mp_cpge 24 mines-ponts-maths2__mp_cpge 21 x-ens-maths-a__mp_cpge 18 x-ens-maths-b__mp_cpge 20 x-ens-maths-d__mp 14 x-ens-maths1__mp 18 x-ens-maths2__mp 20 x-ens-maths__pc 18
2019
centrale-maths1__mp 37 centrale-maths1__official 37 centrale-maths1__pc 40 centrale-maths1__psi 39 centrale-maths2__mp 37 centrale-maths2__official 37 centrale-maths2__pc 39 centrale-maths2__psi 49 x-ens-maths1__mp 24 x-ens-maths__pc 18 x-ens-maths__psi 26
2018
centrale-maths1__mp 47 centrale-maths1__official 47 centrale-maths1__pc 41 centrale-maths1__psi 44 centrale-maths2__mp 44 centrale-maths2__official 44 centrale-maths2__pc 35 centrale-maths2__psi 38 x-ens-maths1__mp 19 x-ens-maths2__mp 17 x-ens-maths__pc 22 x-ens-maths__psi 24
2017
centrale-maths1__mp 45 centrale-maths1__official 45 centrale-maths1__pc 22 centrale-maths1__psi 17 centrale-maths2__mp 30 centrale-maths2__official 30 centrale-maths2__pc 28 centrale-maths2__psi 44 x-ens-maths1__mp 26 x-ens-maths2__mp 16 x-ens-maths__pc 18 x-ens-maths__psi 26
2016
centrale-maths1__mp 42 centrale-maths1__pc 31 centrale-maths1__psi 33 centrale-maths2__mp 25 centrale-maths2__pc 47 centrale-maths2__psi 27 x-ens-maths1__mp 18 x-ens-maths2__mp 46 x-ens-maths__pc 15 x-ens-maths__psi 20
2015
centrale-maths1__mp 42 centrale-maths1__pc 18 centrale-maths1__psi 42 centrale-maths2__mp 44 centrale-maths2__pc 18 centrale-maths2__psi 33 x-ens-maths1__mp 16 x-ens-maths2__mp 31 x-ens-maths__pc 30 x-ens-maths__psi 22
2014
centrale-maths1__mp 28 centrale-maths1__pc 26 centrale-maths1__psi 27 centrale-maths2__mp 24 centrale-maths2__pc 26 centrale-maths2__psi 27 x-ens-maths1__mp 9 x-ens-maths2__mp 16 x-ens-maths__pc 4 x-ens-maths__psi 24
2013
centrale-maths1__mp 22 centrale-maths1__pc 45 centrale-maths1__psi 29 centrale-maths2__mp 31 centrale-maths2__pc 52 centrale-maths2__psi 32 x-ens-maths1__mp 24 x-ens-maths2__mp 35 x-ens-maths__pc 22 x-ens-maths__psi 9
2012
centrale-maths1__mp 36 centrale-maths1__pc 28 centrale-maths1__psi 33 centrale-maths2__mp 27 centrale-maths2__psi 18
2011
centrale-maths1__mp 27 centrale-maths1__pc 17 centrale-maths1__psi 24 centrale-maths2__mp 29 centrale-maths2__pc 17 centrale-maths2__psi 10
2010
centrale-maths1__mp 19 centrale-maths1__pc 30 centrale-maths1__psi 13 centrale-maths2__mp 32 centrale-maths2__pc 37 centrale-maths2__psi 27
2023 x-ens-maths__pc

18 maths questions

Q1 Proof: Bounding or Estimation Proof
Let $\mathscr{P}$ be the set of row vectors of size $d$ with non-negative coefficients whose coordinate sum equals 1: $$\mathscr{P} = \left\{ u \in \mathscr{M}_{1,d}\left(\mathbb{R}_{+}\right) : \sum_{j=1}^{d} u_j = 1 \right\}.$$ We consider a square matrix $P \in \mathscr{M}_d\left(\mathbb{R}_{+}\right)$ such that for all $i \in \{1,\ldots,d\}$, $$\sum_{j=1}^{d} P_{i,j} = 1$$ We further assume that there exist $\nu \in \mathscr{P}$ and $c > 0$ such that for all $i,j \in \{1,\ldots,d\}$, $$P_{i,j} \geqslant c\nu_j.$$
Justify that $c \leqslant 1$.
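One way to see the bound: summing the hypothesis $P_{i,j} \geqslant c\nu_j$ over $j$ for a fixed row $i$ gives $1 = \sum_j P_{i,j} \geqslant c \sum_j \nu_j = c$. The sketch below (Python/numpy, with an arbitrary stochastic matrix and an arbitrary $\nu$ chosen purely for illustration) checks this numerically: the largest admissible $c$ is $\min_{i,j} P_{i,j}/\nu_j$, and it never exceeds $1$.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4

# A row-stochastic matrix P with positive entries (illustrative choice).
P = rng.random((d, d)) + 0.1
P /= P.sum(axis=1, keepdims=True)

# An arbitrary nu in the simplex (illustrative choice).
nu = rng.random(d)
nu /= nu.sum()

# Largest c such that P[i, j] >= c * nu[j] for all i, j.
c = np.min(P / nu[None, :])
print(f"largest admissible c = {c:.4f}  (<= 1 as claimed: {c <= 1})")
```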
Q2 Matrices: Matrix Algebra and Product Properties
Let $\mathscr{P}$ be the set of row vectors of size $d$ with non-negative coefficients whose coordinate sum equals 1: $$\mathscr{P} = \left\{ u \in \mathscr{M}_{1,d}\left(\mathbb{R}_{+}\right) : \sum_{j=1}^{d} u_j = 1 \right\}.$$ We consider a square matrix $P \in \mathscr{M}_d\left(\mathbb{R}_{+}\right)$ such that for all $i \in \{1,\ldots,d\}$, $$\sum_{j=1}^{d} P_{i,j} = 1$$ We further assume that there exist $\nu \in \mathscr{P}$ and $c > 0$ such that for all $i,j \in \{1,\ldots,d\}$, $$P_{i,j} \geqslant c\nu_j.$$
Show that if $u \in \mathscr{P}$, then $uP \in \mathscr{P}$.
Q3 Matrices: Matrix Norm, Convergence, and Inequality
Let $\mathscr{P}$ be the set of row vectors of size $d$ with non-negative coefficients whose coordinate sum equals 1: $$\mathscr{P} = \left\{ u \in \mathscr{M}_{1,d}\left(\mathbb{R}_{+}\right) : \sum_{j=1}^{d} u_j = 1 \right\}.$$ We consider a square matrix $P \in \mathscr{M}_d\left(\mathbb{R}_{+}\right)$ such that for all $i \in \{1,\ldots,d\}$, $$\sum_{j=1}^{d} P_{i,j} = 1$$ We further assume that there exist $\nu \in \mathscr{P}$ and $c > 0$ such that for all $i,j \in \{1,\ldots,d\}$, $$P_{i,j} \geqslant c\nu_j.$$
Show that for all $u, v \in \mathscr{P}$, $$\|uP - vP\|_1 \leqslant (1-c)\|u - v\|_1.$$ (One may introduce $R = P - cN$ where $N = (n_{i,j})_{1 \leqslant i,j \leqslant d}$ with $n_{i,j} = \nu_j$ for all $1 \leqslant i,j \leqslant d$.)
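A numerical illustration of this contraction estimate, as a sketch (the matrix, $\nu$, $c$ and the test vectors below are arbitrary choices; a matrix of the form $cN + (1-c)R$ with $R$ row-stochastic satisfies the hypothesis by construction):

```python
import numpy as np

rng = np.random.default_rng(1)
d = 5

# Row-stochastic P with P_ij >= c * nu_j (illustrative construction).
nu = np.full(d, 1.0 / d)
c = 0.3
R = rng.random((d, d))
R /= R.sum(axis=1, keepdims=True)
P = c * np.tile(nu, (d, 1)) + (1 - c) * R   # rows still sum to 1

def simplex(rng, d):
    u = rng.random(d)
    return u / u.sum()

for _ in range(5):
    u, v = simplex(rng, d), simplex(rng, d)
    lhs = np.abs(u @ P - v @ P).sum()
    rhs = (1 - c) * np.abs(u - v).sum()
    print(f"{lhs:.4f} <= {rhs:.4f} : {lhs <= rhs + 1e-12}")
```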
Q6 Matrices: Eigenvalue and Characteristic Polynomial Analysis
Let $\mathscr{P}$ be the set of row vectors of size $d$ with non-negative coefficients whose coordinate sum equals 1: $$\mathscr{P} = \left\{ u \in \mathscr{M}_{1,d}\left(\mathbb{R}_{+}\right) : \sum_{j=1}^{d} u_j = 1 \right\}.$$ We consider a square matrix $P \in \mathscr{M}_d\left(\mathbb{R}_{+}\right)$ such that for all $i \in \{1,\ldots,d\}$, $$\sum_{j=1}^{d} P_{i,j} = 1$$ We further assume that there exist $\nu \in \mathscr{P}$ and $c > 0$ such that for all $i,j \in \{1,\ldots,d\}$, $$P_{i,j} \geqslant c\nu_j.$$
Show that there exists a unique element $\mu$ of $\mathscr{P}$ such that $\mu P = \mu$.
Q7 Matrices: Matrix Power Computation and Application
Let $\mathscr{P}$ be the set of row vectors of size $d$ with non-negative coefficients whose coordinate sum equals 1: $$\mathscr{P} = \left\{ u \in \mathscr{M}_{1,d}\left(\mathbb{R}_{+}\right) : \sum_{j=1}^{d} u_j = 1 \right\}.$$ We consider a square matrix $P \in \mathscr{M}_d\left(\mathbb{R}_{+}\right)$ such that for all $i \in \{1,\ldots,d\}$, $$\sum_{j=1}^{d} P_{i,j} = 1$$ We further assume that there exist $\nu \in \mathscr{P}$ and $c > 0$ such that for all $i,j \in \{1,\ldots,d\}$, $$P_{i,j} \geqslant c\nu_j.$$ Let $\mu$ be the unique element of $\mathscr{P}$ such that $\mu P = \mu$.
Show that for all $n \in \mathbb{N}$ and all $x \in \mathscr{P}$, $$\left\| xP^n - \mu \right\|_1 \leqslant 2(1-c)^n.$$
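A sketch checking the geometric rate numerically, with the same kind of illustrative $P$ as above and $\mu$ approximated by iterating $x \mapsto xP$:

```python
import numpy as np

rng = np.random.default_rng(2)
d = 5
nu = np.full(d, 1.0 / d)
c = 0.3
R = rng.random((d, d)); R /= R.sum(axis=1, keepdims=True)
P = c * np.tile(nu, (d, 1)) + (1 - c) * R      # stochastic, P_ij >= c*nu_j

# Approximate the invariant distribution mu by power iteration.
mu = np.full(d, 1.0 / d)
for _ in range(2000):
    mu = mu @ P

x = np.zeros(d); x[0] = 1.0                    # an arbitrary starting point in P
for n in range(0, 11, 2):
    err = np.abs(x @ np.linalg.matrix_power(P, n) - mu).sum()
    print(f"n={n:2d}  ||xP^n - mu||_1 = {err:.3e}   bound 2(1-c)^n = {2*(1-c)**n:.3e}")
```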
Q8 Matrices: Matrix Entry and Coefficient Identities
Let $M \in \mathscr{M}_d\left(\mathbb{R}_{+}\right)$. We assume that the matrix $M$ has an eigenvalue $\lambda > 0$ and that there exists a column vector $h \in \mathscr{M}_{d,1}\left(\mathbb{R}_{+}^{*}\right)$ such that $$Mh = \lambda h.$$ We also assume that there exist $\nu \in \mathscr{P}$ and $c > 0$ such that for all $i,j \in \{1,\ldots,d\}$, $$M_{i,j} \geqslant c\nu_j.$$ We introduce the matrix $P \in \mathscr{M}_d\left(\mathbb{R}_{+}\right)$ defined for $1 \leqslant i,j \leqslant d$ by $$P_{i,j} = \frac{M_{i,j} h_j}{\lambda h_i}.$$
Justify that for all $i \in \{1,\ldots,d\}$, $\displaystyle\sum_{j=1}^{d} P_{i,j} = 1$.
Q9 Matrices: Matrix Power Computation and Application
Let $M \in \mathscr{M}_d\left(\mathbb{R}_{+}\right)$. We assume that the matrix $M$ has an eigenvalue $\lambda > 0$ and that there exists a column vector $h \in \mathscr{M}_{d,1}\left(\mathbb{R}_{+}^{*}\right)$ such that $$Mh = \lambda h.$$ We also assume that there exist $\nu \in \mathscr{P}$ and $c > 0$ such that for all $i,j \in \{1,\ldots,d\}$, $$M_{i,j} \geqslant c\nu_j.$$ We introduce the matrix $P \in \mathscr{M}_d\left(\mathbb{R}_{+}\right)$ defined for $1 \leqslant i,j \leqslant d$ by $$P_{i,j} = \frac{M_{i,j} h_j}{\lambda h_i}.$$
Let $n \geqslant 1$. Give an expression for the coefficients of $P^n$ in terms of the coefficients of $M^n$, $h$ and $\lambda$.
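Writing $D = \operatorname{diag}(h)$, the definition reads $P = \lambda^{-1} D^{-1} M D$, which suggests the candidate formula $(P^n)_{i,j} = \lambda^{-n} (M^n)_{i,j}\, h_j / h_i$. The sketch below only checks this identity numerically, on an arbitrary positive matrix (illustrative values; numpy's generic eigensolver provides $\lambda$ and $h$):

```python
import numpy as np

rng = np.random.default_rng(3)
d = 4

# An arbitrary positive matrix M (illustrative); Perron-Frobenius gives a
# positive eigenvalue lam with a positive right eigenvector h.
M = rng.random((d, d)) + 0.1
eigvals, eigvecs = np.linalg.eig(M)
k = np.argmax(eigvals.real)
lam = eigvals[k].real
h = np.abs(eigvecs[:, k].real)            # Perron vector, made positive

P = M * h[None, :] / (lam * h[:, None])   # P_ij = M_ij h_j / (lam h_i)

n = 6
lhs = np.linalg.matrix_power(P, n)
rhs = np.linalg.matrix_power(M, n) * h[None, :] / (lam**n * h[:, None])
print("max |(P^n)_ij - lam^-n (M^n)_ij h_j/h_i| =", np.abs(lhs - rhs).max())
```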
Q10 Matrices: Matrix Norm, Convergence, and Inequality
Let $M \in \mathscr{M}_d\left(\mathbb{R}_{+}\right)$. We assume that the matrix $M$ has an eigenvalue $\lambda > 0$ and that there exists a column vector $h \in \mathscr{M}_{d,1}\left(\mathbb{R}_{+}^{*}\right)$ such that $$Mh = \lambda h.$$ We also assume that there exist $\nu \in \mathscr{P}$ and $c > 0$ such that for all $i,j \in \{1,\ldots,d\}$, $$M_{i,j} \geqslant c\nu_j.$$ We introduce the matrix $P \in \mathscr{M}_d\left(\mathbb{R}_{+}\right)$ defined for $1 \leqslant i,j \leqslant d$ by $$P_{i,j} = \frac{M_{i,j} h_j}{\lambda h_i}.$$
(a) Show that there exist $\mu \in \mathscr{P}$, $C > 0$ and $\gamma \in [0,1[$, such that $\mu P = \mu$ and for all $n \geqslant 0$, $$\sum_{i=1}^{d} \sum_{j=1}^{d} \left| \lambda^{-n} \left(M^n\right)_{i,j} - h_i \frac{\mu_j}{h_j} \right| \leqslant C\gamma^n.$$
(b) Prove that there exists a unique $\pi \in \mathscr{P}$ such that $\pi M = \lambda \pi$.
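A numerical sketch of the estimate in (a), with an arbitrary positive matrix $M$ (so the hypothesis $M_{i,j} \geqslant c\nu_j$ holds for some $c > 0$ and $\nu$); $\mu$ is approximated as the invariant distribution of the associated stochastic matrix $P$:

```python
import numpy as np

rng = np.random.default_rng(4)
d = 4
M = rng.random((d, d)) + 0.1               # positive entries (illustrative)

eigvals, eigvecs = np.linalg.eig(M)
k = np.argmax(eigvals.real)
lam = eigvals[k].real
h = np.abs(eigvecs[:, k].real)

P = M * h[None, :] / (lam * h[:, None])    # stochastic (question 8)

mu = np.full(d, 1.0 / d)
for _ in range(5000):                      # invariant distribution of P
    mu = mu @ P

target = h[:, None] * (mu / h)[None, :]    # limit matrix h_i * mu_j / h_j
for n in range(0, 31, 5):
    err = np.abs(np.linalg.matrix_power(M, n) / lam**n - target).sum()
    print(f"n={n:2d}  sum_ij |lam^-n (M^n)_ij - h_i mu_j/h_j| = {err:.3e}")
```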
Q11 Roots of polynomials: Existence or counting of roots with specified properties
Consider $(c_0, \ldots, c_{d-1}) \in \left(\mathbb{R}_{+}^{*}\right)^d$ and let $P$ be the polynomial $$X^d - c_{d-1} X^{d-1} - \cdots - c_1 X - c_0.$$ Show that the polynomial $P$ has a unique root in $\mathbb{R}_{+}^{*}$.
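One way to see this: for $x > 0$, $P(x) = 0$ is equivalent to $\sum_{k=0}^{d-1} c_k x^{k-d} = 1$, and the left-hand side decreases strictly from $+\infty$ to $0$ on $\mathbb{R}_{+}^{*}$. A quick numerical check on arbitrary illustrative coefficients (numpy):

```python
import numpy as np

# Illustrative coefficients c_0, ..., c_{d-1} > 0.
c = np.array([0.5, 1.2, 0.3, 2.0])
d = len(c)

# P = X^d - c_{d-1} X^{d-1} - ... - c_1 X - c_0, highest degree first for np.roots.
coeffs = np.concatenate(([1.0], -c[::-1]))
roots = np.roots(coeffs)

positive_real = [r.real for r in roots if abs(r.imag) < 1e-9 and r.real > 0]
print("all roots:", np.round(roots, 4))
print("positive real roots:", np.round(positive_real, 6))   # exactly one expected
```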
Q12 Invariant lines and eigenvalues and vectors: Compute eigenvalues of a given matrix
Consider $a = (a_1, \ldots, a_d) \in \left(\mathbb{R}_{+}^{*}\right)^d$ and $b = (b_1, \ldots, b_{d-1}) \in \left(\mathbb{R}_{+}^{*}\right)^{d-1}$ and introduce the matrix $$M = \begin{pmatrix} a_1 & b_1 & 0 & \ldots & 0 & 0 \\ a_2 & 0 & b_2 & \ldots & 0 & 0 \\ a_3 & 0 & 0 & \ldots & 0 & 0 \\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\ a_{d-1} & 0 & 0 & \ldots & 0 & b_{d-1} \\ a_d & 0 & 0 & \ldots & 0 & 0 \end{pmatrix}.$$
(a) Justify that there exists a unique pair $(\lambda, \pi) \in \mathbb{R}_{+}^{*} \times \mathscr{P}$ such that $\pi M = \lambda \pi$. Express $\pi$ explicitly in terms of $a$, $b$ and $\lambda$.
(b) Show that there exists a unique $h \in \mathscr{M}_{d,1}\left(\mathbb{R}_{+}^{*}\right)$ such that $\langle \pi, h \rangle = 1$ and $$Mh = \lambda h.$$
(c) Deduce that the sequence $\left(\lambda^{-n} M^n\right)_{n \geqslant 1}$ converges as $n$ tends to infinity and give an expression for its limit in terms of $h$ and $\pi$.
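A numerical sketch for a small instance of this matrix (the values of $a$ and $b$ below are arbitrary): $\lambda$ is taken as the Perron eigenvalue, $\pi$ as the left eigenvector normalised to lie in $\mathscr{P}$, $h$ as the right eigenvector normalised by $\langle \pi, h \rangle = 1$, and $\lambda^{-n} M^n$ is compared with the rank-one matrix $(h_i \pi_j)_{i,j}$:

```python
import numpy as np

# Illustrative positive parameters a (length d) and b (length d-1).
a = np.array([0.6, 0.4, 0.7, 0.5])
b = np.array([1.1, 0.9, 1.3])
d = len(a)

# First column a_1,...,a_d; superdiagonal b_1,...,b_{d-1}; zeros elsewhere.
M = np.zeros((d, d))
M[:, 0] = a
M[np.arange(d - 1), np.arange(1, d)] = b

# Perron data: lam, right eigenvector h, left eigenvector pi normalised in P.
eigvals, right = np.linalg.eig(M)
k = np.argmax(eigvals.real)
lam = eigvals[k].real
h = np.abs(right[:, k].real)

eigvals_t, left = np.linalg.eig(M.T)
kt = np.argmax(eigvals_t.real)
pi = np.abs(left[:, kt].real)
pi /= pi.sum()                  # pi in the simplex, pi M = lam pi
h /= pi @ h                     # enforce <pi, h> = 1

print("lambda =", round(lam, 4), " pi =", np.round(pi, 4), " h =", np.round(h, 4))

n = 60
approx = np.linalg.matrix_power(M, n) / lam**n
print("max |lam^-n (M^n)_ij - h_i pi_j| =", np.abs(approx - np.outer(h, pi)).max())
```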
Q13 Discrete Random Variables: Expectation and Variance of Sums of Independent Variables
We assume that for all $i,j \in \{1,\ldots,d\}$, $N_{i,j}$ is a random variable taking values in $\mathbb{N}$ and such that $N_{i,j}^2$ has finite expectation. For all $i \in \{1,\ldots,d\}$, we introduce the random variable $L_i = (N_{i,1}, \ldots, N_{i,d})$. We consider a family of independent random variables $(L_i^{n,k})_{n \geqslant 1, k \geqslant 1}$ where for all $i$, $n$, $k$, $L_i^{n,k}$ has the same distribution as $L_i$. Let $X_0 = (X_{0,i})_{1 \leqslant i \leqslant d}$ be a random variable taking values in $\mathscr{M}_{1,d}(\mathbb{N})$. We define by recursion for $n \geqslant 0$: $$X_{n+1} = \sum_{i=1}^{d} \sum_{k=1}^{X_{n,i}} L_i^{n,k}.$$ We introduce $M \in \mathscr{M}_d\left(\mathbb{R}_{+}\right)$ defined by $M_{i,j} = \mathbb{E}(N_{i,j})$ and $x_{n,j} = \mathbb{E}(X_{n,j})$.
(a) Show that, for all $y \in \mathscr{M}_{1,d}(\mathbb{N})$ and $1 \leqslant j \leqslant d$, $$\mathbb{E}\left(X_{n+1,j} \mathbf{1}_{X_n = y}\right) = (yM)_j \mathbb{P}(X_n = y).$$ (One may use without proof the fact that the random variables $L_i^{n,k}$ and $\mathbf{1}_{X_n = y}$ are independent.)
(b) Deduce that, for all $n \geqslant 0$, $$x_{n+1} = x_n M.$$
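A Monte Carlo sketch of the mean recursion $x_{n+1} = x_n M$. The offspring law is an illustrative choice (independent Poisson counts with $\mathbb{E}(N_{i,j}) = M_{i,j}$, so that, by Poisson superposition, $X_{n+1}$ given $X_n$ has independent Poisson$\left((X_n M)_j\right)$ components); empirical means of $X_n$ are compared with $x_0 M^n$:

```python
import numpy as np

rng = np.random.default_rng(5)
d = 3
# Illustrative mean matrix M_ij = E(N_ij); offspring counts are taken to be
# independent Poisson(M_ij) variables, a choice made only for this sketch.
M = np.array([[0.5, 0.3, 0.2],
              [0.4, 0.4, 0.3],
              [0.2, 0.5, 0.4]])
x0 = np.array([2.0, 1.0, 0.0])          # deterministic X_0 for simplicity
n_steps, n_trials = 6, 20000

totals = np.zeros((n_steps + 1, d))
for _ in range(n_trials):
    X = x0.copy()
    totals[0] += X
    for n in range(n_steps):
        X = rng.poisson(X @ M).astype(float)   # Poisson superposition step
        totals[n + 1] += X

for n in range(n_steps + 1):
    exact = x0 @ np.linalg.matrix_power(M, n)
    print(f"n={n}  empirical {np.round(totals[n]/n_trials, 3)}  exact {np.round(exact, 3)}")
```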
Q14 Discrete Random Variables: Expectation and Variance of Sums of Independent Variables
Let $\mathscr{I}$ be a finite set and $(Y_i)_{i \in \mathscr{I}}$ be a family of random variables that are pairwise independent, take real values and whose squares have finite expectation. Show that $$\mathbb{E}\left(\left(\sum_{i \in \mathscr{I}} Y_i\right)^2\right) = \left(\sum_{i \in \mathscr{I}} \mathbb{E}(Y_i)\right)^2 + \sum_{i \in \mathscr{I}} \operatorname{Var}(Y_i).$$
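A quick Monte Carlo check of this identity with a small family of independent (hence pairwise independent) variables whose moments are known; the particular distributions are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(6)
n_trials = 400000

# Three independent (hence pairwise independent) variables with known moments.
Y1 = rng.poisson(2.0, n_trials)          # mean 2,   variance 2
Y2 = rng.binomial(5, 0.3, n_trials)      # mean 1.5, variance 1.05
Y3 = rng.normal(-1.0, 0.5, n_trials)     # mean -1,  variance 0.25

S = Y1 + Y2 + Y3
lhs = np.mean(S**2)
rhs = (2.0 + 1.5 - 1.0)**2 + (2.0 + 1.05 + 0.25)
print(f"E[S^2] (Monte Carlo) = {lhs:.4f}   (sum of means)^2 + sum of variances = {rhs:.4f}")
```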
Q15 Discrete Random Variables: Expectation and Variance of Sums of Independent Variables
We use the setup of the third part. For $u \in \mathscr{M}_{d,1}(\mathbb{R})$, we denote by $T(u) = (T_i(u))_{1 \leqslant i \leqslant d} \in \mathbb{R}^d$ the vector defined by $$T_i(u) = \operatorname{Var}\left(\langle L_i, u \rangle\right) \quad \text{for } i \in \{1,\ldots,d\}.$$
(a) Show that for all $u \in \mathscr{M}_{d,1}(\mathbb{R})$, $y \in \mathscr{M}_{1,d}(\mathbb{N})$ and $n \geqslant 0$, $$\mathbb{E}\left(\langle X_{n+1}, u \rangle^2 \mathbf{1}_{X_n = y}\right) = \mathbb{P}(X_n = y)\left(\langle y, Mu \rangle^2 + \langle y, T(u) \rangle\right).$$ (One may use without proof the fact that, for all $n \geqslant 0$, the random variables $\sum_{j=1}^{d} u_j L_{i,j}^{n,k} \mathbf{1}_{X_n = y}$ are pairwise independent when $k$ and $i$ vary.)
(b) Show that for all $u \in \mathscr{M}_{d,1}(\mathbb{R})$ and $n \geqslant 0$, $$\mathbb{E}\left(\langle X_{n+1}, u \rangle^2\right) = \mathbb{E}\left(\langle X_n, Mu \rangle^2\right) + \langle x_0 M^n, T(u) \rangle.$$
Q16 Discrete Random Variables: Expectation and Variance of Sums of Independent Variables
We use the setup of the third part. For $u \in \mathscr{M}_{d,1}(\mathbb{R})$, we denote by $T(u) = (T_i(u))_{1 \leqslant i \leqslant d} \in \mathbb{R}^d$ the vector defined by $$T_i(u) = \operatorname{Var}\left(\langle L_i, u \rangle\right) \quad \text{for } i \in \{1,\ldots,d\}.$$
Show that for all $n \geqslant 0$, $$\mathbb{E}\left(\langle X_n, u \rangle^2\right) = \mathbb{E}\left(\langle X_0, M^n u \rangle^2\right) + \sum_{k=0}^{n-1} \langle x_0 M^k, T\left(M^{n-1-k} u\right) \rangle$$ (with the convention that the sum indexed by $k$ is zero if $n = 0$).
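A Monte Carlo sketch of this formula in the same illustrative Poisson model as before: with independent Poisson$(M_{i,j})$ components, $T_i(u) = \operatorname{Var}\langle L_i, u \rangle = \sum_j u_j^2 M_{i,j}$, and $X_0$ is taken deterministic so that the first term reduces to $\langle X_0, M^n u \rangle^2$:

```python
import numpy as np

rng = np.random.default_rng(7)
d = 3
M = np.array([[0.5, 0.3, 0.2],
              [0.4, 0.4, 0.3],
              [0.2, 0.5, 0.4]])
x0 = np.array([2.0, 1.0, 0.0])           # deterministic X_0, so x_0 = X_0
u = np.array([1.0, -0.5, 2.0])
n = 4
n_trials = 100000

def T(v):
    # Var<L_i, v> for independent Poisson(M_ij) components: sum_j v_j^2 M_ij.
    return M @ (v**2)

# Right-hand side of the formula.
rhs = (x0 @ np.linalg.matrix_power(M, n) @ u)**2
rhs += sum((x0 @ np.linalg.matrix_power(M, k)) @ T(np.linalg.matrix_power(M, n - 1 - k) @ u)
           for k in range(n))

# Monte Carlo estimate of E<X_n, u>^2.
acc = 0.0
for _ in range(n_trials):
    X = x0.copy()
    for _ in range(n):
        X = rng.poisson(X @ M).astype(float)
    acc += (X @ u)**2
print(f"Monte Carlo E<X_n,u>^2 = {acc / n_trials:.3f}   formula = {rhs:.3f}")
```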
Q17 Discrete Random Variables: Monotonicity and Convergence of Sequences Defined via Expectations
We keep the notation of the previous parts. We assume that $M$ has an eigenvalue $\lambda > 0$ with an associated column eigenvector $h \in \mathscr{M}_{d,1}\left(\mathbb{R}_{+}^{*}\right)$: $$Mh = \lambda h,$$ and that there exist $\nu \in \mathscr{P}$ and $c > 0$ such that for all $i,j \in \{1,\ldots,d\}$, $$M_{i,j} \geqslant c\nu_j.$$
Show that there exist $\pi \in \mathscr{P}$ and $h' \in \mathscr{M}_{d,1}\left(\mathbb{R}_{+}^{*}\right)$ and $C > 0$ and $\gamma \in [0,1[$, such that $\pi M = \lambda \pi$ and for all $n \geqslant 0$, $$\sum_{i=1}^{d} \sum_{j=1}^{d} \left| \lambda^{-n} \left(M^n\right)_{i,j} - h_i' \pi_j \right| \leqslant C\gamma^n.$$
Q20 Discrete Random Variables: Probability Bounds and Inequalities for Discrete Variables
We keep the notation of the previous parts. We assume that $M$ has an eigenvalue $\lambda > 0$ with an associated column eigenvector $h \in \mathscr{M}_{d,1}\left(\mathbb{R}_{+}^{*}\right)$ satisfying $Mh = \lambda h$, and that there exist $\nu \in \mathscr{P}$ and $c > 0$ such that $M_{i,j} \geqslant c\nu_j$ for all $i,j$. Let $\pi \in \mathscr{P}$ be such that $\pi M = \lambda \pi$, and let $C > 0$ and $\gamma \in [0,1[$ be as in question 17.
(a) Show that for all $n \geqslant 0$ and $u \in \mathscr{M}_{d,1}(\mathbb{R})$ such that $\langle u, \pi \rangle = 0$, $$\left\| M^n u \right\|_1 \leqslant C(\lambda\gamma)^n \|u\|_1.$$
(b) Deduce that there exists $C_1 \geqslant 0$ such that for all $n \geqslant 0$ and $u \in \mathscr{M}_{d,1}(\mathbb{R})$ column vector such that $\langle u, \pi \rangle = 0$, $$\mathbb{E}\left(\langle X_n, u \rangle^2\right) \leqslant C_1 \|u\|_1^2 \left(\lambda^{2n} \left(\sum_{k=0}^{n-1} \lambda^{-k} \gamma^{2n-2k}\right) + (\lambda\gamma)^{2n}\right).$$
Q21 Discrete Random Variables: Monotonicity and Convergence of Sequences Defined via Expectations
We suppose in the rest of this part that $\lambda > 1$ and we introduce the random row vector $$W_n = \lambda^{-n}\left(X_n - \|X_n\|_1 \pi\right).$$
(a) Show that the series $\displaystyle\sum_{n \geqslant 1} \left(\sum_{k=0}^{n-1} \lambda^{-k} \gamma^{2n-2k}\right)$ converges.
(b) Let $w \in (\mathbb{R}_{+})^d$ and let $e_0 = (1,\ldots,1)$. Show that $$\left\langle w - \|w\|_1 \pi, \pi \right\rangle = \left\langle w, \pi - \langle \pi, \pi \rangle e_0 \right\rangle$$ and that the vector $\pi - \langle \pi, \pi \rangle e_0$ is orthogonal to $\pi$.
(c) Show that the series $\displaystyle\sum_{n \geqslant 0} \mathbb{E}\left(\|W_n\|_2^2\right)$ is convergent. Deduce that the sequence $\left(\mathbb{E}\left(\|W_n\|_2^2\right)\right)_{n \geqslant 0}$ tends to $0$. (One may for example decompose $X_n$ in a well-chosen orthonormal basis of $\mathbb{R}^d$.)
(d) Show that for all $\varepsilon > 0$, $$\lim_{n \rightarrow \infty} \mathbb{P}\left(\|W_n\|_2 \geqslant \varepsilon\right) = 0.$$
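A simulation sketch in a supercritical instance ($\lambda > 1$), again with the illustrative independent-Poisson offspring model: $\pi$ is the left Perron eigenvector of $M$ normalised in $\mathscr{P}$, and the average of $\|W_n\|_2$ over many simulated trajectories is seen to decay:

```python
import numpy as np

rng = np.random.default_rng(8)
d = 3
# Supercritical illustrative mean matrix (Perron eigenvalue lam > 1);
# offspring are again taken as independent Poisson(M_ij) counts.
M = np.array([[1.0, 0.6, 0.3],
              [0.5, 0.9, 0.4],
              [0.3, 0.5, 1.1]])

eigvals, left = np.linalg.eig(M.T)
k = np.argmax(eigvals.real)
lam = eigvals[k].real
pi = np.abs(left[:, k].real)
pi /= pi.sum()
print(f"lambda = {lam:.4f}")

x0 = np.array([3.0, 3.0, 3.0])
n_steps, n_trials = 15, 500
norms = np.zeros((n_steps + 1, n_trials))
for t in range(n_trials):
    X = x0.copy()
    for n in range(n_steps + 1):
        W = (X - X.sum() * pi) / lam**n        # W_n as defined in the problem
        norms[n, t] = np.linalg.norm(W)
        X = rng.poisson(X @ M).astype(float)

for n in range(0, n_steps + 1, 3):
    print(f"n={n:2d}  mean ||W_n||_2 over trials = {norms[n].mean():.4f}")
```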
Q22 Discrete Random Variables: Probability Bounds and Inequalities for Discrete Variables
We suppose that $\lambda > 1$ and we use the random row vector $W_n = \lambda^{-n}\left(X_n - \|X_n\|_1 \pi\right)$.
Show that the event $\left\{\lim_{n \rightarrow +\infty} W_n = 0_{\mathbb{R}^d}\right\}$ occurs almost surely. (One may begin by computing the probability of the event $$\left\{ \forall m \geqslant 0, \exists k \geqslant m \mid \|W_k\|_2 \geqslant \varepsilon \right\}$$ for all $\varepsilon > 0$.)
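One possible route for the hinted computation, offered only as a sketch (it combines Markov's inequality with the summability of $\sum_n \mathbb{E}\left(\|W_n\|_2^2\right)$ from question 21(c)): $$\mathbb{P}\left(\exists k \geqslant m,\ \|W_k\|_2 \geqslant \varepsilon\right) \leqslant \sum_{k \geqslant m} \mathbb{P}\left(\|W_k\|_2 \geqslant \varepsilon\right) \leqslant \frac{1}{\varepsilon^2} \sum_{k \geqslant m} \mathbb{E}\left(\|W_k\|_2^2\right) \xrightarrow[m \rightarrow +\infty]{} 0,$$ so for each fixed $\varepsilon > 0$ the event $\left\{ \forall m \geqslant 0, \exists k \geqslant m \mid \|W_k\|_2 \geqslant \varepsilon \right\}$ has probability $0$, and a countable union over $\varepsilon = 1/p$, $p \geqslant 1$, then gives the almost sure convergence.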