LFM Pure


grandes-ecoles 2023 Q2 Structured Matrix Characterization
Show that $S_n^+(\mathbf{R})$ and $S_n^{++}(\mathbf{R})$ are convex subsets of $M_n(\mathbf{R})$. Are they vector subspaces of $M_n(\mathbf{R})$?
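A quick numerical sanity check of both claims (a NumPy sketch; the dimension, seed, and mixing weight are arbitrary illustrative choices, not part of the exam statement):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_spd(n):
    # B @ B.T + I is symmetric positive definite for any real B
    B = rng.standard_normal((n, n))
    return B @ B.T + np.eye(n)

A, B = random_spd(4), random_spd(4)
t = 0.3
C = t * A + (1 - t) * B                    # convex combination
assert np.all(np.linalg.eigvalsh(C) > 0)   # stays positive definite
assert np.any(np.linalg.eigvalsh(-A) < 0)  # -A is not PSD: the set is not a subspace
```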
grandes-ecoles 2023 Q2 Projection and Orthogonality
Let $A \in \mathcal{M}_{m,n}(\mathbb{R})$ be a matrix with $m$ rows and $n$ columns. We denote by $\langle u, v \rangle_{\mathbb{R}^n}$ the inner product between two vectors $u$ and $v$ of $\mathbb{R}^n$ and $\langle \mu, \nu \rangle_{\mathbb{R}^m}$ that between two vectors $\mu$ and $\nu$ of $\mathbb{R}^m$.
(a) Show that for all $(x, \nu) \in \mathbb{R}^n \times \mathbb{R}^m$, we have $$\langle Ax, \nu \rangle_{\mathbb{R}^m} = \left\langle x, A^\top \nu \right\rangle_{\mathbb{R}^n},$$ where $A^\top$ denotes the transpose matrix of $A$.
(b) Deduce that $\ker A \subset (\operatorname{Im} A^\top)^\perp$ where $E^\perp$ denotes the orthogonal complement of $E$ for the inner product on $\mathbb{R}^n$ for any vector subspace $E$ of $\mathbb{R}^n$.
(c) Show that $\ker A = (\operatorname{Im} A^\top)^\perp$.
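The identity $\ker A = (\operatorname{Im} A^\top)^\perp$ can be checked numerically; below is a sketch using the SVD to extract a kernel basis (the shape and seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 3, 5
A = rng.standard_normal((m, n))

# Orthonormal basis of ker A: right singular vectors with zero singular value
U, s, Vt = np.linalg.svd(A)
r = int(np.sum(s > 1e-10))   # rank of A
K = Vt[r:].T                 # columns span ker A  (shape n x (n-r))

# Every kernel vector is orthogonal to every row of A, i.e. to Im A^T
assert np.allclose(A @ K, 0)
# Dimensions match: dim ker A = n - rank A = dim (Im A^T)^perp
assert K.shape[1] + r == n
```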
grandes-ecoles 2023 Q2 Matrix Algebra and Product Properties
Let $\mathscr{P}$ be the set of row vectors of size $d$ with non-negative coefficients whose coordinate sum equals 1: $$\mathscr{P} = \left\{ u \in \mathscr{M}_{1,d}\left(\mathbb{R}_{+}\right) : \sum_{j=1}^{d} u_j = 1 \right\}.$$ We consider a square matrix $P \in \mathscr{M}_d\left(\mathbb{R}_{+}\right)$ such that for all $i \in \{1,\ldots,d\}$, $$\sum_{j=1}^{d} P_{i,j} = 1.$$ We further assume that there exist $\nu \in \mathscr{P}$ and $c > 0$ such that for all $i,j \in \{1,\ldots,d\}$, $$P_{i,j} \geqslant c\nu_j.$$
Show that if $u \in \mathscr{P}$, then $uP \in \mathscr{P}$.
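The statement says that a row-stochastic matrix maps the simplex $\mathscr{P}$ into itself; a minimal NumPy check (dimension and seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(2)
d = 4
P = rng.random((d, d))
P /= P.sum(axis=1, keepdims=True)   # rows sum to 1: a row-stochastic matrix

u = rng.random(d)
u /= u.sum()                        # u lies in the simplex P

w = u @ P                           # left action of P on the row vector u
assert np.all(w >= 0)
assert np.isclose(w.sum(), 1.0)
```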
grandes-ecoles 2023 Q3 Matrix Norm, Convergence, and Inequality
Show that it is a sub-multiplicative norm, that is: $$\forall (u,v) \in \mathcal{L}(E)^2, \quad \|uv\| \leqslant \|u\| \cdot \|v\|,$$ and deduce a bound on $\left\|u^k\right\|$, for any natural number $k$, in terms of $\|u\|$ and the integer $k$.
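The deduced bound is $\|u^k\| \leqslant \|u\|^k$, by induction on $k$. Assuming the norm in question is the operator norm induced by the Euclidean norm (the excerpt does not restate the definition), this can be checked numerically:

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((5, 5))
B = rng.standard_normal((5, 5))

def op_norm(M):
    # ord=2 gives the operator norm induced by the Euclidean norm (largest singular value)
    return np.linalg.norm(M, 2)

# Sub-multiplicativity: ||AB|| <= ||A|| * ||B||
assert op_norm(A @ B) <= op_norm(A) * op_norm(B) + 1e-12

# Consequence by induction: ||A^k|| <= ||A||^k
k = 6
assert op_norm(np.linalg.matrix_power(A, k)) <= op_norm(A) ** k + 1e-9
```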
grandes-ecoles 2023 Q3 Matrix Decomposition and Factorization
Show that, if $A \in S_n^{++}(\mathbf{R})$, there exists $S \in S_n^{++}(\mathbf{R})$ such that $A = S^2$.
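The standard proof is spectral: diagonalize $A = O\,\mathrm{diag}(\lambda)\,O^\top$ and set $S = O\,\mathrm{diag}(\sqrt{\lambda})\,O^\top$. A NumPy sketch of that construction (size and seed arbitrary):

```python
import numpy as np

rng = np.random.default_rng(4)
B = rng.standard_normal((4, 4))
A = B @ B.T + np.eye(4)              # A is in S_n^{++}

# Spectral square root: A = O diag(lam) O^T, S = O diag(sqrt(lam)) O^T
lam, O = np.linalg.eigh(A)
S = O @ np.diag(np.sqrt(lam)) @ O.T

assert np.allclose(S @ S, A)                 # S^2 = A
assert np.allclose(S, S.T)                   # S is symmetric
assert np.all(np.linalg.eigvalsh(S) > 0)     # S is itself in S_n^{++}
```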
grandes-ecoles 2023 Q3 Linear System and Inverse Existence
Consider a matrix $A \in \mathcal{M}_{m,n}(\mathbb{R})$, an open set $U \subset \mathbb{R}^n$, a $\mathcal{C}^1$ map $h : U \rightarrow \mathbb{R}$, and $b \in \mathbb{R}^m$. Assume that there exists $x_* \in U$ that is a minimum of $h$ on the set $V_b = \{x \in U \mid Ax + b = 0\}$.
(a) Show that for all $u \in \mathbb{R}^n$ such that $Au = 0$ we have $\left\langle \nabla h(x_*), u \right\rangle_{\mathbb{R}^n} = 0$ where $\nabla h(x)$ denotes the gradient of $h$ at $x$.
(b) Show the existence of $\nu_* \in \mathbb{R}^m$ such that $\nabla h(x_*) - A^T \nu_* = 0$.
(c) Deduce that the map $L : U \times \mathbb{R}^m \rightarrow \mathbb{R}$ such that $L(x, \nu) = h(x) - \langle \nu, Ax + b \rangle_{\mathbb{R}^m}$ satisfies $\frac{\partial L}{\partial x_k}(x_*, \nu_*) = 0$ for all $1 \leq k \leq n$ where $\frac{\partial L}{\partial x_k}(x, \nu)$ denotes the partial derivative of $L$ with respect to the $k$-th coordinate of $x \in \mathbb{R}^n$.
(d) Conclude that if $U$ is convex, and $h$ is convex on $U$, then $L$ admits a saddle point at $(x_*, \nu_*)$, that is, we have $$L(x_*, \nu) \leq L(x_*, \nu_*) \leq L(x, \nu_*)$$ for all $(x, \nu) \in U \times \mathbb{R}^m$.
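For a concrete instance of parts (b)-(c), take the convex objective $h(x) = \tfrac12\|x\|^2$ (an illustrative choice, not from the exam): then $\nabla h(x) = x$, and the stationarity and feasibility conditions form a linear KKT system that can be solved directly:

```python
import numpy as np

rng = np.random.default_rng(15)
n, m = 4, 2
A = rng.standard_normal((m, n))      # generic A has full row rank
b = rng.standard_normal(m)

# h(x) = ||x||^2 / 2, so grad h(x) = x. The conditions of (b)-(c) read:
#   x* - A^T nu* = 0   (stationarity of L in x)
#   A x* + b = 0       (feasibility)
KKT = np.block([[np.eye(n), -A.T],
                [A, np.zeros((m, m))]])
sol = np.linalg.solve(KKT, np.concatenate([np.zeros(n), -b]))
x_star, nu_star = sol[:n], sol[n:]

assert np.allclose(A @ x_star + b, 0)            # x* is feasible
assert np.allclose(x_star - A.T @ nu_star, 0)    # grad h(x*) = A^T nu*
```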
grandes-ecoles 2023 Q3 Matrix Norm, Convergence, and Inequality
Let $\mathscr{P}$ be the set of row vectors of size $d$ with non-negative coefficients whose coordinate sum equals 1: $$\mathscr{P} = \left\{ u \in \mathscr{M}_{1,d}\left(\mathbb{R}_{+}\right) : \sum_{j=1}^{d} u_j = 1 \right\}.$$ We consider a square matrix $P \in \mathscr{M}_d\left(\mathbb{R}_{+}\right)$ such that for all $i \in \{1,\ldots,d\}$, $$\sum_{j=1}^{d} P_{i,j} = 1.$$ We further assume that there exist $\nu \in \mathscr{P}$ and $c > 0$ such that for all $i,j \in \{1,\ldots,d\}$, $$P_{i,j} \geqslant c\nu_j.$$
Show that for all $u, v \in \mathscr{P}$, $$\|uP - vP\|_1 \leqslant (1-c)\|u - v\|_1.$$ (One may introduce $R = P - cN$ where $N = (n_{i,j})_{1 \leqslant i,j \leqslant d}$ with $n_{i,j} = \nu_j$ for all $1 \leqslant i,j \leqslant d$.)
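One convenient way to build a matrix satisfying the minorization $P_{i,j} \geqslant c\nu_j$ is the mixture $P = cN + (1-c)Q$ with $Q$ row-stochastic (an illustrative construction, not part of the statement); the contraction inequality then checks out numerically:

```python
import numpy as np

rng = np.random.default_rng(5)
d, c = 4, 0.3
nu = rng.random(d); nu /= nu.sum()                   # nu in the simplex
Q = rng.random((d, d)); Q /= Q.sum(axis=1, keepdims=True)

# P = c*N + (1-c)*Q, with N the matrix whose rows all equal nu,
# guarantees P[i,j] >= c * nu[j]
P = c * np.outer(np.ones(d), nu) + (1 - c) * Q
assert np.all(P >= c * nu - 1e-12)

u = rng.random(d); u /= u.sum()
v = rng.random(d); v /= v.sum()
lhs = np.abs(u @ P - v @ P).sum()                    # ||uP - vP||_1
rhs = (1 - c) * np.abs(u - v).sum()                  # (1-c) ||u - v||_1
assert lhs <= rhs + 1e-12
```

The hint's decomposition $R = P - cN$ is visible here: $(u-v)N = 0$ because the coordinates of $u-v$ sum to zero, so only the $(1-c)Q$ part contributes.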
grandes-ecoles 2023 Q6 Eigenvalue and Characteristic Polynomial Analysis
Let $\mathscr{P}$ be the set of row vectors of size $d$ with non-negative coefficients whose coordinate sum equals 1: $$\mathscr{P} = \left\{ u \in \mathscr{M}_{1,d}\left(\mathbb{R}_{+}\right) : \sum_{j=1}^{d} u_j = 1 \right\}.$$ We consider a square matrix $P \in \mathscr{M}_d\left(\mathbb{R}_{+}\right)$ such that for all $i \in \{1,\ldots,d\}$, $$\sum_{j=1}^{d} P_{i,j} = 1.$$ We further assume that there exist $\nu \in \mathscr{P}$ and $c > 0$ such that for all $i,j \in \{1,\ldots,d\}$, $$P_{i,j} \geqslant c\nu_j.$$
Show that there exists a unique element $\mu$ of $\mathscr{P}$ such that $\mu P = \mu$.
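Existence and uniqueness follow from the Banach fixed-point theorem applied to the contraction $u \mapsto uP$ on $(\mathscr{P}, \|\cdot\|_1)$. A sketch of the resulting fixed-point iteration (same illustrative construction of $P$ as before):

```python
import numpy as np

rng = np.random.default_rng(6)
d, c = 4, 0.3
nu = rng.random(d); nu /= nu.sum()
Q = rng.random((d, d)); Q /= Q.sum(axis=1, keepdims=True)
P = c * np.outer(np.ones(d), nu) + (1 - c) * Q    # satisfies P[i,j] >= c*nu[j]

# Banach iteration: u -> uP contracts with rate (1-c), so iterates converge to mu
mu = np.full(d, 1.0 / d)
for _ in range(200):
    mu = mu @ P

assert np.allclose(mu @ P, mu)       # mu is a fixed point
assert np.isclose(mu.sum(), 1.0)     # and lies in the simplex
assert np.all(mu >= 0)
```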
grandes-ecoles 2023 Q7 Linear Transformation and Endomorphism Properties
In this part, $a$ denotes an endomorphism of $\mathbf{C}^n$. We use the decomposition $\mathbf{C}^n = \bigoplus_{i=1}^{r} E_i$ where $E_i = \operatorname{Ker}\left(a - \lambda_i \operatorname{id}_{\mathbf{C}^n}\right)^{m_i}$, with the projections $p_i$ and inclusions $q_i$ as defined.
Let $(i,j) \in \llbracket 1;r \rrbracket^2$. Express $p_i q_j$ and then $\sum_{i=1}^{r} q_i p_i$ in terms of the endomorphisms $\operatorname{id}_{\mathbf{C}^n}$ and $\operatorname{id}_{E_j}$.
grandes-ecoles 2023 Q7 Matrix Norm, Convergence, and Inequality
Let $M \in S_n^+(\mathbf{R})$ be a non-zero matrix. You may use the following inequality without proof: $$\forall (x_1, \ldots, x_n) \in (\mathbf{R}_+)^n, \quad 2\max\{x_1, \ldots, x_n\}\left(\frac{1}{n}\sum_{k=1}^n x_k - \prod_{k=1}^n x_k^{1/n}\right) \geq \frac{1}{n}\sum_{k=1}^n \left(x_k - \prod_{j=1}^n x_j^{1/n}\right)^2.$$ Deduce that $$\frac{\operatorname{Tr}(M)}{n} - \operatorname{det}^{1/n}(M) \geq \frac{\left\|M - \operatorname{det}^{1/n}(M) I_n\right\|_2^2}{2n\|M\|_2}.$$
grandes-ecoles 2023 Q7 Matrix Power Computation and Application
Let $\mathscr{P}$ be the set of row vectors of size $d$ with non-negative coefficients whose coordinate sum equals 1: $$\mathscr{P} = \left\{ u \in \mathscr{M}_{1,d}\left(\mathbb{R}_{+}\right) : \sum_{j=1}^{d} u_j = 1 \right\}.$$ We consider a square matrix $P \in \mathscr{M}_d\left(\mathbb{R}_{+}\right)$ such that for all $i \in \{1,\ldots,d\}$, $$\sum_{j=1}^{d} P_{i,j} = 1.$$ We further assume that there exist $\nu \in \mathscr{P}$ and $c > 0$ such that for all $i,j \in \{1,\ldots,d\}$, $$P_{i,j} \geqslant c\nu_j.$$ Let $\mu$ be the unique element of $\mathscr{P}$ such that $\mu P = \mu$.
Show that for all $n \in \mathbb{N}$ and all $x \in \mathscr{P}$, $$\left\| xP^n - \mu \right\|_1 \leqslant 2(1-c)^n.$$
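The bound follows by iterating the contraction: $\|xP^n - \mu\|_1 = \|xP^n - \mu P^n\|_1 \leqslant (1-c)^n \|x - \mu\|_1 \leqslant 2(1-c)^n$. A numerical check with the same illustrative construction of a minorized $P$:

```python
import numpy as np

rng = np.random.default_rng(7)
d, c = 4, 0.3
nu = rng.random(d); nu /= nu.sum()
Q = rng.random((d, d)); Q /= Q.sum(axis=1, keepdims=True)
P = c * np.outer(np.ones(d), nu) + (1 - c) * Q    # satisfies P[i,j] >= c*nu[j]

mu = np.full(d, 1.0 / d)
for _ in range(500):
    mu = mu @ P                      # stationary distribution, to machine precision

x = np.eye(d)[0]                     # a vertex of the simplex (worst case for ||x - mu||_1)
for n in range(20):
    err = np.abs(x @ np.linalg.matrix_power(P, n) - mu).sum()
    assert err <= 2 * (1 - c) ** n + 1e-10
```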
grandes-ecoles 2023 Q8 Linear Transformation and Endomorphism Properties
In this part, $a$ denotes an endomorphism of $\mathbf{C}^n$. We use the decomposition $\mathbf{C}^n = \bigoplus_{i=1}^{r} E_i$ where $E_i = \operatorname{Ker}\left(a - \lambda_i \operatorname{id}_{\mathbf{C}^n}\right)^{m_i}$, with the projections $p_i$, the inclusions $q_i$, and the endomorphisms $a_i = p_i a q_i$ of $E_i$.
Show that: $a = \sum_{i=1}^{r} q_i a_i p_i$.
grandes-ecoles 2023 Q8 Matrix Decomposition and Factorization
Let $A \in S_n^{++}(\mathbf{R})$ and $B \in S_n(\mathbf{R})$. Show that there exist a diagonal matrix $D \in M_n(\mathbf{R})$ and an invertible matrix $Q \in GL_n(\mathbf{R})$ such that $B = QDQ^\top$ and $A = QQ^\top$. What can be said about the diagonal elements of $D$ if $B \in S_n^{++}(\mathbf{R})$?
Hint: You may use question 3.
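A sketch of the construction: factor $A = LL^\top$, diagonalize the symmetric matrix $L^{-1}BL^{-\top} = ODO^\top$, and set $Q = LO$. Below, a Cholesky factor stands in for the square root of question 3 (an implementation convenience, not the exam's method):

```python
import numpy as np

rng = np.random.default_rng(8)
n = 4
G = rng.standard_normal((n, n))
A = G @ G.T + np.eye(n)                              # A in S_n^{++}
B = rng.standard_normal((n, n)); B = (B + B.T) / 2   # B symmetric

L = np.linalg.cholesky(A)                            # A = L L^T
C = np.linalg.solve(L, np.linalg.solve(L, B).T).T    # C = L^{-1} B L^{-T}, symmetric
d, O = np.linalg.eigh(C)                             # C = O diag(d) O^T
Q = L @ O                                            # O orthogonal, so Q Q^T = L L^T

assert np.allclose(Q @ Q.T, A)
assert np.allclose(Q @ np.diag(d) @ Q.T, B)
```

If $B \in S_n^{++}(\mathbf{R})$, the matrix $C$ is positive definite, so the diagonal entries of $D$ are strictly positive.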
grandes-ecoles 2023 Q8 Matrix Entry and Coefficient Identities
Let $M \in \mathscr{M}_d\left(\mathbb{R}_{+}\right)$. We assume that the matrix $M$ has an eigenvalue $\lambda > 0$ and that there exists a column vector $h \in \mathscr{M}_{d,1}\left(\mathbb{R}_{+}^{*}\right)$ such that $$Mh = \lambda h.$$ We also assume that there exist $\nu \in \mathscr{P}$ and $c > 0$ such that for all $i,j \in \{1,\ldots,d\}$, $$M_{i,j} \geqslant c\nu_j.$$ We introduce the matrix $P \in \mathscr{M}_d\left(\mathbb{R}_{+}\right)$ defined for $1 \leqslant i,j \leqslant d$ by $$P_{i,j} = \frac{M_{i,j} h_j}{\lambda h_i}.$$
Justify that for all $i \in \{1,\ldots,d\}$, $\displaystyle\sum_{j=1}^{d} P_{i,j} = 1$.
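The row sums equal 1 because $\sum_j M_{i,j} h_j = (Mh)_i = \lambda h_i$. A numerical check, where the Perron eigenpair of a random positive matrix plays the role of $(\lambda, h)$ (an illustrative choice):

```python
import numpy as np

rng = np.random.default_rng(9)
d = 4
M = rng.random((d, d)) + 0.1         # entrywise positive matrix

# Perron eigenpair: the eigenvalue of largest real part is real, positive,
# and has an eigenvector of constant sign
w, V = np.linalg.eig(M)
k = int(np.argmax(w.real))
lam = w[k].real
h = np.abs(V[:, k].real)             # choose the positive representative
assert np.allclose(M @ h, lam * h)

P = M * h[None, :] / (lam * h[:, None])   # P[i,j] = M[i,j] h[j] / (lam h[i])
assert np.allclose(P.sum(axis=1), 1.0)    # rows sum to 1
```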
grandes-ecoles 2023 Q9 Matrix Power Computation and Application
Let $M \in \mathscr{M}_d\left(\mathbb{R}_{+}\right)$. We assume that the matrix $M$ has an eigenvalue $\lambda > 0$ and that there exists a column vector $h \in \mathscr{M}_{d,1}\left(\mathbb{R}_{+}^{*}\right)$ such that $$Mh = \lambda h.$$ We also assume that there exist $\nu \in \mathscr{P}$ and $c > 0$ such that for all $i,j \in \{1,\ldots,d\}$, $$M_{i,j} \geqslant c\nu_j.$$ We introduce the matrix $P \in \mathscr{M}_d\left(\mathbb{R}_{+}\right)$ defined for $1 \leqslant i,j \leqslant d$ by $$P_{i,j} = \frac{M_{i,j} h_j}{\lambda h_i}.$$
Let $n \geqslant 1$. Give an expression for the coefficients of $P^n$ in terms of the coefficients of $M^n$, $h$ and $\lambda$.
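Writing $P = \lambda^{-1} D_h^{-1} M D_h$ with $D_h = \operatorname{diag}(h)$, the conjugation telescopes: $(P^n)_{i,j} = \lambda^{-n} (M^n)_{i,j}\, h_j / h_i$. A quick check (same illustrative Perron-eigenpair setup as the previous question):

```python
import numpy as np

rng = np.random.default_rng(10)
d, n = 4, 5
M = rng.random((d, d)) + 0.1
w, V = np.linalg.eig(M)
k = int(np.argmax(w.real))
lam = w[k].real
h = np.abs(V[:, k].real)             # positive Perron eigenvector: M h = lam h

P = M * h[None, :] / (lam * h[:, None])

# (P^n)[i,j] = lam^{-n} (M^n)[i,j] h[j] / h[i]
Pn = np.linalg.matrix_power(P, n)
Mn = np.linalg.matrix_power(M, n)
assert np.allclose(Pn, Mn * h[None, :] / (lam ** n * h[:, None]))
```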
grandes-ecoles 2023 Q10 Matrix Norm, Convergence, and Inequality
Let $M \in \mathscr{M}_d\left(\mathbb{R}_{+}\right)$. We assume that the matrix $M$ has an eigenvalue $\lambda > 0$ and that there exists a column vector $h \in \mathscr{M}_{d,1}\left(\mathbb{R}_{+}^{*}\right)$ such that $$Mh = \lambda h.$$ We also assume that there exist $\nu \in \mathscr{P}$ and $c > 0$ such that for all $i,j \in \{1,\ldots,d\}$, $$M_{i,j} \geqslant c\nu_j.$$ We introduce the matrix $P \in \mathscr{M}_d\left(\mathbb{R}_{+}\right)$ defined for $1 \leqslant i,j \leqslant d$ by $$P_{i,j} = \frac{M_{i,j} h_j}{\lambda h_i}.$$
(a) Show that there exist $\mu \in \mathscr{P}$, $C > 0$ and $\gamma \in [0,1[$, such that $\mu P = \mu$ and for all $n \geqslant 0$, $$\sum_{i=1}^{d} \sum_{j=1}^{d} \left| \lambda^{-n} \left(M^n\right)_{i,j} - h_i \frac{\mu_j}{h_j} \right| \leqslant C\gamma^n.$$
(b) Prove that there exists a unique $\pi \in \mathscr{P}$ such that $\pi M = \lambda \pi$.
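Both parts are Perron-Frobenius statements: $\lambda^{-n} M^n$ converges geometrically to a rank-one matrix, and the left eigenvector for $\lambda$, normalized to the simplex, is unique. A numerical sketch on a random positive matrix (an illustrative choice; `eig` on $M^\top$ yields the left eigenvector):

```python
import numpy as np

rng = np.random.default_rng(11)
d = 4
M = rng.random((d, d)) + 0.1
w, V = np.linalg.eig(M)
lam = w[int(np.argmax(w.real))].real          # Perron eigenvalue

# Left Perron eigenvector, normalized to the simplex: pi M = lam pi
wl, Vl = np.linalg.eig(M.T)
pi = np.abs(Vl[:, int(np.argmax(wl.real))].real)
pi /= pi.sum()
assert np.allclose(pi @ M, lam * pi)

# lam^{-n} M^n stabilizes at a geometric rate: after many steps,
# one more application of M/lam no longer changes the matrix
A50 = np.linalg.matrix_power(M / lam, 50)
A51 = (M / lam) @ A50
assert np.allclose(A50, A51, atol=1e-5)
```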
grandes-ecoles 2023 Q11 Eigenvalue and Characteristic Polynomial Analysis
We consider a Markov kernel $K$. We assume that 1 is a simple eigenvalue of $K$. We assume that there exists a probability vector $\pi \in \mathscr{M}_{1,N}(\mathbf{R})$ such that:
(a) For all $j \in \llbracket 1;N \rrbracket$, $\pi[j] \neq 0$.
(b) $\forall (i,j) \in \llbracket 1;N \rrbracket^2$, $\pi[i] K[i,j] = K[j,i] \pi[j]$; we say that $K$ is $\pi$-reversible. Show that $\pi K = \pi$.
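Summing detailed balance over $i$ gives $(\pi K)_j = \sum_i K[j,i]\pi[j] = \pi[j]$. To check this numerically one needs a $\pi$-reversible kernel; a Metropolis-style construction (an illustrative choice, not from the exam) guarantees reversibility:

```python
import numpy as np

rng = np.random.default_rng(12)
N = 5
pi = rng.random(N) + 0.1
pi /= pi.sum()                                   # target probability vector

# Metropolis kernel with uniform symmetric proposal:
# off-diagonal K[i,j] = (1/N) * min(1, pi[j]/pi[i]); diagonal absorbs the rest
Q = np.full((N, N), 1.0 / N)
K = np.minimum(1.0, pi[None, :] / pi[:, None]) * Q
np.fill_diagonal(K, 0.0)
np.fill_diagonal(K, 1.0 - K.sum(axis=1))

F = pi[:, None] * K                              # F[i,j] = pi[i] K[i,j]
assert np.allclose(F, F.T)                       # detailed balance: K is pi-reversible
assert np.allclose(pi @ K, pi)                   # hence pi K = pi
```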
grandes-ecoles 2023 Q12 Matrix Algebra and Product Properties
Show that two shift-invariant endomorphisms of $\mathbb{K}[X]$ commute.
grandes-ecoles 2023 Q12 Matrix Algebra and Product Properties
For $(X, Y) \in \mathscr{M}_{N,1}(\mathbf{R})^2$, we define $$\langle X, Y \rangle = \sum_{i=1}^{N} X[i] Y[i] \pi[i]$$ where $\pi \in \mathscr{M}_{1,N}(\mathbf{R})$ is a probability vector with $\pi[j] \neq 0$ for all $j$. Show that $(X, Y) \mapsto \langle X, Y \rangle$ is an inner product on $\mathscr{M}_{N,1}(\mathbf{R})$.
grandes-ecoles 2023 Q13 Linear Transformation and Endomorphism Properties
We consider the Euclidean space $E = \mathscr{M}_{N,1}(\mathbf{R})$ equipped with the inner product $\langle X, Y \rangle = \sum_{i=1}^{N} X[i] Y[i] \pi[i]$, where $\pi$ is a probability vector for which the Markov kernel $K$ is $\pi$-reversible. We consider the endomorphism of $E$ defined by $u : X \mapsto (I_N - K)X$. Show that $\ker(u) = \operatorname{Vect}(U)$ and that $u$ is a self-adjoint endomorphism of $E$.
grandes-ecoles 2023 Q14 Eigenvalue and Characteristic Polynomial Analysis
We consider the Euclidean space $E = \mathscr{M}_{N,1}(\mathbf{R})$ equipped with the inner product $\langle X, Y \rangle = \sum_{i=1}^{N} X[i] Y[i] \pi[i]$, and the endomorphism $u : X \mapsto (I_N - K)X$ with $q_u(X) = \langle u(X), X \rangle$. Show that for all $X \in E$, $$q_u(X) = \frac{1}{2} \sum_{i=1}^{N} \sum_{j=1}^{N} (X[i] - X[j])^2 K[i,j] \pi[i]$$ What can be said about the eigenvalues of $u$?
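The identity expresses $q_u$ as a Dirichlet form, which is non-negative, so the eigenvalues of $u$ are non-negative. A numerical check of the identity, reusing the illustrative Metropolis construction of a $\pi$-reversible kernel (reversibility is needed for the identity to hold):

```python
import numpy as np

rng = np.random.default_rng(13)
N = 5
pi = rng.random(N) + 0.1
pi /= pi.sum()

# pi-reversible Metropolis kernel with uniform symmetric proposal
Q = np.full((N, N), 1.0 / N)
K = np.minimum(1.0, pi[None, :] / pi[:, None]) * Q
np.fill_diagonal(K, 0.0)
np.fill_diagonal(K, 1.0 - K.sum(axis=1))

X = rng.standard_normal(N)
lhs = ((np.eye(N) - K) @ X) @ (X * pi)     # q_u(X) = <(I_N - K)X, X>_pi
rhs = 0.5 * np.sum((X[:, None] - X[None, :]) ** 2 * K * pi[:, None])
assert np.isclose(lhs, rhs)
assert lhs >= -1e-12                       # the quadratic form is non-negative
```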
grandes-ecoles 2023 Q18 Determinant and Rank Computation
Let $A \in S_n^{++}(\mathbf{R})$ and $M \in S_n(\mathbf{R})$. Consider the function $f_A$ defined on $\mathbf{R}$ by $$f_A(t) = \operatorname{det}(A + tM).$$ Let $\varepsilon_0 > 0$ be such that for all $t \in \,]-\varepsilon_0, \varepsilon_0[$, $A + tM \in S_n^{++}(\mathbf{R})$. Determine $f_A'(t)$ for all $t \in \,]-\varepsilon_0, \varepsilon_0[$.
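The expected answer is Jacobi's formula, $f_A'(t) = \operatorname{det}(A+tM)\operatorname{Tr}\left((A+tM)^{-1}M\right)$, which can be checked against a finite difference (the size, seed, and evaluation point are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(14)
n = 4
G = rng.standard_normal((n, n))
A = G @ G.T + np.eye(n)                              # A in S_n^{++}
M = rng.standard_normal((n, n)); M = (M + M.T) / 2   # M symmetric

def f(t):
    return np.linalg.det(A + t * M)

# Jacobi's formula: f'(t) = det(A + tM) * Tr((A + tM)^{-1} M)
t, eps = 0.01, 1e-6
exact = f(t) * np.trace(np.linalg.solve(A + t * M, M))
numeric = (f(t + eps) - f(t - eps)) / (2 * eps)      # central difference
assert np.isclose(exact, numeric, rtol=1e-4)
```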