LFM Pure


grandes-ecoles 2020 Q35 Matrix Power Computation and Application
Let $$B = \frac{1}{4} \left(\begin{array}{cccc} 0 & -5 & 0 & -3 \\ 5 & 0 & 3 & 0 \\ 0 & -3 & 0 & -5 \\ 3 & 0 & 5 & 0 \end{array}\right).$$ Compute $B^{2} \left(\begin{array}{l} 1 \\ 1 \\ 1 \\ 1 \end{array}\right)$.
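A quick numerical check of this computation (a sketch; NumPy assumed):

```python
import numpy as np

# the matrix B from the question
B = np.array([[0, -5, 0, -3],
              [5,  0, 3,  0],
              [0, -3, 0, -5],
              [3,  0, 5,  0]], dtype=float) / 4

v = np.ones(4)
result = B @ B @ v
print(result)  # [-4. -4. -4. -4.]
```

The computation shows $B^{2}v = -4v$ for $v = (1,1,1,1)^{\top}$, i.e. $v$ is an eigenvector of $B^{2}$ with eigenvalue $-4$.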
grandes-ecoles 2020 Q35 Linear Transformation and Endomorphism Properties
In this subsection, $E$ is a $\mathbb{C}$-vector space of dimension $n \geq 1$. We say that an endomorphism $u$ of $E$ is a permutation endomorphism if there exist a basis $(e_1, \ldots, e_n)$ of $E$ and a permutation $\sigma \in \mathfrak{S}_n$ such that $u(e_j) = e_{\sigma(j)}$ for all $j \in \llbracket 1, n \rrbracket$.
Let $u$ be a diagonalizable endomorphism of $E$. Show that $u$ is a permutation endomorphism if and only if there exist natural numbers $c_1, \ldots, c_n$ such that, for all $k \in \mathbb{N}$,
$$\operatorname{Tr}\left(u^k\right) = \sum_{\substack{\ell=1 \\ \ell \mid k}}^{n} \ell c_\ell$$
(We sum over the values of $\ell$ dividing $k$ and belonging to $\llbracket 1, n \rrbracket$.)
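For a concrete sanity check of the forward direction: if $u$ permutes a basis according to $\sigma$, then $\operatorname{Tr}(u^k)$ counts the fixed points of $\sigma^k$, and $c_\ell$ can be taken as the number of $\ell$-cycles of $\sigma$. A sketch with a hypothetical permutation (the permutation and its cycle counts are illustrative choices, not from the problem):

```python
import numpy as np

# hypothetical example: sigma = (0 1 2)(3 4) on n = 5 points (0-indexed)
sigma = [1, 2, 0, 4, 3]
n = len(sigma)
U = np.zeros((n, n))
U[sigma, range(n)] = 1        # column j has a 1 in row sigma(j): u(e_j) = e_{sigma(j)}

# c_l = number of l-cycles of sigma: here one 2-cycle and one 3-cycle
cycle_counts = {2: 1, 3: 1}

for k in range(1, 13):
    trace = int(np.trace(np.linalg.matrix_power(U, k)))
    predicted = sum(l * c for l, c in cycle_counts.items() if k % l == 0)
    assert trace == predicted  # Tr(u^k) = number of fixed points of sigma^k
```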
grandes-ecoles 2020 Q36 Matrix Decomposition and Factorization
Let $$B = \frac{1}{4} \left(\begin{array}{cccc} 0 & -5 & 0 & -3 \\ 5 & 0 & 3 & 0 \\ 0 & -3 & 0 & -5 \\ 3 & 0 & 5 & 0 \end{array}\right).$$ Determine a real number $a$ and a matrix $P$ such that $$P \in \mathcal{O}_{4}(\mathbb{R}) \cap \mathrm{Sp}_{4}(\mathbb{R}) \quad \text{and} \quad P^{\top} B P = \left(\begin{array}{cccc} 0 & a & 0 & 0 \\ -a & 0 & 0 & 0 \\ 0 & 0 & 0 & 1/a \\ 0 & 0 & -1/a & 0 \end{array}\right).$$
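The value of $a$ can be guessed numerically before constructing $P$: $B$ is skew-symmetric, so its eigenvalues come in purely imaginary pairs $\pm i\mu$, and the block form above forces the two moduli to be $a$ and $1/a$. A sketch (NumPy assumed; this only suggests the value of $a$, not the matrix $P$):

```python
import numpy as np

B = np.array([[0, -5, 0, -3],
              [5,  0, 3,  0],
              [0, -3, 0, -5],
              [3,  0, 5,  0]], dtype=float) / 4

# B is skew-symmetric, so its eigenvalues are purely imaginary pairs +/- i*mu
mu = np.sort(np.abs(np.linalg.eigvals(B).imag))
print(mu)  # [0.5 0.5 2.  2. ]  -> the blocks carry a = 2 and 1/a = 1/2
```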
grandes-ecoles 2021 Q1 Matrix Norm, Convergence, and Inequality
Show that, for every $M$ in $\mathcal{M}_{n}(\mathbb{R})$ and for all $P$ and $Q$ in $\mathcal{O}_{n}(\mathbb{R})$, we have $\|PMQ\|_{F} = \|M\|_{F}$.
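This is the invariance of the Frobenius norm under left and right multiplication by orthogonal matrices; it follows from $\|PMQ\|_{F}^{2} = \operatorname{Tr}(Q^{\top}M^{\top}P^{\top}PMQ) = \operatorname{Tr}(M^{\top}M)$. A randomized sanity check, generating orthogonal factors via QR:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
M = rng.standard_normal((n, n))

# QR factorization of a random matrix yields an orthogonal factor
P, _ = np.linalg.qr(rng.standard_normal((n, n)))
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))

# np.linalg.norm on a matrix defaults to the Frobenius norm
assert np.isclose(np.linalg.norm(P @ M @ Q), np.linalg.norm(M))
```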
grandes-ecoles 2021 Q2 Diagonalizability and Similarity
We denote $D_{A} = \operatorname{diag}\left(\lambda_{1}(A), \ldots, \lambda_{n}(A)\right)$ and $D_{B} = \operatorname{diag}\left(\lambda_{1}(B), \ldots, \lambda_{n}(B)\right)$. Show that there exists an orthogonal matrix $P = \left(p_{i,j}\right)_{1 \leqslant i,j \leqslant n}$ such that $\|A - B\|_{F}^{2} = \left\|D_{A}P - PD_{B}\right\|_{F}^{2}$.
grandes-ecoles 2021 Q2 Matrix Power Computation and Application
Show that, for every natural number $k$, $P^{(k+1)} = P^{(k)} T$.
grandes-ecoles 2021 Q3 Matrix Entry and Coefficient Identities
Show that $$\|A - B\|_{F}^{2} = \sum_{1 \leqslant i,j \leqslant n} p_{i,j}^{2}\left(\lambda_{i}(A) - \lambda_{j}(B)\right)^{2}.$$
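With $P = P_A^{\top} P_B$ built from orthonormal eigenbases of $A$ and $B$, the $(i,j)$ entry of $D_A P - P D_B$ is $p_{i,j}(\lambda_i(A) - \lambda_j(B))$, which yields the claimed sum. A numerical sketch with random symmetric matrices (NumPy assumed; `eigh` orders both spectra ascending):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
A = rng.standard_normal((n, n)); A = (A + A.T) / 2
B = rng.standard_normal((n, n)); B = (B + B.T) / 2

lamA, PA = np.linalg.eigh(A)   # A = PA @ diag(lamA) @ PA.T
lamB, PB = np.linalg.eigh(B)
P = PA.T @ PB                  # orthogonal, as a product of orthogonal matrices

lhs = np.linalg.norm(A - B) ** 2
rhs = sum(P[i, j] ** 2 * (lamA[i] - lamB[j]) ** 2
          for i in range(n) for j in range(n))
assert np.isclose(lhs, rhs)
```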
grandes-ecoles 2021 Q4 Matrix Norm, Convergence, and Inequality
We denote by $\mathcal{B}_{n}(\mathbb{R})$ the set of doubly stochastic matrices in $\mathcal{M}_{n}(\mathbb{R})$, that is, the set of matrices $M = \left(m_{i,j}\right)_{1 \leqslant i,j \leqslant n}$ whose coefficients are all non-negative and such that $\sum_{j=1}^{n} m_{i,j} = \sum_{j=1}^{n} m_{j,i} = 1$ for every $i \in \llbracket 1, n \rrbracket$.
We define $f : \left|\, \begin{array}{ccc} \mathcal{M}_{n}(\mathbb{R}) & \rightarrow & \mathbb{R} \\ M & \mapsto & \sum_{1 \leqslant i,j \leqslant n} m_{i,j}\left(\lambda_{i}(A) - \lambda_{j}(B)\right)^{2} \end{array}\right.$
Justify that $f$ attains a minimum on $\mathcal{B}_{n}(\mathbb{R})$.
grandes-ecoles 2021 Q4 Linear Transformation and Endomorphism Properties
Suppose that the sequence of vectors $\left(P^{(k)}\right)_{k \in \mathbb{N}}$ converges to a vector $P = (p_1, \ldots, p_n)$. Show that $PT = P$, that $p_i \geqslant 0$ for all $i \in \llbracket 1, n \rrbracket$, and that $p_1 + \cdots + p_n = 1$.
grandes-ecoles 2021 Q4 Linear Transformation and Endomorphism Properties
Let $n \in \mathbb{N}$. We consider $n+1$ distinct points in $I$, denoted $x_0 < x_1 < \cdots < x_n$, and a continuous function $f$ from $I$ to $\mathbb{R}$.
Show that the linear map $\varphi : \left|\,\begin{array}{ccl} \mathbb{R}_n[X] & \rightarrow & \mathbb{R}^{n+1} \\ P & \mapsto & \left(P(x_0), P(x_1), \ldots, P(x_n)\right) \end{array}\right.$ is an isomorphism.
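In the monomial basis, $\varphi$ is represented by the Vandermonde matrix of the nodes, which is invertible exactly when the nodes are distinct; that is the content of the isomorphism. A sketch with hypothetical nodes (the nodes and the function are illustrative choices):

```python
import numpy as np

# distinct nodes x_0 < ... < x_n; in the monomial basis, phi is
# multiplication by the Vandermonde matrix V with V[i, j] = x_i ** j
x = np.array([-1.0, 0.0, 0.5, 2.0])
V = np.vander(x, increasing=True)
assert abs(np.linalg.det(V)) > 1e-12     # invertible <=> phi is an isomorphism

# interpolation = solving V c = y for the coefficient vector c
y = np.sin(x)                            # sample values of a continuous f
c = np.linalg.solve(V, y)
assert np.allclose(np.polyval(c[::-1], x), y)   # polyval wants highest degree first
```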
grandes-ecoles 2021 Q5 Matrix Norm, Convergence, and Inequality
We denote by $\mathcal{B}_{n}(\mathbb{R})$ the set of doubly stochastic matrices in $\mathcal{M}_{n}(\mathbb{R})$, that is, the set of matrices $M = \left(m_{i,j}\right)_{1 \leqslant i,j \leqslant n}$ whose coefficients are all non-negative and such that $\sum_{j=1}^{n} m_{i,j} = \sum_{j=1}^{n} m_{j,i} = 1$ for every $i \in \llbracket 1, n \rrbracket$.
We define $f : \left|\, \begin{array}{ccc} \mathcal{M}_{n}(\mathbb{R}) & \rightarrow & \mathbb{R} \\ M & \mapsto & \sum_{1 \leqslant i,j \leqslant n} m_{i,j}\left(\lambda_{i}(A) - \lambda_{j}(B)\right)^{2} \end{array}\right.$
Let $(i,j,k) \in \llbracket 1,n \rrbracket^{3}$ such that $j \geqslant i$ and $k \geqslant i$. Show that, for $M \in \mathcal{M}_{n}(\mathbb{R})$ and for $x \in \mathbb{R}^{+}$, $$f\left(M + xE_{ii} + xE_{jk} - xE_{ik} - xE_{ji}\right) - f(M) = 2x\left(\lambda_{i}(A) - \lambda_{j}(A)\right)\left(\lambda_{k}(B) - \lambda_{i}(B)\right) \leqslant 0$$
grandes-ecoles 2021 Q5 Structured Matrix Characterization
We consider the directed graph $G = (S, A)$ where $$\left\{ \begin{array}{l} S = \{1, 2, 3, 4\} \\ A = \{(1,2), (2,1), (1,3), (3,1), (1,4), (4,1), (2,3), (3,2), (2,4), (4,2), (3,4), (4,3)\} \end{array} \right.$$ We assume that, when the point is on one of the vertices of the graph, it has the same probability of going to each of the three other vertices of the graph. We set $$J_4 = \left(\begin{array}{llll} 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 \end{array}\right)$$ Express the transition matrix $T$ in terms of $J_4$ and $I_4$.
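A hedged sketch of the natural candidate: since each vertex leads to each of the three others with probability $1/3$, one expects $T = \frac{1}{3}(J_4 - I_4)$; the code below checks that this matrix is stochastic with zero diagonal, as the random walk requires:

```python
import numpy as np

J4, I4 = np.ones((4, 4)), np.eye(4)
# equal probability 1/3 of moving to each of the three other vertices
T = (J4 - I4) / 3

assert np.allclose(T.sum(axis=1), 1)   # each row is a probability distribution
assert np.all(np.diag(T) == 0)         # the point never stays in place
```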
grandes-ecoles 2021 Q6 Matrix Norm, Convergence, and Inequality
We denote by $\mathcal{B}_{n}(\mathbb{R})$ the set of doubly stochastic matrices in $\mathcal{M}_{n}(\mathbb{R})$, that is, the set of matrices $M = \left(m_{i,j}\right)_{1 \leqslant i,j \leqslant n}$ whose coefficients are all non-negative and such that $\sum_{j=1}^{n} m_{i,j} = \sum_{j=1}^{n} m_{j,i} = 1$ for every $i \in \llbracket 1, n \rrbracket$.
We define $f : \left|\, \begin{array}{ccc} \mathcal{M}_{n}(\mathbb{R}) & \rightarrow & \mathbb{R} \\ M & \mapsto & \sum_{1 \leqslant i,j \leqslant n} m_{i,j}\left(\lambda_{i}(A) - \lambda_{j}(B)\right)^{2} \end{array}\right.$
Let $n \geqslant 2$ and let $M = \left(m_{i,j}\right)_{1 \leqslant i,j \leqslant n} \in \mathcal{B}_{n}(\mathbb{R})$ be a matrix different from the identity. We denote by $i$ the smallest integer in $\llbracket 1,n \rrbracket$ such that $m_{i,i} \neq 1$. Show that there exists a matrix $M^{\prime} = \left(m_{i,j}^{\prime}\right)_{1 \leqslant i,j \leqslant n} \in \mathcal{B}_{n}(\mathbb{R})$ such that $f\left(M^{\prime}\right) \leqslant f(M)$ and $m_{j,j}^{\prime} = 1$ for every $j \in \llbracket 1,i \rrbracket$.
grandes-ecoles 2021 Q6 Matrix Decomposition and Factorization
We consider the directed graph $G = (S, A)$ where $$\left\{ \begin{array}{l} S = \{1, 2, 3, 4\} \\ A = \{(1,2), (2,1), (1,3), (3,1), (1,4), (4,1), (2,3), (3,2), (2,4), (4,2), (3,4), (4,3)\} \end{array} \right.$$ We assume that, when the point is on one of the vertices of the graph, it has the same probability of going to each of the three other vertices of the graph. We set $$J_4 = \left(\begin{array}{llll} 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 \end{array}\right)$$ Prove that there exists a matrix $Q \in \mathcal{O}_4(\mathbb{R})$ such that $$T = \frac{1}{3} Q \left(\begin{array}{cccc} -1 & 0 & 0 & 0 \\ 0 & -1 & 0 & 0 \\ 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & 3 \end{array}\right) Q^{\top}$$
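Such a $Q$ exists because $T$ is symmetric, hence orthogonally diagonalizable. Taking $T = \frac{1}{3}(J_4 - I_4)$ as the transition matrix of this walk, $J_4$ has eigenvalues $4, 0, 0, 0$, so $T$ has eigenvalues $1$ and $-1/3$ (with multiplicity $3$), matching the diagonal $\frac{1}{3}\operatorname{diag}(-1,-1,-1,3)$. A numerical check of the spectrum:

```python
import numpy as np

T = (np.ones((4, 4)) - np.eye(4)) / 3   # transition matrix of the walk
# T is symmetric; eigvalsh returns its (real) eigenvalues in ascending order
vals = np.linalg.eigvalsh(T)
assert np.allclose(vals, [-1/3, -1/3, -1/3, 1])
```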
grandes-ecoles 2021 Q7 Matrix Norm, Convergence, and Inequality
We denote by $\mathcal{B}_{n}(\mathbb{R})$ the set of doubly stochastic matrices in $\mathcal{M}_{n}(\mathbb{R})$, that is, the set of matrices $M = \left(m_{i,j}\right)_{1 \leqslant i,j \leqslant n}$ whose coefficients are all non-negative and such that $\sum_{j=1}^{n} m_{i,j} = \sum_{j=1}^{n} m_{j,i} = 1$ for every $i \in \llbracket 1, n \rrbracket$.
We define $f : \left|\, \begin{array}{ccc} \mathcal{M}_{n}(\mathbb{R}) & \rightarrow & \mathbb{R} \\ M & \mapsto & \sum_{1 \leqslant i,j \leqslant n} m_{i,j}\left(\lambda_{i}(A) - \lambda_{j}(B)\right)^{2} \end{array}\right.$
Deduce that $$\min\left\{f(M) \mid M \in \mathcal{B}_{n}(\mathbb{R})\right\} = f\left(I_{n}\right)$$
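The claim reflects the rearrangement inequality: $f$ is affine in $M$, so its minimum over the Birkhoff polytope is attained at a permutation matrix, and with both spectra sorted in the same order the identity permutation is optimal. A randomized sketch (arbitrary sorted reals stand in for the eigenvalues; doubly stochastic samples are built as convex combinations of permutation matrices, which Birkhoff's theorem shows is no restriction):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
lamA = np.sort(rng.standard_normal(n))   # both spectra sorted the same way
lamB = np.sort(rng.standard_normal(n))

def f(M):
    return sum(M[i, j] * (lamA[i] - lamB[j]) ** 2
               for i in range(n) for j in range(n))

# convex combinations of permutation matrices are doubly stochastic
for _ in range(200):
    w = rng.dirichlet(np.ones(5))
    M = sum(wi * np.eye(n)[rng.permutation(n)] for wi in w)
    assert f(M) >= f(np.eye(n)) - 1e-12
```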
grandes-ecoles 2021 Q7 Matrix Power Computation and Application
We consider the directed graph $G = (S, A)$ where $$\left\{ \begin{array}{l} S = \{1, 2, 3, 4\} \\ A = \{(1,2), (2,1), (1,3), (3,1), (1,4), (4,1), (2,3), (3,2), (2,4), (4,2), (3,4), (4,3)\} \end{array} \right.$$ We assume that, when the point is on one of the vertices of the graph, it has the same probability of going to each of the three other vertices of the graph. Show that the sequence of matrices $\left(T^k\right)_{k \in \mathbb{N}}$ converges and identify geometrically the endomorphism canonically associated with the limit matrix.
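Taking $T = \frac{1}{3}(J_4 - I_4)$, the eigenvalue $1$ is simple and the other eigenvalues equal $-1/3$ (of modulus $< 1$), so $T^k$ converges to the rank-one matrix $\frac{1}{4}J_4$: the orthogonal projection onto the line spanned by $(1,1,1,1)^{\top}$. A numerical sketch:

```python
import numpy as np

T = (np.ones((4, 4)) - np.eye(4)) / 3
# the (-1/3)-eigenspace decays geometrically, leaving the projection onto
# the eigenvalue-1 line, i.e. the matrix with every entry equal to 1/4
Tk = np.linalg.matrix_power(T, 60)
assert np.allclose(Tk, np.full((4, 4), 1/4))
```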
grandes-ecoles 2021 Q8 Matrix Norm, Convergence, and Inequality
Deduce that $$\forall (A,B) \in \mathcal{S}_{n}(\mathbb{R})^{2}, \quad \sum_{i=1}^{n}\left(\lambda_{i}(A) - \lambda_{i}(B)\right)^{2} \leqslant \|A - B\|_{F}^{2}.$$
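This is the Hoffman–Wielandt inequality for real symmetric matrices, with both spectra sorted in the same order. A randomized check (NumPy's `eigvalsh` returns eigenvalues in ascending order for both matrices):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 6
for _ in range(100):
    A = rng.standard_normal((n, n)); A = (A + A.T) / 2
    B = rng.standard_normal((n, n)); B = (B + B.T) / 2
    lamA = np.linalg.eigvalsh(A)    # ascending order
    lamB = np.linalg.eigvalsh(B)
    # Hoffman-Wielandt: matched sorted eigenvalues are closer than the matrices
    assert np.sum((lamA - lamB) ** 2) <= np.linalg.norm(A - B) ** 2 + 1e-9
```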