LFM Pure


grandes-ecoles 2021 Q8 Matrix Norm, Convergence, and Inequality
Deduce that $$\forall (A,B) \in \mathcal{S}_{n}(\mathbb{R})^{2}, \quad \sum_{i=1}^{n} \left(\lambda_{i}(A) - \lambda_{i}(B)\right)^{2} \leqslant \|A - B\|_{F}^{2}.$$
grandes-ecoles 2021 Q8 Matrix Power Computation and Application
We consider the directed graph $G = ( S , A )$ where $$\left\{ \begin{array} { l } S = \{ 1,2,3,4 \} \\ A = \{ ( 1,2 ) , ( 2,1 ) , ( 1,3 ) , ( 3,1 ) , ( 1,4 ) , ( 4,1 ) , ( 2,3 ) , ( 3,2 ) , ( 2,4 ) , ( 4,2 ) , ( 3,4 ) , ( 4,3 ) \} \end{array} \right.$$ We assume that, when the point is on one of the vertices of the graph, it has the same probability of going to each of the three other vertices of the graph. Show that, for any row vector $P ^ { ( 0 ) } = \left( p _ { 1 } ^ { ( 0 ) } , p _ { 2 } ^ { ( 0 ) } , p _ { 3 } ^ { ( 0 ) } , p _ { 4 } ^ { ( 0 ) } \right)$, where for $1 \leqslant i \leqslant 4$, $p _ { i } ^ { ( 0 ) }$ is the probability that the point is initially on vertex $i$, the sequence $\left( P ^ { ( k ) } \right) _ { k \in \mathbb { N } }$ converges to the row vector $( 1 / 4,1 / 4,1 / 4,1 / 4 )$.
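This convergence is easy to check numerically. The sketch below (an illustration only, not part of the exam solution; it assumes NumPy) builds the transition matrix of this complete graph on 4 vertices and iterates from the initial distribution concentrated on vertex 1:

```python
import numpy as np

# Transition matrix of the complete graph on {1,2,3,4}: from each vertex
# the point moves to each of the three other vertices with probability 1/3.
T = (np.ones((4, 4)) - np.eye(4)) / 3

# Start from any probability row vector, e.g. concentrated on vertex 1.
P0 = np.array([1.0, 0.0, 0.0, 0.0])

# P^(k) = P^(0) T^k converges to the uniform distribution (1/4, ..., 1/4).
Pk = P0 @ np.linalg.matrix_power(T, 50)
print(Pk)
```

The eigenvalues of $T$ are $1$ and $-1/3$ (triple), so the non-uniform part of $P^{(k)}$ decays like $(1/3)^k$.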
grandes-ecoles 2021 Q9 Matrix Algebra and Product Properties
We consider the graph $G$ represented in Figure 2. We recall that, when the point is on one of the vertices of the graph, it has the same probability of going to each of the vertices to which it is connected. We assume that initially, the point is on vertex 1, so that $P ^ { ( 0 ) } = ( 1,0,0,0,0,0,0,0 )$. We denote $S _ { 1 } = \{ 1,3,6,8 \}$ and $S _ { 2 } = \{ 2,4,5,7 \}$.
Give the transition matrix $T$ of this graph and calculate $$( 1,1,1,1,1,1,1,1 ) T .$$
grandes-ecoles 2021 Q12 Linear System and Inverse Existence
Let $M \in M _ { n } ( \mathbb { R } )$, all of whose coefficients are non-negative. Show that $M$ is a stochastic matrix if and only if $$M \left( \begin{array} { c } 1 \\ \vdots \\ 1 \end{array} \right) = \left( \begin{array} { c } 1 \\ \vdots \\ 1 \end{array} \right) .$$
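The criterion says that, for a nonnegative matrix, being stochastic is exactly the condition "each row sums to 1", i.e. the all-ones column vector is fixed by $M$. A minimal numerical illustration (assuming NumPy; the matrix shown is an arbitrary example, not from the exam):

```python
import numpy as np

# A nonnegative matrix is stochastic iff M @ (1,...,1)^T = (1,...,1)^T,
# i.e. every row sums to 1.
M = np.array([[0.2, 0.8],
              [0.5, 0.5]])
ones = np.ones(2)
is_stochastic = np.all(M >= 0) and np.allclose(M @ ones, ones)
print(is_stochastic)
```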
grandes-ecoles 2021 Q13 Linear System and Inverse Existence
Show that the transition matrix of a graph (defined in part I) is a stochastic matrix and that, for every natural integer $k$, the vector $P ^ { ( k ) }$, also defined in part I, is a probability distribution.
grandes-ecoles 2021 Q14 Linear Transformation and Endomorphism Properties
Let $M \in \mathcal { M } _ { n } ( \mathbb { R } )$ and $N \in \mathcal { M } _ { n } ( \mathbb { R } )$ be two stochastic matrices, $X \in \mathbb { R } ^ { n }$ a probability distribution and $\alpha \in [ 0,1 ]$. Show that $X M$ is a probability distribution.
grandes-ecoles 2021 Q15 Matrix Algebra and Product Properties
Let $M \in \mathcal { M } _ { n } ( \mathbb { R } )$ and $N \in \mathcal { M } _ { n } ( \mathbb { R } )$ be two stochastic matrices, $X \in \mathbb { R } ^ { n }$ a probability distribution and $\alpha \in [ 0,1 ]$. Show that $M N$ is a stochastic matrix.
grandes-ecoles 2021 Q16 Matrix Algebra and Product Properties
Let $M \in \mathcal { M } _ { n } ( \mathbb { R } )$ and $N \in \mathcal { M } _ { n } ( \mathbb { R } )$ be two stochastic matrices, $X \in \mathbb { R } ^ { n }$ a probability distribution and $\alpha \in [ 0,1 ]$. Show that $\alpha M + ( 1 - \alpha ) N$ is a stochastic matrix.
grandes-ecoles 2021 Q19 Linear Transformation and Endomorphism Properties
We now assume that all coefficients $m _ { i , j } ( 1 \leqslant i , j \leqslant n )$ of the stochastic matrix $M$ are strictly positive.
Prove that $\operatorname { dim } \left( \operatorname { ker } \left( M - I _ { n } \right) \right) = 1$.
If $(u_1, \ldots, u_n)$ denotes the real components, in the canonical basis, of a vector of $\operatorname{ker}(M - I_n)$, one may consider $\min_{1 \leqslant i \leqslant n} u_i$.
grandes-ecoles 2021 Q20 Linear Transformation and Endomorphism Properties
We now assume that all coefficients $m _ { i , j } ( 1 \leqslant i , j \leqslant n )$ of the stochastic matrix $M$ are strictly positive.
Deduce that there exists at most one probability distribution $X$ invariant by $M$, that is, satisfying $X M = X$.
grandes-ecoles 2021 Q21 Matrix Power Computation and Application
We now assume that all coefficients $m _ { i , j } ( 1 \leqslant i , j \leqslant n )$ of the stochastic matrix $M$ are strictly positive. We set $\varepsilon = \min _ { 1 \leqslant i , j \leqslant n } m _ { i , j }$. We are interested in the sequence $\left( M ^ { k } \right) _ { k \in \mathbb { N } }$ of powers of $M$. We denote by $m _ { i , j } ^ { ( k ) }$ the coefficient of the matrix $M ^ { k }$ located in row $i$ and column $j$.
For all $j \in \llbracket 1 , n \rrbracket$, we set $$\left\{ \begin{array} { l } \alpha _ { j } ^ { ( k ) } = \min _ { 1 \leqslant i \leqslant n } m _ { i , j } ^ { ( k ) } , \\ \beta _ { j } ^ { ( k ) } = \max _ { 1 \leqslant i \leqslant n } m _ { i , j } ^ { ( k ) } . \end{array} \right.$$
In the following four questions, $j$ is a fixed integer in $\llbracket 1 , n \rrbracket$ and $k$ is fixed in $\mathbb { N }$. Prove the inequalities $\alpha _ { j } ^ { ( k ) } \leqslant \alpha _ { j } ^ { ( k + 1 ) } \leqslant \beta _ { j } ^ { ( k + 1 ) } \leqslant \beta _ { j } ^ { ( k ) }$.
grandes-ecoles 2021 Q22 Matrix Entry and Coefficient Identities
We now assume that all coefficients $m _ { i , j } ( 1 \leqslant i , j \leqslant n )$ of the stochastic matrix $M$ are strictly positive. We set $\varepsilon = \min _ { 1 \leqslant i , j \leqslant n } m _ { i , j }$. We are interested in the sequence $\left( M ^ { k } \right) _ { k \in \mathbb { N } }$ of powers of $M$. We denote by $m _ { i , j } ^ { ( k ) }$ the coefficient of the matrix $M ^ { k }$ located in row $i$ and column $j$.
For all $j \in \llbracket 1 , n \rrbracket$, we set $$\left\{ \begin{array} { l } \alpha _ { j } ^ { ( k ) } = \min _ { 1 \leqslant i \leqslant n } m _ { i , j } ^ { ( k ) } , \\ \beta _ { j } ^ { ( k ) } = \max _ { 1 \leqslant i \leqslant n } m _ { i , j } ^ { ( k ) } . \end{array} \right.$$
In the following four questions, $j$ is a fixed integer in $\llbracket 1 , n \rrbracket$ and $k$ is fixed in $\mathbb { N }$. Prove that there exists a pair $\left( i _ { 0 } , j _ { 0 } \right) \in \llbracket 1 , n \rrbracket ^ { 2 }$ such that $$\alpha _ { j } ^ { ( k + 1 ) } - \alpha _ { j } ^ { ( k ) } \geqslant m _ { i _ { 0 } , j _ { 0 } } \left( \beta _ { j } ^ { ( k ) } - \alpha _ { j } ^ { ( k ) } \right) .$$
grandes-ecoles 2021 Q23 Matrix Entry and Coefficient Identities
We now assume that all coefficients $m _ { i , j } ( 1 \leqslant i , j \leqslant n )$ of the stochastic matrix $M$ are strictly positive. We set $\varepsilon = \min _ { 1 \leqslant i , j \leqslant n } m _ { i , j }$. We are interested in the sequence $\left( M ^ { k } \right) _ { k \in \mathbb { N } }$ of powers of $M$. We denote by $m _ { i , j } ^ { ( k ) }$ the coefficient of the matrix $M ^ { k }$ located in row $i$ and column $j$.
For all $j \in \llbracket 1 , n \rrbracket$, we set $$\left\{ \begin{array} { l } \alpha _ { j } ^ { ( k ) } = \min _ { 1 \leqslant i \leqslant n } m _ { i , j } ^ { ( k ) } , \\ \beta _ { j } ^ { ( k ) } = \max _ { 1 \leqslant i \leqslant n } m _ { i , j } ^ { ( k ) } . \end{array} \right.$$
In the following four questions, $j$ is a fixed integer in $\llbracket 1 , n \rrbracket$ and $k$ is fixed in $\mathbb { N }$. Prove that there exists a pair $\left( i _ { 1 } , j _ { 1 } \right) \in \llbracket 1 , n \rrbracket ^ { 2 }$ such that $$\beta _ { j } ^ { ( k ) } - \beta _ { j } ^ { ( k + 1 ) } \geqslant m _ { i _ { 1 } , j _ { 1 } } \left( \beta _ { j } ^ { ( k ) } - \alpha _ { j } ^ { ( k ) } \right) .$$
grandes-ecoles 2021 Q24 Determinant and Rank Computation
For all $n \in \mathbb { N }$, let $G _ { n } = \left( \left( X ^ { i - 1 } \mid X ^ { j - 1 } \right) \right) _ { 1 \leqslant i , j \leqslant n + 1 }$ be the Gram matrix and let $\left( V _ { n } \right) _ { n \in \mathbb { N } }$ be an orthogonal system. Let $Q _ { n } = \left( q _ { i , j } \right) _ { 1 \leqslant i , j \leqslant n + 1 }$ be the matrix of the family $\left( V _ { 0 } , V _ { 1 } , \ldots , V _ { n } \right)$ in the basis $\left( 1 , X , \ldots , X ^ { n } \right)$ of $\mathbb { R } _ { n } [ X ]$. Show that $Q _ { n }$ is upper triangular and that $\operatorname { det } Q _ { n } = 1$.
grandes-ecoles 2021 Q24 Matrix Norm, Convergence, and Inequality
We now assume that all coefficients $m _ { i , j } ( 1 \leqslant i , j \leqslant n )$ of the stochastic matrix $M$ are strictly positive. We set $\varepsilon = \min _ { 1 \leqslant i , j \leqslant n } m _ { i , j }$. We are interested in the sequence $\left( M ^ { k } \right) _ { k \in \mathbb { N } }$ of powers of $M$. We denote by $m _ { i , j } ^ { ( k ) }$ the coefficient of the matrix $M ^ { k }$ located in row $i$ and column $j$.
For all $j \in \llbracket 1 , n \rrbracket$, we set $$\left\{ \begin{array} { l } \alpha _ { j } ^ { ( k ) } = \min _ { 1 \leqslant i \leqslant n } m _ { i , j } ^ { ( k ) } , \\ \beta _ { j } ^ { ( k ) } = \max _ { 1 \leqslant i \leqslant n } m _ { i , j } ^ { ( k ) } . \end{array} \right.$$
Deduce that $\beta _ { j } ^ { ( k + 1 ) } - \alpha _ { j } ^ { ( k + 1 ) } \leqslant ( 1 - 2 \varepsilon ) \left( \beta _ { j } ^ { ( k ) } - \alpha _ { j } ^ { ( k ) } \right)$.
grandes-ecoles 2021 Q25 Matrix Algebra and Product Properties
For all $n \in \mathbb { N }$, let $G _ { n } = \left( \left( X ^ { i - 1 } \mid X ^ { j - 1 } \right) \right) _ { 1 \leqslant i , j \leqslant n + 1 }$, let $G _ { n } ^ { \prime } = \left( \left( V _ { i - 1 } \mid V _ { j - 1 } \right) \right) _ { 1 \leqslant i , j \leqslant n + 1 }$, and let $Q _ { n }$ be the matrix of the family $\left( V _ { 0 } , V _ { 1 } , \ldots , V _ { n } \right)$ in the basis $\left( 1 , X , \ldots , X ^ { n } \right)$. Show that $Q _ { n } ^ { \top } G _ { n } Q _ { n } = G _ { n } ^ { \prime }$, where $Q _ { n } ^ { \top }$ is the transpose of the matrix $Q _ { n }$.
grandes-ecoles 2021 Q25 Matrix Power Computation and Application
We now assume that all coefficients $m _ { i , j } ( 1 \leqslant i , j \leqslant n )$ of the stochastic matrix $M$ are strictly positive. We set $\varepsilon = \min _ { 1 \leqslant i , j \leqslant n } m _ { i , j }$.
Prove that the sequence $\left(M^k\right)_{k \in \mathbb{N}}$ converges to a stochastic matrix $B = \begin{pmatrix} b_1 & \cdots & b_n \\ \vdots & & \vdots \\ b_1 & \cdots & b_n \end{pmatrix}$ all of whose rows are equal.
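The mechanism behind this question is the contraction established in Q24: each column's spread $\beta_j^{(k)} - \alpha_j^{(k)}$ shrinks by a factor of at least $(1 - 2\varepsilon)$ per step, forcing all rows of $M^k$ toward a common limit row. A numerical sketch (illustration only, assuming NumPy; the random matrix and seed are arbitrary):

```python
import numpy as np

# Build a stochastic matrix with strictly positive entries.
rng = np.random.default_rng(0)
M = rng.random((5, 5)) + 0.1           # entries in [0.1, 1.1): strictly positive
M /= M.sum(axis=1, keepdims=True)      # normalize rows -> stochastic

eps = M.min()                          # epsilon = min coefficient, in (0, 1/2]
Mk = np.linalg.matrix_power(M, 60)

# Column-wise spread beta_j - alpha_j contracts by at least (1 - 2*eps)
# at each step, so after many steps all rows of M^k nearly coincide.
spread = (Mk.max(axis=0) - Mk.min(axis=0)).max()
print(spread)
```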
grandes-ecoles 2021 Q26 Determinant and Rank Computation
For all $n \in \mathbb { N }$, let $G _ { n } = \left( \left( X ^ { i - 1 } \mid X ^ { j - 1 } \right) \right) _ { 1 \leqslant i , j \leqslant n + 1 }$ and let $\left( V _ { n } \right) _ { n \in \mathbb { N } }$ be an orthogonal system. Deduce that $\operatorname { det } G _ { n } = \prod _ { i = 0 } ^ { n } \left\| V _ { i } \right\| ^ { 2 }$.
grandes-ecoles 2021 Q29 Structured Matrix Characterization
We model the web by a directed graph with $n$ vertices. For every integer $i \in \llbracket 1 , n \rrbracket$, $\lambda _ { i }$ denotes the number of outgoing edges from page $i$. We assume that no page points to itself. A surfer navigates the web in the following way: when on page $i$,
  • if page $i$ points to other pages, he goes randomly, with equal probability, to one of these pages;
  • if page $i$ points to no other page, he remains on page $i$.
Verify that the transition matrix associated with this navigation model is the matrix $A = \left( a_{i,j} \right)_{1 \leqslant i,j \leqslant n}$ with $$a_{i,i} = \begin{cases} 1 & \text{if page } i \text{ points to no other page,} \\ 0 & \text{otherwise,} \end{cases} \qquad a_{i,j} = \begin{cases} 1/\lambda_i & \text{if } i \rightarrow j, \\ 0 & \text{if } i \nrightarrow j, \end{cases} \quad \text{for } i \neq j.$$
grandes-ecoles 2021 Q30 Structured Matrix Characterization
We model the web by a directed graph with $n$ vertices. The matrix $A$ is the stochastic matrix described in question 29. We define $$B = ( 1 - \alpha ) A + \frac { \alpha } { n } J _ { n }$$ where $J _ { n }$ is the matrix in $\mathcal { M } _ { n } ( \mathbb { R } )$ whose coefficients are all equal to $1$, $A$ is the stochastic matrix described in question 29 and $\alpha$ is a real number in $] 0,1 [$, called the damping factor.
Show that $B$ is a stochastic matrix whose coefficients are all strictly positive.
grandes-ecoles 2021 Q31 Linear Transformation and Endomorphism Properties
We model the web by a directed graph with $n$ vertices. The matrix $A$ is the stochastic matrix described in question 29. We define $$B = ( 1 - \alpha ) A + \frac { \alpha } { n } J _ { n }$$ where $J _ { n }$ is the matrix in $\mathcal { M } _ { n } ( \mathbb { R } )$ whose coefficients are all equal to $1$, $A$ is the stochastic matrix described in question 29 and $\alpha$ is a real number in $] 0,1 [$, called the damping factor.
In the navigation model admitting $B$ as its transition matrix, give the probability of leaving a page containing no links to another page.
grandes-ecoles 2021 Q32 Matrix Power Computation and Application
We model the web by a directed graph with $n$ vertices. The matrix $A$ is the stochastic matrix described in question 29. We define $$B = ( 1 - \alpha ) A + \frac { \alpha } { n } J _ { n }$$ where $J _ { n }$ is the matrix in $\mathcal { M } _ { n } ( \mathbb { R } )$ whose coefficients are all equal to $1$, $A$ is the stochastic matrix described in question 29 and $\alpha$ is a real number in $] 0,1 [$, called the damping factor.
Let $Q$ be a probability distribution. We define the sequence $\left( Q^{(k)} \right)_{k \in \mathbb{N}}$ by $Q^{(k)} = Q B^k$ for every natural number $k$. Prove that the sequence $\left( Q^{(k)} \right)_{k \in \mathbb{N}}$ converges and that its limit $Q^{\infty}$ satisfies conditions (i) and (ii) described in the introduction to this part; it thus provides relevance scores for the $n$ pages of the web. Then express the relevance of each page $j$ as a function of the relevances of the pages pointing to it, distinguishing pages that point to at least one other page from those that do not.
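This question is the PageRank construction: since all coefficients of $B$ are strictly positive (Q30), the results of the previous part apply and $Q B^k$ converges to the unique invariant distribution. The sketch below illustrates it numerically (assuming NumPy; the 4-page link structure and the value of $\alpha$ are hypothetical examples, not from the exam):

```python
import numpy as np

# Hypothetical 4-page web, in the style of question 29.
# Page 3 has no outgoing link, so the surfer stays there under A.
links = {0: [1, 2], 1: [2], 2: [0], 3: []}
n, alpha = 4, 0.15                       # alpha: damping factor in ]0,1[

A = np.zeros((n, n))
for i, out in links.items():
    if out:
        A[i, out] = 1.0 / len(out)       # equal probability over outgoing links
    else:
        A[i, i] = 1.0                    # dangling page: remain in place

# B = (1 - alpha) A + (alpha / n) J_n has strictly positive coefficients.
B = (1 - alpha) * A + (alpha / n) * np.ones((n, n))

Q = np.full(n, 1.0 / n)                  # any initial probability distribution
for _ in range(200):
    Q = Q @ B                            # Q^(k) = Q B^k
print(Q)                                 # approximates Q_infinity, the scores
```

After convergence, $Q^{\infty} B = Q^{\infty}$: the limit is the invariant distribution, and its entries rank the pages.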
grandes-ecoles 2022 Q1 Diagonalizability and Similarity
Prove that a matrix $A \in \mathcal{M}_{n}(\mathbb{R})$ is orthodiagonalizable if and only if it is symmetric.
grandes-ecoles 2022 Q1 Matrix Algebra and Product Properties
We are given two matrices $A$ and $B$ in $\mathcal{M}_n(\mathbf{K})$. We assume that $A$ and $B$ commute.
Show that the matrices $A$ and $e^{B}$ commute.
grandes-ecoles 2022 Q1 Diagonalizability and Similarity
Let $A$ be the matrix in $M_{2}(\mathbf{R})$ defined by: $$A = \left(\begin{array}{cc} 1 & 1 \\ -1 & 3 \end{array}\right)$$ Is the matrix $A$ semi-simple?