
grandes-ecoles 2013 Q21 Matrix Norm, Convergence, and Inequality
In the optimal version of the Jacobi algorithm, we choose, for each $m$, a pair $(p_m, q_m)$ at which $\left|\sigma_{ij}^{(m)}\right|$ is maximal over the pairs $i < j$. In other words, $$\forall i < j, \quad \left|\sigma_{ij}^{(m)}\right| \leqslant \left|\sigma_{p_m q_m}^{(m)}\right|$$
Let $m \in \mathbb{N}$. Show that $$\left\|D - D^{(m)}\right\| \leqslant \frac{\rho^m}{1 - \rho} \left\|E^{(0)}\right\|$$
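The pivot rule in the statement above can be sketched in a few lines of Python (an illustrative helper, not part of the problem): scan the strict upper triangle for the entry of largest absolute value.

```python
def jacobi_pivot(S):
    """Return the pair (p, q), p < q, maximising |S[p][q]| over the strict upper triangle."""
    n = len(S)
    p, q = 0, 1
    for i in range(n):
        for j in range(i + 1, n):
            if abs(S[i][j]) > abs(S[p][q]):
                p, q = i, j
    return p, q

S = [[4.0, 1.0, 3.0],
     [1.0, 5.0, 0.5],
     [3.0, 0.5, 6.0]]
print(jacobi_pivot(S))  # the largest off-diagonal entry is S[0][2] = 3.0, so (0, 2)
```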
grandes-ecoles 2013 Q22 Linear Transformation and Endomorphism Properties
Let $\mathcal{U}_q$ be the set of endomorphisms $\phi \in \mathcal{L}(V)$ that are compatible with $P_a$, and $\Psi_a : \mathcal{U}_q \rightarrow \mathcal{L}(W_{\ell})$ the unique algebra morphism such that $\Psi_a(\phi) \circ P_a = P_a \circ \phi$ for all $\phi \in \mathcal{U}_q$. Let $W$ be a non-zero subspace of $W_{\ell}$ stable under $\Psi_a(H)$.
22a. Show that $W$ contains at least one of the vectors $v_i$.
22b. What can be said if $W$ is moreover stable under $\Psi_a(E)$?
grandes-ecoles 2013 Q23 Eigenvalue and Characteristic Polynomial Analysis
Let $\mathcal{U}_q$ be the set of endomorphisms $\phi \in \mathcal{L}(V)$ that are compatible with $P_a$, and $\Psi_a : \mathcal{U}_q \rightarrow \mathcal{L}(W_{\ell})$ the unique algebra morphism such that $\Psi_a(\phi) \circ P_a = P_a \circ \phi$ for all $\phi \in \mathcal{U}_q$. Give a necessary and sufficient condition on $R(\lambda(0), \mu(0), q)$ for the operator $\Psi_a(F)$ to be nilpotent.
grandes-ecoles 2014 QIA Matrix Norm, Convergence, and Inequality
Show that, for every polynomial $P \in \mathbb{C}[X]$, the map $f_P : A \mapsto P(A)$ is a continuous function from $\mathcal{M}_d(\mathbb{R})$ to $\mathcal{M}_d(\mathbb{C})$.
grandes-ecoles 2014 QIB Matrix Norm, Convergence, and Inequality
Show that the map $(A, B) \mapsto \operatorname{Tr}\left({}^t A \times B\right)$ is an inner product on the space $\mathcal{M}_d(\mathbb{R})$.
grandes-ecoles 2014 QIC Matrix Norm, Convergence, and Inequality
In the rest of the problem, we denote by $\|\cdot\|$ the norm associated with the inner product $(A, B) \mapsto \operatorname{Tr}({}^t A \times B)$.
For all integers $i, j$ between 1 and $d$ and every matrix $A \in \mathcal{M}_d(\mathbb{R})$, compare $\left|A_{i,j}\right|$ and $\|A\|$.
grandes-ecoles 2014 QID Matrix Norm, Convergence, and Inequality
In the rest of the problem, we denote by $\|\cdot\|$ the norm associated with the inner product $(A, B) \mapsto \operatorname{Tr}({}^t A \times B)$.
Show that: $\forall (A, B) \in \mathcal{M}_d(\mathbb{R})^2, \|A \times B\| \leqslant \|A\| \cdot \|B\|$.
grandes-ecoles 2014 QIE Matrix Norm, Convergence, and Inequality
In the rest of the problem, we denote by $\|\cdot\|$ the norm associated with the inner product $(A, B) \mapsto \operatorname{Tr}({}^t A \times B)$.
For $n \in \mathbb{N}^*$ and $A \in \mathcal{M}_d(\mathbb{R})$, compare $\left\|A^n\right\|$ and $\|A\|^n$.
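As a numerical sanity check of the three inequalities of QIC–QIE for the Frobenius norm $\|A\| = \sqrt{\operatorname{Tr}({}^tA\,A)}$ (a sketch, not part of the problem; the matrices are arbitrary examples):

```python
import math

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def fro(A):
    """Frobenius norm: the square root of the sum of squared entries."""
    return math.sqrt(sum(x * x for row in A for x in row))

A = [[1.0, 2.0], [3.0, 4.0]]
B = [[0.0, 1.0], [1.0, 1.0]]

# QIC: every entry is bounded by the norm, since A_ij^2 is one term of the sum
assert all(abs(A[i][j]) <= fro(A) for i in range(2) for j in range(2))

# QID: submultiplicativity ||A B|| <= ||A|| ||B||
assert fro(matmul(A, B)) <= fro(A) * fro(B) + 1e-12

# QIE: iterating QID gives ||A^n|| <= ||A||^n
An = A
for _ in range(4):
    An = matmul(An, A)          # An = A^5 after the loop
assert fro(An) <= fro(A) ** 5 + 1e-6
```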
grandes-ecoles 2014 QIIIA2 Matrix Power Computation and Application
For $(A, B) \in \mathcal{M}_d(\mathbb{R})^2$ such that $A$ and $B$ commute, show that $\exp(\mathrm{i}A)\exp(\mathrm{i}B) = \exp(\mathrm{i}(A+B))$.
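The identity can be checked numerically with a truncated exponential series (an illustrative sketch only; the matrices below are arbitrary commuting examples, with $B = 2A$):

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def mexp(M, terms=40):
    """Truncated exponential series: sum_{k < terms} M^k / k!."""
    result = [[1.0 + 0j, 0j], [0j, 1.0 + 0j]]
    term = [row[:] for row in result]
    for k in range(1, terms):
        term = [[v / k for v in row] for row in matmul(term, M)]
        result = [[result[i][j] + term[i][j] for j in range(2)] for i in range(2)]
    return result

A = [[0.2, 0.5], [0.1, 0.3]]
B = [[0.4, 1.0], [0.2, 0.6]]          # B = 2A, so A and B commute
iA = [[1j * v for v in row] for row in A]
iB = [[1j * v for v in row] for row in B]
iAB = [[1j * (A[i][j] + B[i][j]) for j in range(2)] for i in range(2)]

lhs = matmul(mexp(iA), mexp(iB))      # exp(iA) exp(iB)
rhs = mexp(iAB)                       # exp(i(A + B))
assert all(abs(lhs[i][j] - rhs[i][j]) < 1e-9 for i in range(2) for j in range(2))
```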
grandes-ecoles 2014 QIIIA3 Matrix Algebra and Product Properties
For every $A \in \mathcal{M}_d(\mathbb{R})$, we define $$\cos(A) = \sum_{n=0}^{+\infty} (-1)^n \frac{A^{2n}}{(2n)!} \quad \text{and} \quad \sin(A) = \sum_{n=0}^{+\infty} (-1)^n \frac{A^{2n+1}}{(2n+1)!}$$ Show $$\forall A \in \mathcal{M}_d(\mathbb{R}), \quad \cos(A)^2 + \sin(A)^2 = I_d$$
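A quick truncated-series check of this identity (a sketch with an arbitrary example matrix; `trig_series` is a hypothetical helper summing the two series above):

```python
import math

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def trig_series(M, shift, terms=25):
    """sum_k (-1)^k M^{2k+shift} / (2k+shift)!  -- shift=0 gives cos(M), shift=1 gives sin(M)."""
    power = [row[:] for row in M] if shift == 1 else [[1.0, 0.0], [0.0, 1.0]]
    result = [[0.0, 0.0], [0.0, 0.0]]
    for k in range(terms):
        coeff = (-1) ** k / math.factorial(2 * k + shift)
        result = [[result[i][j] + coeff * power[i][j] for j in range(2)] for i in range(2)]
        power = matmul(matmul(power, M), M)      # raise the exponent by 2
    return result

A = [[0.3, 0.1], [0.2, 0.4]]
C, S = trig_series(A, 0), trig_series(A, 1)
CC, SS = matmul(C, C), matmul(S, S)
# cos(A)^2 + sin(A)^2 should be the identity
assert all(abs(CC[i][j] + SS[i][j] - (1.0 if i == j else 0.0)) < 1e-9
           for i in range(2) for j in range(2))
```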
grandes-ecoles 2014 QIIIB1 Linear System and Inverse Existence
We fix a matrix $A \in \mathcal{M}_d(\mathbb{R})$.
For $R$ large enough, show that, for every $\theta \in \mathbb{R}$, the matrix $(R\mathrm{e}^{\mathrm{i}\theta} I_d - A)$ is invertible in $\mathcal{M}_d(\mathbb{C})$, and that its inverse is the matrix $$\left(R\mathrm{e}^{\mathrm{i}\theta}\right)^{-1} \sum_{n=0}^{+\infty} \left(R\mathrm{e}^{\mathrm{i}\theta}\right)^{-n} A^n$$
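Numerically, once $R$ exceeds the spectral radius of $A$, the truncated series does invert $R\mathrm{e}^{\mathrm{i}\theta} I_d - A$ up to a small remainder (a sketch with an arbitrary $2\times 2$ example and $R = 10$):

```python
import cmath

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

A = [[1.0, 2.0], [3.0, 4.0]]          # spectral radius ~ 5.37, so R = 10 is "large enough"
R, theta = 10.0, 0.7
z = R * cmath.exp(1j * theta)

# Truncated candidate inverse: z^{-1} * sum_{n=0}^{N} z^{-n} A^n
N = 60
inv = [[0j, 0j], [0j, 0j]]
power = [[1.0 + 0j, 0j], [0j, 1.0 + 0j]]      # A^0
for n in range(N + 1):
    scale = z ** (-(n + 1))
    inv = [[inv[i][j] + scale * power[i][j] for j in range(2)] for i in range(2)]
    power = matmul(power, A)

M = [[z - A[0][0], -A[0][1]], [-A[1][0], z - A[1][1]]]   # z I_2 - A
prod = matmul(M, inv)
assert all(abs(prod[i][j] - (1 if i == j else 0)) < 1e-10 for i in range(2) for j in range(2))
```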
grandes-ecoles 2014 QIIIB2 Matrix Power Computation and Application
We fix a matrix $A \in \mathcal{M}_d(\mathbb{R})$.
Show that, for every $n \in \mathbb{N}^*$ and every $R$ large enough, the matrix $$\frac{1}{2\pi} \int_0^{2\pi} \left(R\mathrm{e}^{\mathrm{i}\theta}\right)^n \left(R\mathrm{e}^{\mathrm{i}\theta} I_d - A\right)^{-1} \mathrm{~d}\theta$$ equals $A^{n-1}$.
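Approximating the integral by a Riemann sum over equally spaced angles recovers $A^{n-1}$ to high accuracy (a sketch, not part of the problem; `inv2` is a hypothetical helper implementing the $2\times 2$ adjugate inverse):

```python
import cmath

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def inv2(M):
    """Inverse of a 2x2 matrix via the adjugate formula."""
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [[M[1][1] / det, -M[0][1] / det], [-M[1][0] / det, M[0][0] / det]]

A = [[1.0, 2.0], [3.0, 4.0]]          # spectral radius ~ 5.37, R = 10 is large enough
R, n, samples = 10.0, 3, 2000

acc = [[0j, 0j], [0j, 0j]]
for k in range(samples):
    theta = 2 * cmath.pi * k / samples
    z = R * cmath.exp(1j * theta)
    res = inv2([[z - A[0][0], -A[0][1]], [-A[1][0], z - A[1][1]]])   # (z I_2 - A)^{-1}
    acc = [[acc[i][j] + z ** n * res[i][j] for j in range(2)] for i in range(2)]
# (1/2pi) * integral ~ the mean of the sampled integrand over [0, 2pi)
mean = [[acc[i][j] / samples for j in range(2)] for i in range(2)]

A2 = matmul(A, A)                     # expected value A^{n-1} = A^2
assert all(abs(mean[i][j] - A2[i][j]) < 1e-6 for i in range(2) for j in range(2))
```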
grandes-ecoles 2014 QIIIB3 Matrix Power Computation and Application
We fix a matrix $A \in \mathcal{M}_d(\mathbb{R})$. We consider the characteristic polynomial $$\chi_A(X) = \det\left(A - X \cdot I_d\right) = \sum_{k=0}^d a_k X^k$$ Show that for $R$ large enough: $$\chi_A(A) = \frac{1}{2\pi} \int_0^{2\pi} \left(R\mathrm{e}^{\mathrm{i}\theta}\right) \chi_A\left(R\mathrm{e}^{\mathrm{i}\theta}\right) \left(R\mathrm{e}^{\mathrm{i}\theta} I_d - A\right)^{-1} \mathrm{~d}\theta$$
grandes-ecoles 2014 QIIIB4 Eigenvalue and Characteristic Polynomial Analysis
We fix a matrix $A \in \mathcal{M}_d(\mathbb{R})$, with characteristic polynomial $\chi_A(X) = \det(A - X \cdot I_d) = \sum_{k=0}^d a_k X^k$.
Deduce that $\chi_A(A)$ is the zero matrix. One may use cofactor matrices.
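For $d = 2$ the claim (the Cayley–Hamilton theorem) can be checked directly: there $\chi_A(X) = \det(A - X I_2) = X^2 - \operatorname{Tr}(A)X + \det(A)$, and an arbitrary numerical example gives the zero matrix (an illustrative sketch, not the requested deduction):

```python
A = [[1.0, 2.0], [3.0, 4.0]]
tr = A[0][0] + A[1][1]                                   # Tr(A) = 5
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]              # det(A) = -2
A2 = [[sum(A[i][k] * A[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

# chi_A(A) = A^2 - Tr(A) A + det(A) I_2
chi = [[A2[i][j] - tr * A[i][j] + (det if i == j else 0.0) for j in range(2)]
       for i in range(2)]
assert all(abs(chi[i][j]) < 1e-12 for i in range(2) for j in range(2))
```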
grandes-ecoles 2014 QVB Determinant and Rank Computation
We are given a continuous function $\xi : \mathbb{R} \rightarrow \mathbb{R}$ satisfying condition (V.1) (with $d \geqslant 2$), where $$\forall A \in \mathcal{M}_d(\mathbb{R}), \quad A \text{ invertible} \Rightarrow f_\xi(A) = \left(\xi(A_{i,j})\right)_{1\leqslant i,j\leqslant d} \text{ invertible} \tag{V.1}$$
Show $$\forall (a, b, c, d) \in \mathbb{R}^4, \quad ad \neq bc \Rightarrow \xi(a)\xi(d) \neq \xi(b)\xi(c)$$ One may consider the matrix $\begin{pmatrix} a & b & 0 & \cdots & 0 \\ c & d & 0 & \cdots & 0 \\ c & d & & & \\ \vdots & \vdots & & I_{d-2} & \\ c & d & & & \end{pmatrix}$.
grandes-ecoles 2014 QVI Determinant and Rank Computation
For $\lambda \in \mathbb{R}$, calculate the determinant of the matrix $A_\lambda \in \mathcal{M}_d(\mathbb{R})$ having only 1's off the diagonal and only $\lambda$ on the diagonal.
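Since $A_\lambda = (\lambda - 1)I_d + J$ with $J$ the all-ones matrix (whose eigenvalues are $d$ once and $0$ with multiplicity $d-1$), one expects $\det A_\lambda = (\lambda - 1)^{d-1}(\lambda + d - 1)$. A brute-force Leibniz-formula check for $d = 4$ in exact rational arithmetic (a sketch, not the requested computation):

```python
from fractions import Fraction
from itertools import permutations

def det(M):
    """Determinant by the Leibniz formula; fine for small matrices."""
    n = len(M)
    total = Fraction(0)
    for perm in permutations(range(n)):
        sign = 1
        for i in range(n):                      # sign from the number of inversions
            for j in range(i + 1, n):
                if perm[i] > perm[j]:
                    sign = -sign
        prod = Fraction(1)
        for i in range(n):
            prod *= M[i][perm[i]]
        total += sign * prod
    return total

d = 4
for lam in [Fraction(3), Fraction(-2), Fraction(1, 2)]:
    A = [[lam if i == j else Fraction(1) for j in range(d)] for i in range(d)]
    assert det(A) == (lam - 1) ** (d - 1) * (lam + d - 1)
```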
grandes-ecoles 2014 QVJ Determinant and Rank Computation
For $\lambda \in \mathbb{R}$, let $A_\lambda \in \mathcal{M}_d(\mathbb{R})$ be the matrix having only 1's off the diagonal and only $\lambda$ on the diagonal.
Deduce all continuous functions $\xi : \mathbb{R} \rightarrow \mathbb{R}$ satisfying $$\forall A \in \mathcal{M}_d(\mathbb{R}), \quad A \text{ invertible} \Rightarrow f_\xi(A) = \left(\xi(A_{i,j})\right)_{1\leqslant i,j\leqslant d} \text{ invertible} \tag{V.1}$$
grandes-ecoles 2014 QIA Linear Transformation and Endomorphism Properties
Let $A$ be a real square matrix of size $n$ and $b$ an element of $\mathbb{R}^n$. Let $f$ be the map from $\mathbb{R}^n$ to $\mathbb{R}^n$ defined by $$\forall x \in \mathbb{R}^n \quad f(x) = Ax + b$$ Show that $f$ is of class $C^1$ and specify its Jacobian matrix $J_f(x)$ at every point $x$ of $\mathbb{R}^n$.
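Since each component of $f$ is affine, $\partial f_i/\partial x_j = A_{i,j}$ everywhere, so $J_f(x) = A$ for all $x$. Finite differences recover this exactly for an arbitrary example (an illustrative sketch; $A$, $b$, $x_0$ are made up):

```python
A = [[2.0, -1.0], [0.5, 3.0]]
b = [1.0, -2.0]

def f(x):
    return [sum(A[i][j] * x[j] for j in range(2)) + b[i] for i in range(2)]

x0, h = [0.3, -0.7], 1e-3
e = [[1.0, 0.0], [0.0, 1.0]]          # standard basis vectors
# J[i][j] ~ (f(x0 + h e_j)_i - f(x0)_i) / h, exact here because f is affine
J = [[(f([x0[0] + h * e[j][0], x0[1] + h * e[j][1]])[i] - f(x0)[i]) / h
      for j in range(2)] for i in range(2)]
assert all(abs(J[i][j] - A[i][j]) < 1e-9 for i in range(2) for j in range(2))
```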
grandes-ecoles 2014 QIC1 Determinant and Rank Computation
Let $f$ denote a function of class $C^1$ from $\mathbb{R}^n$ to $\mathbb{R}^n$ satisfying $f(0) = 0$. For $t$ real and $j$ an integer in $\llbracket 1, n \rrbracket$, we denote by $t_j$ the element $(0, \ldots, 0, t, 0, \ldots, 0)$ of $\mathbb{R}^n$, the real number $t$ being in position $j$.
We admit that if functions $\varphi_1, \varphi_2, \ldots, \varphi_n$ are continuous on $\mathbb{R}$ and take values in $\mathbb{R}^n$, then the function $\Phi$ defined on $\mathbb{R}$ by: $$\Phi(t) = \operatorname{det}(\varphi_1(t), \varphi_2(t), \ldots, \varphi_n(t))$$ is continuous on $\mathbb{R}$.
Using question I.B.2 and the multilinearity of the determinant, show that in a neighbourhood of 0 $$\operatorname{det}\left(f(t_1), f(t_2), \ldots, f(t_n)\right) = t^n \mathrm{jac}_f(0) + \mathrm{o}\left(t^n\right)$$
grandes-ecoles 2014 QIC2 Determinant and Rank Computation
Let $f$ denote a function of class $C^1$ from $\mathbb{R}^n$ to $\mathbb{R}^n$ satisfying $f(0) = 0$. For $t$ real and $j$ an integer in $\llbracket 1, n \rrbracket$, we denote by $t_j$ the element $(0, \ldots, 0, t, 0, \ldots, 0)$ of $\mathbb{R}^n$, the real number $t$ being in position $j$.
Deduce that $$\lim_{t \to 0} \frac{\operatorname{det}\left(f(t_1), \ldots, f(t_n)\right)}{\operatorname{det}\left(t_1, \ldots, t_n\right)} = \mathrm{jac}_f(0)$$
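The limit can be observed numerically on a hypothetical example, $f(x, y) = (x + y + x^2,\; y - x + y^3)$, for which $f(0) = 0$ and $\mathrm{jac}_f(0) = \det\begin{pmatrix}1 & 1\\ -1 & 1\end{pmatrix} = 2$ (a sketch, not part of the problem):

```python
def f(x, y):
    # hypothetical example: f(0) = 0 and the Jacobian matrix at 0 is [[1, 1], [-1, 1]]
    return (x + y + x * x, y - x + y ** 3)

def ratio(t):
    u, v = f(t, 0.0), f(0.0, t)          # f(t_1) and f(t_2)
    num = u[0] * v[1] - u[1] * v[0]      # det with columns f(t_1), f(t_2)
    return num / (t * t)                 # det(t_1, t_2) = t^2

assert abs(ratio(1e-5) - 2.0) < 1e-4     # approaches jac_f(0) = 2
```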
grandes-ecoles 2014 QIC3 Determinant and Rank Computation
Let $f$ denote a function of class $C^1$ from $\mathbb{R}^n$ to $\mathbb{R}^n$ satisfying $f(0) = 0$. For $t$ real and $j$ an integer in $\llbracket 1, n \rrbracket$, we denote by $t_j$ the element $(0, \ldots, 0, t, 0, \ldots, 0)$ of $\mathbb{R}^n$, the real number $t$ being in position $j$.
In the case $n = 2$ (respectively $n = 3$), give a geometric interpretation of the absolute value of the Jacobian of $f$ at 0 using areas of parallelograms (respectively volumes of parallelepipeds).
grandes-ecoles 2014 QIIA Linear Transformation and Endomorphism Properties
We denote by $A$ a real square matrix of size 2 and we set, for all $x$ in $\mathbb{R}^2$, $f(x) = Ax$.
For $x$ in $\mathbb{R}^2$, express $\operatorname{div}_f(x)$ using only $A$.
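One expects $\operatorname{div}_f(x) = \sum_i \partial f_i/\partial x_i = \operatorname{Tr}(A)$, independent of $x$; a finite-difference check on an arbitrary example (an illustrative sketch anticipating that answer, not a proof):

```python
A = [[2.0, -1.0], [0.5, 3.0]]

def f(x):
    return [sum(A[i][j] * x[j] for j in range(2)) for i in range(2)]

x0, h = [0.4, -0.2], 1e-3
e = [[1.0, 0.0], [0.0, 1.0]]          # standard basis vectors
# divergence ~ sum_i (f(x0 + h e_i)_i - f(x0)_i) / h, exact here because f is linear
div = sum((f([x0[0] + h * e[i][0], x0[1] + h * e[i][1]])[i] - f(x0)[i]) / h
          for i in range(2))
assert abs(div - (A[0][0] + A[1][1])) < 1e-9      # Tr(A) = 5
```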
grandes-ecoles 2014 QIVA1 Matrix Algebra and Product Properties
Let $f$ be a function of class $C^2$ from $\mathbb{R}^n$ to itself. We consider the proposition $(\mathcal{P})$: for all $x$ in $\mathbb{R}^n$, the Jacobian matrix $J_f(x)$ of $f$ is orthogonal.
For $x$ in $\mathbb{R}^n$ and $i$, $j$, $k$ in $\llbracket 1, n \rrbracket$, we denote $$\alpha_{i,j,k}(x) = \sum_{p=1}^n \frac{\partial f_p}{\partial x_i}(x) \cdot \frac{\partial^2 f_p}{\partial x_j \partial x_k}(x)$$
We assume $(\mathcal{P})$. Show that for all $i$, $j$ and $k$ in $\llbracket 1, n \rrbracket$, $\alpha_{i,j,k} = \alpha_{i,k,j} = -\alpha_{k,j,i}$.
grandes-ecoles 2014 QIVA2 Matrix Algebra and Product Properties
Let $f$ be a function of class $C^2$ from $\mathbb{R}^n$ to itself. We consider the proposition $(\mathcal{P})$: for all $x$ in $\mathbb{R}^n$, the Jacobian matrix $J_f(x)$ of $f$ is orthogonal.
For $x$ in $\mathbb{R}^n$ and $i$, $j$, $k$ in $\llbracket 1, n \rrbracket$, we denote $$\alpha_{i,j,k}(x) = \sum_{p=1}^n \frac{\partial f_p}{\partial x_i}(x) \cdot \frac{\partial^2 f_p}{\partial x_j \partial x_k}(x)$$
We assume $(\mathcal{P})$. Deduce that for all $i$, $j$ and $k$ in $\llbracket 1, n \rrbracket$, $\alpha_{i,j,k} = 0$.
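One possible route (a sketch, not the official solution): alternate the two identities of question IVA1, symmetry in the last two indices and antisymmetry under exchange of the first and third, in a cycle of swaps:

```latex
\begin{aligned}
\alpha_{i,j,k} &= -\alpha_{k,j,i} = -\alpha_{k,i,j} = \alpha_{j,i,k} \\
               &= \alpha_{j,k,i} = -\alpha_{i,k,j} = -\alpha_{i,j,k},
\end{aligned}
```

hence $2\alpha_{i,j,k} = 0$.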
grandes-ecoles 2014 QIVA3 Linear Transformation and Endomorphism Properties
Let $f$ be a function of class $C^2$ from $\mathbb{R}^n$ to itself. We consider the proposition $(\mathcal{P})$: for all $x$ in $\mathbb{R}^n$, the Jacobian matrix $J_f(x)$ of $f$ is orthogonal.
For $x$ in $\mathbb{R}^n$ and $i$, $j$, $k$ in $\llbracket 1, n \rrbracket$, we denote $$\alpha_{i,j,k}(x) = \sum_{p=1}^n \frac{\partial f_p}{\partial x_i}(x) \cdot \frac{\partial^2 f_p}{\partial x_j \partial x_k}(x)$$
We assume $(\mathcal{P})$. Show that there exist an orthogonal matrix $A$ and an element $b$ of $\mathbb{R}^n$ such that, for all $x$ in $\mathbb{R}^n$, $f(x) = Ax + b$.
One may interpret the relations $\alpha_{i,j,k} = 0$ using matrix products.