
All Questions
Let $\varphi(t) = \begin{cases} 0 & \text{if } t = 0 \\ -t \ln(t) & \text{otherwise,} \end{cases}$ let $\Sigma_{N}$ be the set of vectors $p \in \mathbb{R}^{N}$ such that $\sum_{i=1}^{N} p_{i} = 1$ and $p_{i} \geqslant 0$ for all $1 \leqslant i \leqslant N$, and let $H_{N}(p) = \sum_{i=1}^{N} \varphi(p_{i})$.
(a) Let $a$ and $b$ be in $[0, +\infty[$ with $a < b$. Show that there exists $\epsilon \in \left]0, \frac{b-a}{2}\right]$ such that $\varphi(a + t) + \varphi(b - t) > \varphi(a) + \varphi(b)$ for all $t$ with $0 < t \leqslant \epsilon$.
(b) Deduce that $H_{N}$ attains its maximum on $\Sigma_{N}$ at a unique point, which you will determine.
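As a numerical sanity check for (b) (not a proof), the sketch below compares $H_N$ at the uniform vector $p_i = 1/N$ with $H_N$ at randomly sampled points of $\Sigma_N$. The helper names `phi` and `H` are ad-hoc; the experiment merely suggests that the maximizer is the uniform vector, with value $\ln N$.

```python
import math
import random

def phi(t):
    # phi(t) = 0 if t = 0, else -t ln(t)
    return 0.0 if t == 0 else -t * math.log(t)

def H(p):
    # H_N(p) = sum_i phi(p_i)
    return sum(phi(t) for t in p)

N = 5
uniform = [1.0 / N] * N

random.seed(0)
best = 0.0
for _ in range(1000):
    q = [random.random() for _ in range(N)]
    s = sum(q)
    best = max(best, H([t / s for t in q]))  # a random point of Sigma_N

# entropy of the uniform vector is ln N; no random trial should beat it
print(H(uniform), math.log(N), best)
```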
We denote by $\Sigma_{\infty}$ the set of sequences of real numbers $p = (p_{i})_{i \geqslant 1}$ such that $p_{i} \geqslant 0$ for all $i \geqslant 1$ and $\sum_{i=1}^{+\infty} p_{i} = 1$. We denote by $H_{\infty}$ the function on $\Sigma_{\infty}$ defined by $H_{\infty}(p) = \sum_{i=1}^{\infty} \varphi(p_{i})$ taking values in $\mathbb{R}_{+} \cup \{+\infty\}$, where $\varphi(t) = \begin{cases} 0 & \text{if } t = 0 \\ -t \ln(t) & \text{otherwise.} \end{cases}$
(a) We consider $a \in ]0,1[$ and $p_{i} = a(1-a)^{i-1}$ for $i \geqslant 1$. Calculate $H_{\infty}(p)$ and study its variations as a function of $a$.
(b) Show that there exists $p \in \Sigma_{\infty}$ such that $H_{\infty}(p) = +\infty$. (Hint: You may use without proof that the series with general term $n^{-1} \ln(n)^{-\beta}$ for $n \geqslant 2$ converges if and only if $\beta > 1$).
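For (a), a truncated sum can be cross-checked against the closed form one should obtain, $H_{\infty}(p) = \frac{-a\ln a - (1-a)\ln(1-a)}{a}$ (the entropy of a geometric distribution); the function names below are illustrative only.

```python
import math

def phi(t):
    return 0.0 if t == 0 else -t * math.log(t)

def H_inf_truncated(a, terms=10_000):
    # partial sum of H_inf(p) for p_i = a (1 - a)^(i - 1); the tail is
    # geometrically small, so the truncation error is negligible here
    return sum(phi(a * (1 - a) ** (i - 1)) for i in range(1, terms + 1))

def H_closed_form(a):
    # value expected in (a): (-a ln a - (1 - a) ln(1 - a)) / a
    return (-a * math.log(a) - (1 - a) * math.log(1 - a)) / a

for a in (0.2, 0.5, 0.9):
    print(a, H_inf_truncated(a), H_closed_form(a))
```

The printed values also hint at the variations asked for: $H_\infty(p)$ decreases in $a$, blowing up as $a \to 0^+$ and vanishing as $a \to 1^-$.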
Let $\mathcal{S}$ be the simplex with vertices $s_0, s_1, \ldots, s_n$, defined by $$\mathcal{S} = \left\{\sum_{i=0}^n t_i s_i \mid \forall i = 0,\ldots,n,\, t_i \geqslant 0,\, \sum_{i=0}^n t_i = 1\right\}.$$ The volume of $\mathcal{S}$ is defined by $\operatorname{Vol}(\mathcal{S}) := \frac{1}{n!}\left|\det(s_1 - s_0, s_2 - s_0, \ldots, s_n - s_0)\right|$.
7a. Show that $\mathcal{S}$ is a compact convex set in $\mathbb{R}^n$.
7b. Show that $\mathring{\mathcal{S}} = \left\{\sum_{i=0}^n t_i s_i \mid \forall i = 0,\ldots,n,\, t_i > 0,\, \sum_{i=0}^n t_i = 1\right\}$. Deduce that if $0 \in \mathring{\mathcal{S}}$, then for all $\lambda \in [0,1[$, $\lambda \mathcal{S} \subset \mathring{\mathcal{S}}$.
7c. For $i = 0, \ldots, n$, we denote $\hat{s}_i = (1, s_i)$ the point of $\mathbb{R}^{n+1}$ whose coordinates are 1 followed by the coordinates of $s_i$. Express $\left|\det(\hat{s}_0, \hat{s}_1, \ldots, \hat{s}_n)\right|$ as a function of $\operatorname{Vol}(\mathcal{S})$. Deduce that the volume of a simplex does not depend on the order of the vertices.
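Assuming the identity to be established in 7c, namely $\left|\det(\hat{s}_0, \ldots, \hat{s}_n)\right| = n!\,\operatorname{Vol}(\mathcal{S})$, here is a quick check on a triangle in $\mathbb{R}^2$ (the determinant helpers are hand-rolled for this small case).

```python
import math

def det2(m):
    (a, b), (c, d) = m
    return a * d - b * c

def det3(m):
    # 3x3 determinant by cofactor expansion along the first row
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

# a 2-simplex (triangle) in R^2
s0, s1, s2 = (0.0, 0.0), (3.0, 0.0), (0.0, 4.0)
n = 2

# Vol(S) = (1/n!) |det(s1 - s0, s2 - s0)|, edge vectors as columns
edges = [[s1[0] - s0[0], s2[0] - s0[0]],
         [s1[1] - s0[1], s2[1] - s0[1]]]
vol = abs(det2(edges)) / math.factorial(n)

# hat{s}_i = (1, s_i) stacked as columns of an (n+1) x (n+1) matrix
hat = [[1.0, 1.0, 1.0],
       [s0[0], s1[0], s2[0]],
       [s0[1], s1[1], s2[1]]]
hat_det = abs(det3(hat))
print(vol, hat_det)  # expect 6.0 and 2! * 6.0 = 12.0
```

Since $\left|\det(\hat{s}_0, \ldots, \hat{s}_n)\right|$ is invariant under column permutations (up to sign, which the absolute value removes), the independence of the volume from the vertex order follows.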
We consider a function $f : \mathbb{R}^n \rightarrow \mathbb{R}$.
a. For all $x, y \in \mathbb{R}^n$, let $\varphi_{x,y} : \mathbb{R} \rightarrow \mathbb{R}$ be the function defined by $\varphi_{x,y}(t) = f(x + t(y-x))$ for all $t \in \mathbb{R}$. Show that $f$ is convex if and only if for all $x, y \in \mathbb{R}^n$, $\varphi_{x,y}$ is convex.
b. Suppose that $f$ is differentiable on $\mathbb{R}^n$. Show that for all $x, y \in \mathbb{R}^n$, the function $\varphi_{x,y}$ is differentiable, and show that for all $t \in \mathbb{R}$, $\varphi_{x,y}^{\prime}(t) = \langle \nabla f(x + t(y-x)), y-x \rangle$.
c. Deduce that if $f$ is differentiable on $\mathbb{R}^n$, then $f$ is convex if and only if for all $x, y \in \mathbb{R}^n$, $$f(y) \geqslant f(x) + \langle \nabla f(x), y-x \rangle.$$
d. Show that if $f : \mathbb{R}^n \rightarrow \mathbb{R}$ is differentiable on $\mathbb{R}^n$, then $f$ is convex if and only if for all $x, y \in \mathbb{R}^n$, $$\langle \nabla f(y) - \nabla f(x), y-x \rangle \geqslant 0.$$
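The two characterizations in c. and d. can be observed on a concrete convex function, here $f(x) = \|x\|^2$ with $\nabla f(x) = 2x$ (a random-sampling sketch, not a proof; all helper names are ad hoc).

```python
import random

def f(x):
    # a smooth convex function: f(x) = sum of x_i^2
    return sum(t * t for t in x)

def grad_f(x):
    # gradient of f: 2x
    return [2.0 * t for t in x]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

random.seed(1)
n = 4
for _ in range(100):
    x = [random.uniform(-5, 5) for _ in range(n)]
    y = [random.uniform(-5, 5) for _ in range(n)]
    diff = [b - a for a, b in zip(x, y)]
    # c.: f(y) >= f(x) + <grad f(x), y - x>
    assert f(y) >= f(x) + dot(grad_f(x), diff) - 1e-9
    # d.: <grad f(y) - grad f(x), y - x> >= 0 (gradient monotonicity)
    g = [gy - gx for gx, gy in zip(grad_f(x), grad_f(y))]
    assert dot(g, diff) >= -1e-9
print("both characterizations hold on all sampled pairs")
```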
Let $f : \mathbb{R}^n \rightarrow \mathbb{R}$ be a convex function differentiable on $\mathbb{R}^n$, and $x^{\star} \in \mathbb{R}^n$. Show that if $\nabla f(x^{\star}) = 0$ then $f$ admits a global minimum at $x^{\star}$.
We consider a real number $\alpha \in \mathbb{R}_+^{\star}$ and a function $f : \mathbb{R}^n \rightarrow \mathbb{R}$ differentiable on $\mathbb{R}^n$. Recall that $f$ is $\alpha$-convex if for all $x, y \in \mathbb{R}^n$, $$f(y) \geqslant f(x) + \langle \nabla f(x), y-x \rangle + \frac{\alpha}{2}\|y-x\|^2.$$ a. We consider the function $g_{\alpha} : \mathbb{R}^n \rightarrow \mathbb{R}$ defined by $g_{\alpha}(x) = f(x) - \frac{\alpha}{2}\|x\|^2$ for all $x \in \mathbb{R}^n$. Calculate $\nabla g_{\alpha}(x)$ for all $x \in \mathbb{R}^n$, and show that $f$ is $\alpha$-convex if and only if $g_{\alpha}$ is convex. b. Deduce that $f$ is $\alpha$-convex if and only if for all $x, y \in \mathbb{R}^n$, $$\langle \nabla f(y) - \nabla f(x), y-x \rangle \geqslant \alpha \|y-x\|^2.$$
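For $f(x) = \|x\|^2$, both the $\alpha$-convexity inequality and the monotonicity bound of b. hold with equality when $\alpha = 2$ (indeed $g_2 \equiv 0$ is convex), which makes it a convenient boundary case to check numerically; the snippet below is an illustration under that choice.

```python
def norm2(x):
    return sum(t * t for t in x)

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def f(x):
    # f(x) = ||x||^2, which is alpha-convex with alpha = 2
    return norm2(x)

def grad_f(x):
    return [2.0 * t for t in x]

alpha = 2.0
x, y = [1.0, -3.0, 2.0], [0.5, 4.0, -1.0]
diff = [b - a for a, b in zip(x, y)]

# definition: f(y) >= f(x) + <grad f(x), y - x> + (alpha/2) ||y - x||^2
lhs = f(y)
rhs = f(x) + dot(grad_f(x), diff) + (alpha / 2.0) * norm2(diff)

# b.: <grad f(y) - grad f(x), y - x> >= alpha ||y - x||^2
mono = dot([gy - gx for gx, gy in zip(grad_f(x), grad_f(y))], diff)
print(lhs - rhs, mono - alpha * norm2(diff))  # both gaps are exactly 0 here
```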
Let $f : \mathbb{R}^n \rightarrow \mathbb{R}$ be a continuous and coercive function. Show that if $K$ is a non-empty closed subset of $\mathbb{R}^n$, then there exists $x^{\star} \in K$ such that $f(x^{\star}) = \inf_{x \in K} f(x)$.
Let $f : \mathbb{R}^n \rightarrow \mathbb{R}$ be a function differentiable on $\mathbb{R}^n$ and $\alpha$-convex, where $\alpha \in \mathbb{R}_+^{\star}$. Show that if $K$ is a non-empty closed convex subset of $\mathbb{R}^n$, then $f$ admits a unique minimum on $K$.
Let $C$ be a non-empty closed convex subset of $\mathbb{R}^n$ and $x \in \mathbb{R}^n$.
a. Show that there exists a unique point $P_C(x) \in C$ such that $\|P_C(x) - x\| = \inf_{y \in C} \|y - x\|$.
b. Let $\bar{x} \in C$. Show that $\bar{x} = P_C(x)$ if and only if $$\langle x - \bar{x}, y - \bar{x} \rangle \leqslant 0 \text{ for all } y \in C.$$ Hint: one may consider the function $\psi_y : t \in \mathbb{R} \mapsto \|x - (\bar{x} + t(y - \bar{x}))\|^2$, where $y \in C$.
c. Deduce that if $x, y \in \mathbb{R}^n$, then $\|P_C(y) - P_C(x)\| \leqslant \|y - x\|$.
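Properties b. and c. can be observed on a set $C$ whose projection has a simple closed form, e.g. the box $C = [0,1]^n$, where $P_C$ clips each coordinate (a toy sampling check, not the general proof; `project_box` is a name chosen here).

```python
import random

def project_box(x, lo=0.0, hi=1.0):
    # Euclidean projection onto the closed convex set C = [lo, hi]^n:
    # clip each coordinate independently
    return [min(max(t, lo), hi) for t in x]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def dist(u, v):
    return sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5

random.seed(2)
n = 3
x = [random.uniform(-2, 3) for _ in range(n)]
px = project_box(x)

# b.: variational inequality <x - P_C(x), y - P_C(x)> <= 0 for sampled y in C
for _ in range(200):
    y = [random.random() for _ in range(n)]  # a point of C = [0, 1]^n
    assert dot([a - b for a, b in zip(x, px)],
               [a - b for a, b in zip(y, px)]) <= 1e-9

# c.: P_C is nonexpansive (1-Lipschitz)
z = [random.uniform(-2, 3) for _ in range(n)]
assert dist(project_box(x), project_box(z)) <= dist(x, z) + 1e-12
print("projection checks passed")
```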
Describe the set $\mathcal{A}_K(x)$ of admissible directions in the case where $x$ is in the interior of $K$.
Show that if $f : \mathbb{R}^n \rightarrow \mathbb{R}$ is differentiable at $x^{\star} \in K$ and admits a local minimum on $K$ at $x^{\star}$, then $$\forall h \in \mathcal{A}_K(x^{\star}), \quad \langle \nabla f(x^{\star}), h \rangle \geqslant 0.$$ What does this result express in the particular case where $x^{\star}$ is in the interior of $K$?
Throughout Part II, we consider a function $f : \mathbb{R}^n \rightarrow \mathbb{R}$ differentiable on $\mathbb{R}^n$ and $\alpha$-convex, where $\alpha \in \mathbb{R}_+^{\star}$. We set $$K = \left\{ x \in \mathbb{R}^n, g_1(x) \leqslant 0, \ldots, g_p(x) \leqslant 0 \right\}$$ where $p \in \mathbb{N}^{\star}$ and $g_1, \ldots, g_p$ are convex functions from $\mathbb{R}^n$ to $\mathbb{R}$, differentiable on $\mathbb{R}^n$. We further assume that the set $K$ is non-empty. Show that there exists a unique element $x^{\star} \in K$ such that $f(x^{\star}) = \inf_{x \in K} f(x)$.
For all $k \in \mathbb{N}$, we introduce the function $f_k : \mathbb{R}^n \rightarrow \mathbb{R}$ defined by $$f_k(x) = f(x) + k\Psi(x) \text{ for all } x \in \mathbb{R}^n$$ where $\Psi : \mathbb{R}^n \rightarrow \mathbb{R}$ is the function defined by $\Psi(x) = \sum_{i=1}^{p} \max(0, g_i(x))^2$ for all $x \in \mathbb{R}^n$. For all $x \in \mathbb{R}^n$, calculate $\lim_{k \rightarrow \infty} f_k(x)$.
For all $k \in \mathbb{N}$, we introduce the function $f_k : \mathbb{R}^n \rightarrow \mathbb{R}$ defined by $$f_k(x) = f(x) + k\Psi(x) \text{ for all } x \in \mathbb{R}^n$$ where $\Psi : \mathbb{R}^n \rightarrow \mathbb{R}$ is the function defined by $\Psi(x) = \sum_{i=1}^{p} \max(0, g_i(x))^2$ for all $x \in \mathbb{R}^n$. Show that for all $k \in \mathbb{N}$, there exists a unique $x_k \in \mathbb{R}^n$ such that $f_k(x_k) = \inf_{x \in \mathbb{R}^n} f_k(x)$. Hint: one may begin by showing that if $g : \mathbb{R}^n \rightarrow \mathbb{R}$ is a convex function, and $h : \mathbb{R} \rightarrow \mathbb{R}$ is a convex increasing function, then $h \circ g$ is convex.
For all $k \in \mathbb{N}$, we introduce the function $f_k : \mathbb{R}^n \rightarrow \mathbb{R}$ defined by $$f_k(x) = f(x) + k\Psi(x) \text{ for all } x \in \mathbb{R}^n$$ where $\Psi : \mathbb{R}^n \rightarrow \mathbb{R}$ is the function defined by $\Psi(x) = \sum_{i=1}^{p} \max(0, g_i(x))^2$ for all $x \in \mathbb{R}^n$, and $x_k$ denotes the unique minimizer of $f_k$ on $\mathbb{R}^n$, and $x^\star$ the unique minimizer of $f$ on $K$. Show that for all $k \in \mathbb{N}$, $f(x_k) \leqslant f(x^{\star})$.
For all $k \in \mathbb{N}$, we introduce the function $f_k : \mathbb{R}^n \rightarrow \mathbb{R}$ defined by $$f_k(x) = f(x) + k\Psi(x) \text{ for all } x \in \mathbb{R}^n$$ where $\Psi : \mathbb{R}^n \rightarrow \mathbb{R}$ is the function defined by $\Psi(x) = \sum_{i=1}^{p} \max(0, g_i(x))^2$ for all $x \in \mathbb{R}^n$, and $x_k$ denotes the unique minimizer of $f_k$ on $\mathbb{R}^n$, and $x^\star$ the unique minimizer of $f$ on $K$. We consider a subsequence $(x_{\varphi(k)})_{k \in \mathbb{N}}$ of $(x_k)_{k \in \mathbb{N}}$ that converges to $\bar{x} \in \mathbb{R}^n$. a. Show that $\bar{x} \in K$. b. Deduce that $\bar{x} = x^{\star}$.
For all $k \in \mathbb{N}$, we introduce the function $f_k : \mathbb{R}^n \rightarrow \mathbb{R}$ defined by $$f_k(x) = f(x) + k\Psi(x) \text{ for all } x \in \mathbb{R}^n$$ where $\Psi : \mathbb{R}^n \rightarrow \mathbb{R}$ is the function defined by $\Psi(x) = \sum_{i=1}^{p} \max(0, g_i(x))^2$ for all $x \in \mathbb{R}^n$, and $x_k$ denotes the unique minimizer of $f_k$ on $\mathbb{R}^n$, and $x^\star$ the unique minimizer of $f$ on $K$. Deduce that the sequence $(x_k)_{k \in \mathbb{N}}$ converges to $x^{\star}$.
For all $k \in \mathbb{N}$, we introduce the function $f_k : \mathbb{R}^n \rightarrow \mathbb{R}$ defined by $$f_k(x) = f(x) + k\Psi(x) \text{ for all } x \in \mathbb{R}^n$$ where $\Psi : \mathbb{R}^n \rightarrow \mathbb{R}$ is the function defined by $\Psi(x) = \sum_{i=1}^{p} \max(0, g_i(x))^2$ for all $x \in \mathbb{R}^n$, and $x_k$ denotes the unique minimizer of $f_k$ on $\mathbb{R}^n$, and $x^\star$ the unique minimizer of $f$ on $K$. Show that the sequence $(f_k(x_k))_{k \in \mathbb{N}}$ converges to $f(x^{\star})$.
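The questions above describe a quadratic penalty scheme. The sketch below illustrates it on a deliberately tiny instance (my own choice, not taken from the text): $n = 1$, $f(x) = (x-2)^2$ (which is $\alpha$-convex with $\alpha = 2$), $g_1(x) = x - 1$, so $K = \left]-\infty, 1\right]$ and $x^{\star} = 1$. Each $f_k$ is minimized by plain gradient descent, and the minimizers $x_k$ approach $x^{\star}$ from outside $K$.

```python
def f(x):
    # alpha-convex objective: f(x) = (x - 2)^2
    return (x - 2.0) ** 2

def g(x):
    # single constraint g(x) = x - 1 <= 0, i.e. K = ]-inf, 1]
    return x - 1.0

def f_k(x, k):
    # penalized objective f_k = f + k * max(0, g)^2
    return f(x) + k * max(0.0, g(x)) ** 2

def minimize_fk(k, x=0.0, steps=20_000):
    # gradient descent on the smooth convex f_k; the step size shrinks
    # with k to stay below the stability threshold of the stiff penalty
    lr = 0.4 / (1.0 + k)
    for _ in range(steps):
        grad = 2.0 * (x - 2.0) + 2.0 * k * max(0.0, g(x))
        x -= lr * grad
    return x

x_star = 1.0  # minimizer of f on K
xs = [minimize_fk(k) for k in (1, 10, 100, 1000)]
print(xs)  # decreasing toward x_star = 1, always with f(x_k) <= f(x_star)
```

In this instance a short computation gives $x_k = \frac{2+k}{1+k}$ in closed form, so the convergence $x_k \to x^{\star}$ and the bound $f(x_k) \leqslant f(x^{\star})$ can both be read off directly.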
Let $m \in \mathbb{N}^{\star}$ and $(u_1, \ldots, u_m)$ be a family of vectors of $\mathbb{R}^n$. We denote $$C = \left\{ \sum_{i=1}^{m} \mu_i u_i, \mu_i \geqslant 0 \; \forall i \in \llbracket 1, m \rrbracket \right\}.$$ The purpose of this question is to show that $C$ is a closed convex subset of $\mathbb{R}^n$.
a. Show that $C$ is convex.
b. Show that if $(u_1, \ldots, u_m)$ is linearly independent, then $C$ is closed.
c. For all $I \subset \llbracket 1, m \rrbracket$, we set $C_I = \left\{ \sum_{i \in I} \mu_i u_i, \mu_i \geqslant 0 \; \forall i \in I \right\}$. Show that $$C = \bigcup_{I} C_I,$$ where the union is taken over the sets $I \subset \llbracket 1, m \rrbracket$ such that $(u_i)_{i \in I}$ is linearly independent. Deduce that $C$ is closed.
Let $m \in \mathbb{N}^{\star}$ and $(u_1, \ldots, u_m)$ be a family of vectors of $\mathbb{R}^n$. We denote $$C = \left\{ \sum_{i=1}^{m} \mu_i u_i, \mu_i \geqslant 0 \; \forall i \in \llbracket 1, m \rrbracket \right\}.$$ We consider a vector $v \in \mathbb{R}^n \setminus C$. a. Show that $\langle P_C(v), P_C(v) - v \rangle = 0$. b. We set $w = P_C(v) - v$. Show that $\langle v, w \rangle < 0$ and $\langle u_i, w \rangle \geqslant 0$ for all $i \in \llbracket 1, m \rrbracket$.
Let $m \in \mathbb{N}^{\star}$ and $(u_1, \ldots, u_m)$ be a family of vectors of $\mathbb{R}^n$. We denote $$C = \left\{ \sum_{i=1}^{m} \mu_i u_i, \mu_i \geqslant 0 \; \forall i \in \llbracket 1, m \rrbracket \right\}.$$ Lemma 1 states: if $v \in \mathbb{R}^n$, then exactly one of the following two assertions holds: (i) $v \in C$; (ii) there exists $w \in \mathbb{R}^n$ such that $\langle v, w \rangle < 0$ and $\langle u_i, w \rangle \geqslant 0$ for all $i \in \llbracket 1, m \rrbracket$. Conclude the proof of Lemma 1.
Let $p \in \mathbb{N}^{\star}$. We assume that $f, g_1, \ldots, g_p$ are functions from $\mathbb{R}^n$ to $\mathbb{R}$ differentiable on $\mathbb{R}^n$, and that $$K = \left\{ x \in \mathbb{R}^n, g_1(x) \leqslant 0, \ldots, g_p(x) \leqslant 0 \right\}$$ is non-empty. For all $x \in K$, we denote $I_x = \left\{ i \in \llbracket 1, p \rrbracket, g_i(x) = 0 \right\}$. Show that for all $x \in K$, $$\mathcal{A}_K(x) \subset \left\{ h \in \mathbb{R}^n, \forall i \in I_x, \langle \nabla g_i(x), h \rangle \leqslant 0 \right\}.$$
Let $p \in \mathbb{N}^{\star}$. We assume that $f, g_1, \ldots, g_p$ are functions from $\mathbb{R}^n$ to $\mathbb{R}$ differentiable on $\mathbb{R}^n$, and that $$K = \left\{ x \in \mathbb{R}^n, g_1(x) \leqslant 0, \ldots, g_p(x) \leqslant 0 \right\}$$ is non-empty. For all $x \in K$, we denote $I_x = \left\{ i \in \llbracket 1, p \rrbracket, g_i(x) = 0 \right\}$. We consider $x^{\star} \in K$ and we make the following hypothesis: $$(H) \quad \text{there exists } v \in \mathbb{R}^n \text{ such that for all } i \in I_{x^{\star}}, \langle \nabla g_i(x^{\star}), v \rangle < 0.$$ Show that $\mathcal{A}_K(x^{\star}) = \left\{ h \in \mathbb{R}^n, \forall i \in I_{x^{\star}}, \langle \nabla g_i(x^{\star}), h \rangle \leqslant 0 \right\}$.
Let $p \in \mathbb{N}^{\star}$. We assume that $f, g_1, \ldots, g_p$ are functions from $\mathbb{R}^n$ to $\mathbb{R}$ differentiable on $\mathbb{R}^n$, and that $$K = \left\{ x \in \mathbb{R}^n, g_1(x) \leqslant 0, \ldots, g_p(x) \leqslant 0 \right\}$$ is non-empty. For all $x \in K$, we denote $I_x = \left\{ i \in \llbracket 1, p \rrbracket, g_i(x) = 0 \right\}$. Show that if $x^{\star} \in K$ is such that the family $(\nabla g_i(x^{\star}))_{i \in I_{x^{\star}}}$ is linearly independent, then hypothesis $(H)$ is satisfied.
Let $p \in \mathbb{N}^{\star}$. We assume that $f, g_1, \ldots, g_p$ are functions from $\mathbb{R}^n$ to $\mathbb{R}$ differentiable on $\mathbb{R}^n$, and that $$K = \left\{ x \in \mathbb{R}^n, g_1(x) \leqslant 0, \ldots, g_p(x) \leqslant 0 \right\}$$ is non-empty. For all $x \in K$, we denote $I_x = \left\{ i \in \llbracket 1, p \rrbracket, g_i(x) = 0 \right\}$. Suppose that $f$ attains a local minimum on $K$ at $x^{\star} \in K$, and that hypothesis $(H)$ is satisfied. Show that there exist non-negative real numbers $\mu_1^{\star}, \ldots, \mu_p^{\star}$ such that $$\left\{ \begin{array}{l} \nabla f(x^{\star}) + \sum_{i=1}^{p} \mu_i^{\star} \nabla g_i(x^{\star}) = 0 \\ \mu_i^{\star} g_i(x^{\star}) = 0 \text{ for all } i \in \llbracket 1, p \rrbracket. \end{array} \right.$$
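These are the Karush-Kuhn-Tucker conditions; they can be verified directly on a one-dimensional example of my own choosing: $f(x) = (x-2)^2$, $g_1(x) = x - 1$, so $K = \left]-\infty, 1\right]$, $x^{\star} = 1$, and the multiplier $\mu_1^{\star} = 2$ satisfies both equations.

```python
def grad_f(x):
    # f(x) = (x - 2)^2, so grad f(x) = 2 (x - 2)
    return 2.0 * (x - 2.0)

def g1(x):
    # constraint g1(x) = x - 1 <= 0; active at x = 1
    return x - 1.0

def grad_g1(x):
    return 1.0

x_star = 1.0   # minimizer of f on K (the unconstrained minimum 2 lies outside K)
mu_star = 2.0  # candidate multiplier

stationarity = grad_f(x_star) + mu_star * grad_g1(x_star)  # must equal 0
slackness = mu_star * g1(x_star)                           # must equal 0
print(stationarity, slackness, mu_star >= 0)
```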