Let $n \geqslant 1$ be an integer, and let $X_1, \ldots, X_n$ be mutually independent discrete real random variables such that, for all $k \in \{1, \ldots, n\}$,
$$P[X_k = 1] = P[X_k = -1] = \frac{1}{2}$$
We define
$$S_n = \frac{1}{n} \sum_{k=1}^{n} X_k$$
as well as, for all $\lambda \in \mathbb{R}$,
$$\psi(\lambda) = \log\left(\frac{1}{2}e^{\lambda} + \frac{1}{2}e^{-\lambda}\right)$$
For each $\lambda \geqslant 0$, we set
$$m(\lambda) = \frac{E[X_1 \exp(\lambda X_1)]}{E[\exp(\lambda X_1)]}$$
as well as
$$D_n(\lambda) = \exp(\lambda n S_n - n \psi(\lambda))$$
For all $n \geqslant 1$, $\lambda \geqslant 0$ and $\varepsilon > 0$, we denote by $I_n(\lambda, \varepsilon)$ the random variable defined by
$$I_n(\lambda, \varepsilon) = \begin{cases} 1 & \text{if } |S_n - m(\lambda)| \leqslant \varepsilon \\ 0 & \text{otherwise.} \end{cases}$$
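As a numerical sanity check on the definitions above (not part of the exercise), one can verify by exact enumeration that $E[D_n(\lambda)] = 1$, since $E[e^{\lambda X_1}] = e^{\psi(\lambda)}$, and that $m(\lambda) = \tanh(\lambda)$ for this distribution. The sketch below assumes Python with only the standard library; the values $n = 12$ and $\lambda = 0.7$ are illustrative choices.

```python
import math

def m(lam):
    # m(lambda) = E[X e^{lam X}] / E[e^{lam X}] with X = +/-1 equiprobable
    num = 0.5 * math.exp(lam) - 0.5 * math.exp(-lam)
    den = 0.5 * math.exp(lam) + 0.5 * math.exp(-lam)
    return num / den

def psi(lam):
    return math.log(0.5 * math.exp(lam) + 0.5 * math.exp(-lam))

def expectation_D(n, lam):
    # Sum over k = number of +1's among X_1, ..., X_n;
    # then n * S_n = 2k - n, and the outcome has probability C(n,k) 2^{-n}.
    return sum(
        math.comb(n, k) * 2.0 ** (-n) * math.exp(lam * (2 * k - n) - n * psi(lam))
        for k in range(n + 1)
    )

lam, n = 0.7, 12
print(expectation_D(n, lam))       # should be 1 (the exponential martingale)
print(m(lam), math.tanh(lam))      # m(lambda) coincides with tanh(lambda)
```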

(a) Deduce, for each $\lambda \geqslant 0$ and $\varepsilon > 0$, the existence of a sequence $(u_n(\varepsilon))_{n \geqslant 1}$ that tends to 0 as $n$ tends to infinity and such that
$$\frac{1}{n} \log P[S_n \geqslant m(\lambda) - \varepsilon] \geqslant \psi(\lambda) - \lambda m(\lambda) - \lambda \varepsilon + u_n(\varepsilon)$$

(b) Conclude that for all $t \in [0,1)$,
$$\lim_{n \rightarrow \infty} \frac{1}{n} \log P[S_n \geqslant t] = \inf_{\lambda \geqslant 0} (\psi(\lambda) - \lambda t).$$
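The limit in (b) can be checked numerically: since $P[S_n \geqslant t] = \sum_{k :\, 2k - n \geqslant tn} \binom{n}{k} 2^{-n}$, both sides are computable exactly for moderate $n$. The sketch below assumes Python with the standard library; the choice $t = 0.5$, the grid for $\lambda$, and the values of $n$ are illustrative, and the infimum is approximated by a crude grid search.

```python
import math

def psi(lam):
    return math.log(0.5 * math.exp(lam) + 0.5 * math.exp(-lam))

def log_prob(n, t):
    # log P[S_n >= t], computed exactly via the binomial distribution:
    # S_n >= t  iff  2k - n >= t n, where k = number of +1's.
    p = sum(math.comb(n, k) for k in range(n + 1) if 2 * k - n >= t * n)
    return math.log(p) - n * math.log(2)

def rate(t, grid=10000, lam_max=10.0):
    # Grid-search approximation of inf_{lambda >= 0} (psi(lambda) - lambda t)
    return min(psi(i * lam_max / grid) - (i * lam_max / grid) * t
               for i in range(grid + 1))

t = 0.5
for n in (100, 1000, 5000):
    print(n, log_prob(n, t) / n, rate(t))
```

The convergence is slow (the discrepancy is of order $\log n / n$), but $\frac{1}{n} \log P[S_n \geqslant t]$ visibly approaches the infimum as $n$ grows.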

(c) Is the preceding formula still valid for $t = 1$?