Fall 2018.1 #real_analysis/qual/completed
Let \(f(x) = \frac 1 x\). Show that \(f\) is uniformly continuous on \((1, \infty)\) but not on \((0,\infty)\).
-
Uniform continuity: \begin{align*} \forall \varepsilon>0, \exists \delta({\varepsilon})>0 \quad\text{such that}\quad {\left\lvert {x-y} \right\rvert}<\delta \implies {\left\lvert {f(x) - f(y)} \right\rvert} < {\varepsilon} .\end{align*}
-
Negating uniform continuity: \(\exists {\varepsilon}> 0\) such that for every \(\delta > 0\) there exist \(x, y\) with \({\left\lvert {x-y} \right\rvert} < \delta\) but \({\left\lvert {f(x) - f(y)} \right\rvert} \geq {\varepsilon}\).
-
Archimedean property: for all \(x,y\in {\mathbb{R}}\) there exists an \(n \in {\mathbb{N}}\) such that \(nx>y\). Take \(x={\varepsilon}, y=1\), so \(n{\varepsilon}> 1\) and \({1\over n} < {\varepsilon}\).
1 is the only constant around, so try to use it for uniform continuity. To negate, find a bad \(x\): since \(1/x\) blows up near zero, go hunting for small \(x\)s!
-
Claim: \(f(x) = \frac 1 x\) is uniformly continuous on \((c, \infty)\) for any \(c > 0\).
-
Note that \begin{align*} {\left\lvert {x} \right\rvert}, {\left\lvert {y} \right\rvert} > c > 0 \implies {\left\lvert {xy} \right\rvert} = {\left\lvert {x} \right\rvert}{\left\lvert {y} \right\rvert} > c^2 \implies \frac{1}{{\left\lvert {xy} \right\rvert}} < \frac 1 {c^{2}} .\end{align*}
-
Letting \(\varepsilon\) be arbitrary, choose \(\delta < \varepsilon c^2\).
- Note that \(\delta\) does not depend on \(x, y\).
-
Then \begin{align*} {\left\lvert {f(x) - f(y)} \right\rvert} &= {\left\lvert {\frac 1 x - \frac 1 y} \right\rvert} \\ &= \frac{{\left\lvert {x-y} \right\rvert}}{xy} \\ &\leq \frac{\delta}{xy} \\ &< \frac{\delta}{c^2} \\ &< \varepsilon .\end{align*}
-
-
Claim: \(f\) is not uniformly continuous when \(c=0\).
- Take \(\varepsilon < 1\), and let \(\delta = \delta({\varepsilon})\) be arbitrary.
- Let \(x_n = \frac 1 n\) for \(n\geq 1\).
- Choose \(n\) large enough such that \({1\over n} < \delta\)
- Then a computation: \begin{align*} {\left\lvert {x_n - x_{n+1}} \right\rvert} &= \frac 1 n - \frac 1 {n+1} \\ &= {1\over n(n+1) } \\ &< {1\over n} \\ &< \delta ,\end{align*}
- Why this can be done: by the Archimedean property of \({\mathbb{R}}\), for any \(\delta>0\) one can choose \(n\) such that \(n\delta > 1\), i.e. \({1\over n} < \delta\). We’ve also used that \(n+1 > 1\), so \({1\over n+1}< 1\).
- Note that \(f(x_n) = n\), so \begin{align*} {\left\lvert {f(x_{n+1}) - f(x_{n})} \right\rvert} = (n+1) - n = 1 > \varepsilon .\end{align*}
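As a quick numerical sanity check of both claims (not part of the proof; the choice \(c = 1\), the tolerance, and the test points are arbitrary), a short sketch:

```python
import numpy as np

f = lambda x: 1 / x

# On (1, oo): delta = eps * c^2 with c = 1 works for every pair of nearby points.
eps, c = 1e-3, 1.0
delta = eps * c**2
x = np.linspace(1.001, 100, 10_000)
y = x + 0.9 * delta                      # points within delta of x
assert np.all(np.abs(f(x) - f(y)) < eps)

# On (0, oo): x_n = 1/n and x_{n+1} = 1/(n+1) get arbitrarily close,
# but |f(x_n) - f(x_{n+1})| = 1 never drops below any eps < 1.
for n in [10, 100, 1000]:
    xn, xn1 = 1 / n, 1 / (n + 1)
    print(n, abs(xn - xn1), abs(f(xn) - f(xn1)))
```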
Fall 2017.1 #real_analysis/qual/completed
Let \begin{align*} f(x) = \sum _{n=0}^{\infty} \frac{x^{n}}{n !}. \end{align*} Describe the intervals on which \(f\) does and does not converge uniformly.
-
\(f_N\to f\) uniformly \(\iff\) \({\left\lVert {f_N - f} \right\rVert}_\infty \to 0\).
- Applied to sums: \begin{align*} \sum_{0 \leq n\leq N} f_n \overset{u}\to \sum_{n\geq 0} f_n \iff {\left\lVert {\sum_{n\geq N+1} f_n } \right\rVert}_{\infty} \to 0 .\end{align*}
- An infinite sum is defined as the pointwise limit of its partial sums: \begin{align*} \sum_{n=0}^\infty c_n x^n \coloneqq\lim_{N\to \infty} \sum_{n=0}^N c_n x^n .\end{align*}
- Uniformly decaying terms for uniformly convergent series: if \(\sum_{n=0}^\infty f_n(x)\) converges uniformly on a set \(A\), then \begin{align*} {\left\lVert {f_n} \right\rVert}_{\infty, A} \coloneqq\sup_{x\in A} {\left\lvert {f_n(x)} \right\rvert} \overset{n\to\infty}\longrightarrow 0 .\end{align*}
-
\(M{\hbox{-}}\)test: if \(f_n:A \to{\mathbb{C}}\) with \({\left\lVert {f_n} \right\rVert}_\infty < M_n\) and \(\sum M_n < \infty\), then \(\sum f_n\) converges uniformly and absolutely.
- If the \(f_n\) are continuous, the uniform limit theorem implies \(\sum f_n\) is also continuous.
No real place to start, so pick the nicest place: compact intervals. Then bounded intervals, then unbounded sets.
-
Set \(f_N(x) = \sum_{n=0}^N {x^n \over n!}\).
- Then by definition, \(f_N(x) \to f(x)\) pointwise on \({\mathbb{R}}\).
-
Claim: \(f_N\) converges uniformly on compact intervals
-
For any compact interval \([-M, M]\), we have
\begin{align*}
{\left\lVert {f_N - f} \right\rVert}_{\infty, [-M, M]}
&= \sup_{x\in [-M, M] } ~{\left\lvert {\sum_{n=N+1}^\infty {x^n \over {n!}} } \right\rvert} \\
&\leq \sup_{x\in [-M, M] } ~\sum_{n=N+1}^\infty {\left\lvert { {x^n \over {n!}} } \right\rvert} \\
&\leq \sum_{n=N+1}^\infty {M^n \over n!} \\
&\overset{N\to\infty}\longrightarrow 0
,\end{align*}
since \(\sum_{n=0}^\infty {M^n \over n!} = e^M < \infty\) and tails of convergent series vanish. So \(f_N \to f\) uniformly on \([-M, M]\); equivalently, take \(M_n \coloneqq{M^n\over n!}\) in the \(M{\hbox{-}}\)test.
- Note: we’ve used that this power series converges to \(e^x\) pointwise everywhere.
-
-
This argument shows that \(f_N \to f\) uniformly on any bounded set, since any bounded set is contained in some \([-M, M]\).
-
Claim: \(f_N\) does not converge uniformly on all of \({\mathbb{R}}\).
-
Uniformly convergent sums have uniformly decaying terms: \begin{align*} \sum_{n\leq N} g_n \overset{N\to\infty}\longrightarrow\sum g_n \text{ uniformly on } A \implies {\left\lVert {g_n} \right\rVert}_{\infty, A} \coloneqq\sup_{x\in A} {\left\lvert {g_n(x)} \right\rvert} \overset{n\to\infty}\longrightarrow 0 .\end{align*}
-
For each fixed \(k\), \({\left\lVert {x^k \over k!} \right\rVert}_{\infty, {\mathbb{R}}} = \sup_{x\in {\mathbb{R}}} {\left\lvert {x^k \over k!} \right\rvert} = \infty\), so in particular \({\left\lVert {x^k \over k!} \right\rVert}_{\infty, {\mathbb{R}}} \not\overset{k\to\infty}\longrightarrow 0\), and by the fact above the series cannot converge uniformly on \({\mathbb{R}}\).
-
-
Conclusion: \(f_N \to f\) uniformly on any bounded \(A\subseteq {\mathbb{R}}\), but not uniformly on all of \({\mathbb{R}}\).
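A numerical illustration of the dichotomy (a sketch; the interval \([-3, 3]\), the grid, and the truncations are arbitrary choices): the sup norm of the tail shrinks on a compact interval, while the individual terms \(x^k/k!\) are unbounded on \({\mathbb{R}}\).

```python
import numpy as np
from math import factorial

M = 3.0
x = np.linspace(-M, M, 2001)
f = np.exp(x)

for N in [5, 10, 20, 40]:
    f_N = sum(x**n / factorial(n) for n in range(N + 1))
    print(N, np.max(np.abs(f_N - f)))          # sup norm on [-M, M] -> 0

# but each term x^k / k! is unbounded in x, so there is no uniform control on R
k = 10
for big_x in [10.0, 100.0, 1000.0]:
    print(big_x, big_x**k / factorial(k))      # blows up as the domain grows
```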
Spring 2017.4 #real_analysis/qual/completed
Let \(f(x, y)\) on \([-1, 1]^2\) be defined by \begin{align*} f(x, y) = \begin{cases} \frac{x y}{\left(x^{2}+y^{2}\right)^{2}} & (x, y) \neq (0, 0) \\ 0 & (x, y) = (0, 0) \end{cases} \end{align*} Determine if \(f\) is integrable.
- Just Calculus.
- \(1/r\) is not integrable on \((0, 1)\).
Switching to polar coordinates and integrating \({\left\lvert {f} \right\rvert}\) over the quarter \(D \cap Q_1 \subseteq I^2\) of the unit disc lying in quadrant 1, where \(f \geq 0\), we have \begin{align*} \int_{I^2} {\left\lvert {f} \right\rvert} \, dA &\geq \int_{D \cap Q_1} f \, dA \\ &= \int_0^{\pi/2} \int_0^1 \frac{r^2 \cos(\theta)\sin(\theta)}{r^4} ~r~\,dr\,d\theta\\ &= \int_0^{\pi/2} \int_0^1 \frac{\cos(\theta)\sin(\theta)}{r} \,dr\,d\theta\\ &= \qty{ \int_0^1 {1\over r } \,dr} \qty{ \int_0^{\pi/2} \cos(\theta)\sin(\theta) \,d\theta} \\ &= \qty{ \int_0^1 {1\over r } \,dr} \qty{ \int_0^{1} u \,du} && u=\sin(\theta)\\ &= {1\over 2}\qty{ \int_0^1 {1\over r } \,dr} \\ &\longrightarrow\infty ,\end{align*} so \(f\) is not integrable on \([-1, 1]^2\).
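A numerical sketch of the divergence (the cutoffs are arbitrary): cutting the quarter disc off at inner radius \({\varepsilon}\), the polar computation above gives exactly \({1\over 2}\ln(1/{\varepsilon})\), which is unbounded as \({\varepsilon}\to 0\).

```python
import numpy as np

def trapezoid(y, xs):
    # simple trapezoidal rule (avoids relying on np.trapz / np.trapezoid naming)
    return float(np.sum((xs[1:] - xs[:-1]) * (y[1:] + y[:-1]) / 2))

# (1/2) * int_eps^1 dr / r, numerically, vs the exact value (1/2) ln(1/eps)
for eps in [1e-2, 1e-4, 1e-8]:
    r = np.logspace(np.log10(eps), 0, 200_001)
    print(eps, 0.5 * trapezoid(1 / r, r), 0.5 * np.log(1 / eps))
```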
Fall 2014.1 #real_analysis/qual/completed
Let \(\left\{{f_n}\right\}\) be a sequence of continuous functions such that \(\sum f_n\) converges uniformly.
Prove that \(\sum f_n\) is also continuous.
- The uniform limit theorem.
- \({\varepsilon}/3\) trick.
If \(F_N\to F\) uniformly with each \(F_N\) continuous, then \(F\) is continuous.
-
Follows from an \(\varepsilon/3\) argument: for \(N\) large and \({\left\lvert {x - y} \right\rvert}\) small, \begin{align*} {\left\lvert {F(x) - F(y)} \right\rvert} \leq {\left\lvert {F(x) - F_N(x)} \right\rvert} + {\left\lvert {F_N(x) - F_N(y)} \right\rvert} + {\left\lvert {F_N(y) - F(y)} \right\rvert} < {\varepsilon} .\end{align*}
- The first and last \({\varepsilon}/3\) come from uniform convergence of \(F_N\to F\).
- The middle \({\varepsilon}/3\) comes from continuity of each \(F_N\).
- Now setting \(F_N\coloneqq\sum_{n=1}^N f_n\) yields a finite sum of continuous functions, which is continuous.
- Each \(F_N\) is continuous and \(F_N\to F\) uniformly, so \(F\) is continuous.
Spring 2015.1 #real_analysis/qual/completed
Let \((X, d)\) and \((Y, \rho)\) be metric spaces, \(f: X\to Y\), and \(x_0 \in X\).
Prove that the following statements are equivalent:
- For every \(\varepsilon > 0 \quad \exists \delta > 0\) such that \(\rho( f(x), f(x_0) ) < \varepsilon\) whenever \(d(x, x_0) < \delta\).
- The sequence \(\left\{{f(x_n)}\right\}_{n=1}^\infty \to f(x_0)\) for every sequence \(\left\{{x_n}\right\} \to x_0\) in \(X\).
- What it means for a sequence to converge.
- Trading \(N\)s for \(\delta\)s.
-
Let \(\left\{{x_n}\right\} \overset{n\to\infty}\to x_0\) be arbitrary; we want to show \(\left\{{f(x_n)}\right\}\overset{n\to\infty}\to f(x_0)\).
- We thus want to show that for every \({\varepsilon}>0\), there exists an \(N({\varepsilon})\) such that \begin{align*}n\geq N({\varepsilon}) \implies \rho(f(x_n), f(x_0)) < {\varepsilon}.\end{align*}
- Let \({\varepsilon}>0\) be arbitrary, then by (1) choose \(\delta\) such that \(\rho(f(x), f(x_0)) < {\varepsilon}\) when \(d(x, x_0) < \delta\).
- Since \(x_n\to x_0\), there is some \(N\) such that \(n\geq N \implies d(x_n, x_0) < \delta\).
- Then for \(n\geq N\), \(d(x_n, x_0) < \delta\) and thus \(\rho(f(x_n), f(x_0)) < {\varepsilon}\), so \(f(x_n)\to f(x_0)\) by definition.
The direct implication is not a good idea here, since you need a handle on all \(x\) in a neighborhood of \(x_0\), not just a specific sequence.
- By contrapositive, show that \(\not 1\implies \not 2\).
- Need to show: if \(f\) is not \({\varepsilon}{\hbox{-}}\delta\) continuous at \(x_0\), then there exists a sequence \(x_n\to x_0\) where \(f(x_n)\not\to f(x_0)\).
- Negating \(1\), we have that there exists an \({\varepsilon}>0\) such that for all \(\delta\), there exists an \(x\) with \(d(x, x_0) < \delta\) but \(\rho(f(x), f(x_0))>{\varepsilon}\)
- So take a sequence of deltas \(\delta_n = {1\over n}\), apply this to produce a sequence \(x_n\) with \(d(x_n, x_0) < \delta_n \coloneqq{1\over n} \longrightarrow 0\) and \(\rho(f(x_n), f(x_0)) > {\varepsilon}\) for all \(n\).
- This yields a sequence \(x_n \to x_0\) where \(f(x_n) \not\to f(x_0)\).
Fall 2014.2 #real_analysis/qual/completed
Let \(I\) be an index set and \(\alpha: I \to (0, \infty)\).
-
Show that \begin{align*} \sum_{i \in I} a(i):=\sup _{\substack{ J \subset I \\ J \text { finite }}} \sum_{i \in J} a(i)<\infty \implies I \text{ is countable.} \end{align*}
-
Suppose \(I = {\mathbb{Q}}\) and \(\sum_{q \in \mathbb{Q}} a(q)<\infty\). Define \begin{align*} f(x):=\sum_{\substack{q \in \mathbb{Q}\\ q \leq x}} a(q). \end{align*} Show that \(f\) is continuous at \(x \iff x\not\in {\mathbb{Q}}\).
- Can always filter sets \(X\) with a function \(X\to {\mathbb{R}}\).
- Countable union of countable sets is still countable.
- Continuity: \(\lim_{y\to x} f(y) = f(x)\) from either side.
- Trick: pick enumerations of countable sets and reindex sums
- Set \(S \coloneqq\sum_{i\in I} \alpha(i)\), we will show that \(S<\infty \implies I\) is countable.
-
Write
\begin{align*}
I = \displaystyle\bigcup_{n\geq 1} S_n, &&
S_n \coloneqq\left\{{i\in I {~\mathrel{\Big\vert}~}\alpha(i) \geq {1\over n}}\right\}
.\end{align*}
- Note that \(S_n \subseteq I\) for all \(n\), so \(\sum_{i\in I}\alpha(i) \geq \sum_{i\in S_n} \alpha(i)\) for all \(n\), since all terms are positive.
- It suffices to show that \(S_n\) is countable, since \(I\) is a countable union of \(S_n\).
- There is an inequality \begin{align*} \infty &> S \coloneqq\sum_{i\in I} \alpha(i) \\ &\geq \sum_{i\in S_n} \alpha(i) \\ &\geq \sum_{i\in S_n} {1\over n} \\ &= {1\over n} \sum_{i\in S_n} 1 \\ &= \qty{1\over n} \# S_n \\ \\ \implies \infty &> n S \geq \# S_n ,\end{align*} so each \(S_n\) is finite, and \(I = \bigcup_{n\geq 1} S_n\) is a countable union of finite sets, hence countable.
-
We’ll prove something more general: let \(Q = \left\{{q_k}\right\}\) be countable and \(\left\{{\alpha_k \coloneqq\alpha(q_k)}\right\}\) be summable, and define \begin{align*} f(x) \coloneqq\sum_{q_k\leq x} \alpha_k .\end{align*}
-
\(f\) is always discontinuous precisely on the countable set \(Q\) and continuous on \({\mathbb{R}}\setminus Q\).
-
\(f\) is always right-continuous, is left-continuous at \(x\in{\mathbb{R}}\setminus Q\), and is not left-continuous at \(x\in Q\)
-
\(f\) has jump discontinuities at every \(q_m\), where the jump is precisely \(\alpha_m\).
-
-
This follows from computing the left and right limits: \begin{align*} f(x^+) &= \lim_{h\to 0^+} \sum_{q_k \leq x+h} \alpha_k = \sum_{q_k\leq x} \alpha_k = \sum_{q_k < x} \alpha_k + \sum_{q_k = x} \alpha_k \\ f(x^-) &= \lim_{h\to 0^+} \sum_{q_k \leq x-h} \alpha_k = \sum_{q_k < x} \alpha_k ,\end{align*} where we’ve used that \(\left\{{q_k \leq x}\right\} = \left\{{q_k < x}\right\} {\textstyle\coprod}\left\{{q_k = x}\right\}\), and that both limits exist because \(\sum \alpha_k\) converges: the terms with \(q_k\) in the shrinking windows \((x, x+h]\) and \((x-h, x)\) contribute arbitrarily little as \(h\to 0^+\).
-
Then if \(x=q_m\) for some \(m\), \begin{align*} f(x^+) &= f(q_m^+) = \sum_{q_k < q_m} \alpha_k + \alpha_m \\ f(x^-) &= f(q_m^-) = \sum_{q_k< q_m} \alpha_k ,\end{align*} which clearly differ since \(\alpha_m > 0\).
-
Taking \(x\not\in Q\), we have \(\left\{{q_k \leq x}\right\} = \left\{{q_k < x}\right\}\), since \(\left\{{q_k=x}\right\} = \emptyset\), so \begin{align*} f(x^+) &= \sum_{q_k\leq x} \alpha_k = \sum_{q_k < x} \alpha_k \\ f(x^-) &= \sum_{q_k< x} \alpha_k ,\end{align*} so the limits agree.
-
To recover the result in the problem, let \({\mathbb{Q}}= \left\{{q_k}\right\}\) be any enumeration of the rationals.
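A small numerical illustration of the jump behavior (a sketch; the finite list of dyadic rationals and the weights \(\alpha_k = 2^{-(k+1)}\) are arbitrary stand-ins for an enumeration of \({\mathbb{Q}}\)):

```python
# dyadic rationals are exactly representable as floats, so the comparisons below are exact
qs     = [0.5, 0.25, 0.75, 0.125, 0.625]
alphas = [2.0 ** -(k + 1) for k in range(len(qs))]

def f(x):
    return sum(a for q, a in zip(qs, alphas) if q <= x)

m, h = 1, 1e-9                    # examine the jump at q_1 = 0.25
q = qs[m]
print(f(q + h) - f(q))            # ~ 0          : right-continuous at q
print(f(q) - f(q - h))            # ~ alphas[m]  : jump of size alpha_m from the left
print(f(0.3 + h) - f(0.3 - h))    # ~ 0          : continuous at a point not in the list
```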
General Analysis
Fall 2021.1 #real_analysis/qual/completed
Let \(\left\{x_{n}\right\}_{n-1}^{\infty}\) be a sequence of real numbers such that \(x_{1}>0\) and \begin{align*} x_{n+1}=1-\left(2+x_{n}\right)^{-1}=\frac{1+x_{n}}{2+x_{n}} \text {. } \end{align*} Prove that the sequence \(\left\{x_{n}\right\}\) converges, and find its limit.
If a limit \(L\) exists, then taking \(n\to\infty\) on both sides of the recursion gives \begin{align*} L = {1+L\over 2+L} \implies L^2 + L - 1 = 0 \implies L = {1\over 2}\qty{-1 \pm \sqrt 5} .\end{align*} A small induction shows \(x_n > 0\) for all \(n\): \(x_1 > 0\), and if \(x_n > 0\) then \(x_{n+1} = {1+x_n \over 2+x_n} > 0\). Since \({1\over 2}\qty{-1 - \sqrt 5} < 0\), the only possible limit is \(L = {1\over 2}\qty{-1 + \sqrt 5}\). To see that the sequence does converge, write \(f(z) = 1 - (2+z)^{-1}\), so that \(x_{n+1} = f(x_n)\) and \(f\) maps \([0, \infty)\) into itself. The claim is that \(f\) is a contraction on the complete metric space \([0, \infty)\), which implies it has a unique fixed point \(z_0\) by the Banach fixed point theorem, that \(x_n \to z_0\), and (by the first computation) that \(z_0 = L\). The contraction estimate follows from the mean value theorem: for \(z, w \geq 0\), \begin{align*} {\left\lvert {f(z) - f(w)} \right\rvert} = {\left\lvert {f'(\xi)} \right\rvert}{\left\lvert {z-w} \right\rvert} \leq {1\over 4}{\left\lvert {z-w} \right\rvert} && \text{for some } \xi \in (z, w) ,\end{align*} since \(f'(z) = (2+z)^{-2}\) satisfies \(0 < f'(z) \leq {1\over 4}\) for all \(z \geq 0\).
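A quick numerical check of the claimed limit (a sketch; the starting value is arbitrary so long as it is positive):

```python
L = (5 ** 0.5 - 1) / 2        # (-1 + sqrt(5)) / 2 ~ 0.6180...

x = 10.0                      # any x_1 > 0
for _ in range(60):
    x = (1 + x) / (2 + x)
print(x, L)                   # both ~ 0.618033988749895
```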
Fall 2020.1 #real_analysis/qual/completed
Show that if \(x_n\) is a decreasing sequence of positive real numbers such that \(\sum_{n=1}^\infty x_n\) converges, then \begin{align*} \lim_{n\to\infty} n x_n = 0. \end{align*}
See this MSE post for many solutions: https://math.stackexchange.com/questions/4603/if-a-n-subset0-infty-is-non-increasing-and-sum-a-n-infty-then-lim Note that the “obvious” thing here is fiddly: there are bounds on the slices \begin{align*} (N-M \pm 1) x_N \leq \sum_{M\leq k \leq N} x_k \leq (N-M\pm 1) x_M ,\end{align*} but arranging it so that the constants match the indices in \((N-M \pm 1)x_N \approx Nx_N\) requires something clever.
Fix \({\varepsilon}>0\); we’ll show \(nx_n < 2{\varepsilon}\) for all \(n\gg 1\). By the Cauchy criterion for \(\sum x_k\), choose \(m\) large enough so that for every \(n > m\), \begin{align*} {\varepsilon}> \sum_{m+1\leq k \leq n} x_k \geq \sum_{m+1\leq k \leq n}x_n = (n-m)x_n ,\end{align*} using that the \(x_k\) are decreasing. Then rearrange: \begin{align*} {\varepsilon}> (n-m)x_n \implies nx_n < {\varepsilon}+ mx_n .\end{align*} Now, with \(m\) fixed, choose \(n\) large enough so that \(mx_n < {\varepsilon}\), which is possible since \(x_n \to 0\) (as \(\sum x_n < \infty\)), to obtain \begin{align*} nx_n < {\varepsilon}+ {\varepsilon}= 2{\varepsilon} .\end{align*} Since \({\varepsilon}\) was arbitrary, \(nx_n \to 0\).
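A numerical illustration with a slowly decaying example (the particular choice \(x_n = 1/(n\ln^2 n)\) for \(n\geq 2\), which is decreasing and summable by the integral test, is arbitrary): \(nx_n = 1/\ln^2 n\) does tend to \(0\), just very slowly.

```python
import numpy as np

# x_n = 1 / (n * log(n)^2) for n >= 2: decreasing and summable, and n * x_n -> 0
for n in [10, 10**3, 10**6, 10**12]:
    print(n, 1.0 / np.log(n) ** 2)     # n * x_n = 1 / log(n)^2, slowly -> 0
```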
Spring 2020.1 #real_analysis/qual/completed
Prove that if \(f: [0, 1] \to {\mathbb{R}}\) is continuous then \begin{align*} \lim_{k\to\infty} \int_0^1 kx^{k-1} f(x) \,dx = f(1) .\end{align*}
- DCT
-
Weierstrass Approximation Theorem
- If \(f: [a, b] \to {\mathbb{R}}\) is continuous, then for every \({\varepsilon}>0\) there exists a polynomial \(p_{\varepsilon}(x)\) such that \({\left\lVert {f - p_{\varepsilon}} \right\rVert}_\infty < {\varepsilon}\).
-
Suppose \(p\) is a polynomial, then integrate by parts: \begin{align*} \lim_{k\to\infty} \int_0^1 kx^{k-1} p(x) \, dx &= \lim_{k\to\infty} \int_0^1 \qty{ {\frac{\partial }{\partial x}\,}x^k } p(x) \, dx \\ &= \lim_{k\to\infty} \left[ x^k p(x) \Big|_0^1 - \int_0^1 x^k \qty{{\frac{\partial p}{\partial x}\,}(x) } \, dx \right] \quad\text{IBP}\\ &= p(1) - \lim_{k\to\infty} \int_0^1 x^k \qty{{\frac{\partial p}{\partial x}\,}(x) } \, dx ,\end{align*}
-
Thus it suffices to show that \begin{align*} \lim_{k\to\infty} \int_0^1 x^k \qty{{\frac{\partial p}{\partial x}\,} (x) } \, dx = 0 .\end{align*}
-
Integrating by parts a second time yields \begin{align*} \lim_{k\to\infty} \int_0^1 x^k \qty{{\frac{\partial p}{\partial x}\,}(x) } \, dx &= \lim_{k\to\infty} {x^{k+1} \over k+1} {\frac{\partial p}{\partial x}\,}(x) \Big|_0^1 - \int_0^1 {x^{k+1} \over k+1} \qty{ {\frac{\partial ^2 p}{\partial x^2}\,}(x)} \, dx \\ &= \lim_{k\to\infty} {p'(1) \over k+1} - \lim_{k\to\infty} \int_0^1 {x^{k+1} \over k+1} \qty{ {\frac{\partial ^2p}{\partial x^2}\,}(x)} \, dx \\ &= - \lim_{k\to\infty} \int_0^1 {x^{k+1} \over k+1} \qty{ {\frac{\partial ^2p}{\partial x^2}\,}(x)} \, dx \\ &= - \int_0^1 \lim_{k\to\infty} {x^{k+1} \over k+1} \qty{ {\frac{\partial ^2p}{\partial x^2}\,}(x)} \, dx \quad\text{by DCT} \\ &= - \int_0^1 0 \qty{ {\frac{\partial ^2p}{\partial x^2}\,}(x)} \, dx \\ &= 0 .\end{align*}
- The DCT can be applied here because polynomials are smooth and \([0, 1]\) is compact, so \({\frac{\partial ^2 p}{\partial x^2}\,}\) is bounded on \([0, 1]\) by some constant \(M\) and \begin{align*} \int_0^1 {\left\lvert {x^k {\frac{\partial ^2 p}{\partial x^2}\,} (x)} \right\rvert} \leq \int_0^1 1\cdot M = M < \infty.\end{align*}
-
So the result holds when \(f\) is a polynomial.
-
Now use the Weierstrass approximation theorem:
- If \(f: [a, b] \to {\mathbb{R}}\) is continuous, then for every \({\varepsilon}>0\) there exists a polynomial \(p_{\varepsilon}(x)\) such that \({\left\lVert {f - p_{\varepsilon}} \right\rVert}_\infty < {\varepsilon}\).
-
Thus \begin{align*} {\left\lvert { \int_0^1 kx^{k-1} p_{\varepsilon}(x)\,dx - \int_0^1 kx^{k-1}f(x)\,dx } \right\rvert} &= {\left\lvert { \int_0^1 kx^{k-1} \qty{p_{\varepsilon}(x) - f(x)} \,dx } \right\rvert} \\ &\leq {\left\lvert { \int_0^1 kx^{k-1} {\left\lVert {p_{\varepsilon}-f} \right\rVert}_\infty \,dx } \right\rvert} \\ &= {\left\lVert {p_{\varepsilon}-f} \right\rVert}_\infty \cdot {\left\lvert { \int_0^1 kx^{k-1} \,dx } \right\rvert} \\ &= {\left\lVert {p_{\varepsilon}-f} \right\rVert}_\infty \cdot x^k \Big|_0^1 \\ &= {\left\lVert {p_{\varepsilon}-f} \right\rVert}_\infty \\ \\ &\overset{{\varepsilon}\to 0}\to 0 \end{align*}
so for every \(k\), the two integrals differ by at most \({\left\lVert {p_{\varepsilon}- f} \right\rVert}_\infty < {\varepsilon}\).
-
By the first argument, \begin{align*}\lim_{k\to\infty} \int_0^1 kx^{k-1} p_{\varepsilon}(x) \,dx = p_{\varepsilon}(1) \text{ for each } {\varepsilon}\end{align*}
-
Since uniform convergence implies pointwise convergence, \(p_{\varepsilon}(1) \overset{{\varepsilon}\to 0}\to f(1)\).
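A numerical check (a sketch; the test function \(f(x) = \cos(x)\) and the grid are arbitrary): substituting \(u = x^k\) rewrites the integral as \(\int_0^1 f(u^{1/k})\,du\), which is easy to evaluate by a Riemann sum.

```python
import numpy as np

f = np.cos                                  # any continuous test function on [0, 1]
N = 1_000_000
u = np.arange(1, N + 1) / N                 # right-endpoint Riemann sum on [0, 1]

for k in [1, 10, 100, 1000]:
    val = np.mean(f(u ** (1.0 / k)))        # ~ int_0^1 k x^{k-1} f(x) dx
    print(k, val)
print("f(1) =", f(1.0))                     # the integrals approach cos(1) ~ 0.5403
```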
Fall 2019.1 #real_analysis/qual/completed
Let \(\{a_n\}_{n=1}^\infty\) be a sequence of real numbers.
-
Prove that if \(\displaystyle\lim_{n\to \infty } a_n = 0\), then \begin{align*} \lim _{n \rightarrow \infty} \frac{a_{1}+\cdots+a_{n}}{n}=0 \end{align*}
-
Prove that if \(\displaystyle\sum_{n=1}^{\infty} \frac{a_{n}}{n}\) converges, then \begin{align*} \lim _{n \rightarrow \infty} \frac{a_{1}+\cdots+a_{n}}{n}=0 \end{align*}
- Cesaro mean/summation.
- Break series apart into pieces that can be handled separately.
-
Idea: once \(N\) is large enough, \(a_k \approx S\), and all smaller terms will die off as \(N\to \infty\).
- See this MSE answer.
-
Prove a stronger result: \begin{align*} a_k \to S \implies S_N\coloneqq\frac 1 N \sum_{k=1}^N a_k \to S .\end{align*}
-
For any \({\varepsilon}> 0\), use convergence \(a_k \to S\): choose (and fix) \(M = M({\varepsilon})\) large enough such that \begin{align*} k\geq M+1 \implies {\left\lvert {a_k - S} \right\rvert} < \varepsilon .\end{align*}
-
With \(M\) fixed, choose \(N = N(M, {\varepsilon})\) large enough so that \({1\over N} \sum_{k=1}^{M} {\left\lvert {a_k - S} \right\rvert} < {\varepsilon}\).
-
Then \begin{align*} \left|\left(\frac{1}{N} \sum_{k=1}^{N} a_{k}\right)-S\right| &= {1\over N} {\left\lvert { \qty{\sum_{k=1}^N a_k} - NS } \right\rvert} \\ &= {1\over N} {\left\lvert { \sum_{k=1}^N \qty{a_k - S} } \right\rvert} \\ &\leq \frac{1}{N} \sum_{k=1}^{N}\left|a_{k}-S\right| \\ &= {1\over N} \sum_{k=1}^{M} {\left\lvert {a_k - S} \right\rvert} + {1\over N}\sum_{k=M+1}^N {\left\lvert {a_k - S} \right\rvert} \\ &\leq {1\over N} \sum_{k=1}^{M} {\left\lvert {a_k - S} \right\rvert} + {N-M\over N}\,{\varepsilon}\quad \text{since } {\left\lvert {a_k - S} \right\rvert} < {\varepsilon}\text{ for } k\geq M+1 \\ &\leq {\varepsilon}+ {\varepsilon}= 2{\varepsilon} ,\end{align*} using the choice of \(N\) for the first term and \({N-M\over N}\leq 1\) for the second. Since \({\varepsilon}>0\) was arbitrary, \(S_N\to S\).
-
Define \begin{align*} \Gamma_n \coloneqq\sum_{k=n}^\infty \frac{a_k}{k} .\end{align*}
-
\(\Gamma_1 = \sum_{k=1}^\infty \frac{ a_k } k\) is the original series and each \(\Gamma_n\) is a tail of \(\Gamma_1\), so by assumption (tails of a convergent series vanish) \(\Gamma_n \overset{n\to\infty}\to 0\).
-
Compute \begin{align*} \frac 1 n \sum_{k=1}^n a_k &= \frac 1 n \qty{\Gamma_1 + \Gamma_2 + \cdots + \Gamma_{n}} - \Gamma_{n+1} .\end{align*}
-
This comes from counting how many of the tails \(\Gamma_1, \cdots, \Gamma_n\) contain a given term \({a_k\over k}\): for \(k\leq n\) it appears in exactly \(k\) of them, and for \(k > n\) it appears in all \(n\), so \begin{align*} \sum_{j=1}^n \Gamma_j = \sum_{k=1}^n k\cdot {a_k\over k} + n\sum_{k=n+1}^\infty {a_k\over k} = \sum_{k=1}^n a_k + n\,\Gamma_{n+1} .\end{align*}
-
Use part (a): since \(\Gamma_n \overset{n\to\infty}\to 0\), we have \({1\over n} \sum_{k=1}^n \Gamma_k \overset{n\to\infty}\to 0\).
-
Also, \(\Gamma_{n+1} \overset{n\to\infty}\to 0\) directly, since it is a tail of the convergent series \(\Gamma_1\).
-
Then \begin{align*} \frac 1 n \sum_{k=1}^n a_k &= \frac 1 n \qty{\Gamma_1 + \Gamma_2 + \cdots + \Gamma_{n}} - \Gamma_{n+1} \\ &= \qty{ {1\over n } \sum_{k=1}^n \Gamma_k } - \Gamma_{n+1} \\ &\overset{n\to\infty}\to 0 + 0 = 0 .\end{align*}
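A numerical check of the key identity and of the conclusion (a sketch; the random finitely supported sequence, used so that each tail \(\Gamma_j\) is an exact finite sum, and the test sequence \(a_k = (-1)^k\) are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)

# 1) check sum_{j<=n} Gamma_j = sum_{k<=n} a_k + n * Gamma_{n+1} for a sequence
#    supported on k <= K, so every tail is an exact finite sum
K = 50
a = rng.normal(size=K + 1)
a[0] = 0.0                                       # index 0 unused; terms are a_1..a_K
gamma = lambda j: sum(a[k] / k for k in range(j, K + 1))
for n in [3, 10, 40, 60]:
    lhs = sum(gamma(j) for j in range(1, n + 1))
    rhs = sum(a[1:n + 1]) + n * gamma(n + 1)
    print(n, abs(lhs - rhs))                     # ~ 0 up to rounding

# 2) with a_k = (-1)^k, sum a_k / k converges, and the Cesaro means tend to 0
for n in [11, 1001, 100001]:
    k = np.arange(1, n + 1)
    print(n, np.mean((-1.0) ** k))               # -> 0
```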
Fall 2018.4 #real_analysis/qual/completed
Let \(f\in L^1([0, 1])\). Prove that \begin{align*} \lim_{n \to \infty} \int_{0}^{1} f(x) {\left\lvert {\sin n x} \right\rvert} ~d x= \frac{2}{\pi} \int_{0}^{1} f(x) ~d x \end{align*}
Hint: Begin with the case that \(f\) is the characteristic function of an interval.
\todo[inline]{Ask someone to check the last approximation part.}
- Converting floor/ceiling functions to inequalities: \(x-1 \leq {\left\lfloor x \right\rfloor} \leq x\).
Case of a characteristic function of an interval \([a, b]\):
-
First suppose \(f(x) = \chi_{[a, b]}(x)\).
-
Note that \(\sin(nx)\) has a period of \(2\pi/n\), and thus \({\left\lfloor (b-a) \over (2\pi / n) \right\rfloor} = {\left\lfloor n(b-a)\over 2\pi \right\rfloor}\) full periods in \([a, b]\).
-
Taking the absolute value yields a new function with half the period
- So \({\left\lvert {\sin(nx)} \right\rvert}\) has a period of \(\pi/n\) with \({\left\lfloor n(b-a) \over \pi \right\rfloor}\) full periods in \([a, b]\).
-
We can compute the integral over one full period (which is independent of which period is chosen)
- We can use translation invariance of the integral to compute this over the period \(0\) to \(\pi/n\).
- Since \(\sin(nx)\) is positive, it equals \({\left\lvert {\sin(nx)} \right\rvert}\) on its first period, so we have \begin{align*} \int_{\text{One Period}} {\left\lvert {\sin(nx)} \right\rvert} \, dx &= \int_0^{\pi/n} \sin(nx)\,dx \\ &= {1\over n} \int_0^\pi \sin(u) \,du \quad u = nx \\ &= {1\over n} \qty{-\cos(u)\mathrel{\Big|}_0^\pi} \\ &= {2 \over n} .\end{align*}
-
Then break the integral up into integrals over full periods \(P_1, P_2, \cdots, P_N\) where \(N \coloneqq{\left\lfloor n(b-a)/\pi \right\rfloor}\)
-
Each full period has length \(\pi\over n\); let \(L_n\) denote the part of \([a, b]\) not covered by a full period.
-
Thus \begin{align*} \int_a^b {\left\lvert {\sin(nx)} \right\rvert} \, dx &= \qty{ \sum_{j=1}^{N} \int_{P_j} {\left\lvert {\sin(nx)} \right\rvert} \, dx } + \int_{L_n} {\left\lvert {\sin(nx)} \right\rvert}\,dx \\ &= \qty{ \sum_{j=1}^{N} {2\over n} } + \int_{L_n} {\left\lvert {\sin(nx)} \right\rvert}\,dx \\ &= N \qty{2\over n} + \int_{L_n} {\left\lvert {\sin(nx)} \right\rvert}\,dx \\ &\coloneqq{\left\lfloor (b-a) n \over \pi \right\rfloor} {2\over n} + R_n \\ &\coloneqq(b-a)C_n + R_n \end{align*} where (claim) \(C_n \overset{n\to\infty}\to {2\over \pi}\) and \(R(n) \overset{n\to\infty}\to 0\).
-
\(C_n \to {2\over \pi}\): here \((b-a)C_n = {\left\lfloor n(b-a) \over \pi \right\rfloor}{2\over n}\), so using \(\lfloor y \rfloor \in [y-1, y]\), \begin{align*} {2\over \pi} - {2\over n(b-a)} = \qty{ {n(b-a)\over \pi} - 1}{2\over n(b-a)} \leq C_n \leq {n(b-a) \over \pi}\cdot{2\over n(b-a)} = {2 \over \pi} ,\end{align*} and the lower bound tends to \({2\over\pi}\).
- Then equality follows by the Squeeze theorem.
-
\(R_n \to 0\):
- We use the fact that \(m(L_n) \to 0\), then \(\int_{L_n} {\left\lvert {\sin(nx)} \right\rvert} \leq \int_{L_n} 1 = m(L_n) \to 0\).
- This follows from the fact that \(L_n\) is the complement in \([a, b]\) of \(\cup_j P_j\), the union of the full periods, so \begin{align*} m(L_n) &= (b-a) - \sum_j m(P_j) \\ &= \qty{b-a} - {\left\lfloor n(b-a) \over \pi \right\rfloor}\qty{\pi \over n} \\ &\overset{n\to \infty}\to (b-a) - (b-a) \\ &= 0 ,\end{align*} where we’ve used \(\lfloor y \rfloor \in [y-1, y]\) to get \begin{align*} (b-a) - {\pi\over n} = \qty{\pi \over n} \qty{ {n(b-a)\over \pi} - 1 } \leq {\left\lfloor n(b-a) \over \pi \right\rfloor}\qty{\pi \over n} \leq \qty{\pi \over n} \qty{n(b-a)\over \pi} = (b-a) ,\end{align*} and then the Squeeze theorem sends the middle term to \(b-a\).
General case:
-
By linearity of the integral, the result holds for simple functions:
- If \(f = \sum c_j \chi_{E_j}\) where \(E_j = [a_j, b_j]\), we have \begin{align*} \int_0^1 f(x) {\left\lvert {\sin(nx)} \right\rvert}\,dx &= \int_0^1 \sum c_j \chi_{E_j}(x) {\left\lvert {\sin(nx)} \right\rvert}\,dx \\ &= \sum c_j \int_0^1 \chi_{E_j}(x) {\left\lvert {\sin(nx)} \right\rvert}\,dx \\ &= \sum c_j (b_j - a_j) {2\over \pi} \\ &= {2\over \pi} \sum c_j (b_j - a_j) \\ &= {2\over \pi} \sum c_j m(E_j) \\ &\coloneqq{2\over \pi} \int_0^1 f .\end{align*}
-
Since step functions (finite sums of indicators of intervals) are dense in \(L^1([0, 1])\), given \({\varepsilon}>0\) choose a step function \(s_N\) with \({\left\lVert {s_N - f} \right\rVert}_1 < {\varepsilon}\); then \begin{align*} {\left\lvert { \int_0^1 f(x) {\left\lvert {\sin(nx)} \right\rvert} \,dx - \int_0^1 s_N(x) {\left\lvert {\sin(nx)} \right\rvert}\,dx } \right\rvert} &= {\left\lvert { \int_0^1 \qty{f(x) - s_N(x)} {\left\lvert {\sin(nx)} \right\rvert} \,dx } \right\rvert} \\ &\leq \int_0^1 {\left\lvert { f(x) - s_N(x)} \right\rvert} {\left\lvert {\sin(nx)} \right\rvert} \,dx \\ &= {\left\lVert { \qty{f - s_N} {\left\lvert {\sin(nx)} \right\rvert} } \right\rVert}_1 \\ &\leq {\left\lVert {f-s_N} \right\rVert}_1 \cdot {\left\lVert {{\left\lvert {\sin(nx)} \right\rvert}} \right\rVert}_\infty \quad\text{by Hölder}\\ &\leq {\varepsilon}\cdot 1 ,\end{align*} where the bound is independent of \(n\).
-
So the integrals involving \(s_N\) converge to the integral involving \(f\) uniformly in \(n\) (the bound \({\left\lVert {f - s_N} \right\rVert}_1\) above does not depend on \(n\)), which justifies exchanging the two limits below (Moore-Osgood): \begin{align*} \lim_{n\to\infty} \int f(x){\left\lvert {\sin(nx)} \right\rvert} &= \lim_{n\to\infty} \lim_{N\to\infty} \int s_N(x) {\left\lvert {\sin(nx)} \right\rvert} \\ &= \lim_{N\to\infty} \lim_{n\to\infty} \int s_N(x) {\left\lvert {\sin(nx)} \right\rvert} \\ &= \lim_{N\to \infty} {2\over \pi} \int s_N(x) \\ &= {2\over \pi} \int f ,\end{align*} which is the desired result, using \({\left\lVert {s_N - f} \right\rVert}_1 \to 0\) again in the last step.
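A numerical sanity check (a sketch; \(f(x) = x^2\) and the grid are arbitrary): the weighted integrals should approach \({2\over\pi}\int_0^1 x^2\,dx = {2\over 3\pi}\).

```python
import numpy as np

def trapezoid(y, xs):
    return float(np.sum((xs[1:] - xs[:-1]) * (y[1:] + y[:-1]) / 2))

x = np.linspace(0, 1, 2_000_001)
f = x ** 2
target = (2 / np.pi) * (1 / 3)                    # (2/pi) * int_0^1 x^2 dx

for n in [1, 10, 100, 10_000]:
    print(n, trapezoid(f * np.abs(np.sin(n * x)), x), target)
```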
Fall 2017.4 #real_analysis/qual/completed
Let \begin{align*} f_{n}(x) = n x(1-x)^{n}, \quad n \in {\mathbb{N}}. \end{align*}
-
Show that \(f_n \to 0\) pointwise but not uniformly on \([0, 1]\).
-
Show that \begin{align*} \lim _{n \to \infty} \int _{0}^{1} n(1-x)^{n} \sin x \, dx = 0 \end{align*}
Hint for (a): Consider the maximum of \(f_n\).
- \(f_n \to 0\) uniformly \(\iff\) \({\left\lVert {f_n} \right\rVert}_\infty \to 0\).
- Negating uniform convergence: \(f_n\not\to f\) uniformly iff there exists an \({\varepsilon}>0\) such that for every \(N\) there exist \(n\geq N\) and a point \(x_n\) with \({\left\lvert {f_n(x_n) - f(x_n)} \right\rvert} \geq {\varepsilon}\).
- Exponential inequality: \(1+y \leq e^y\) for all \(y\in {\mathbb{R}}\).
\(f_n\to 0\) pointwise:
- If \(x = 0\) or \(x = 1\), then \(f_n(x) = 0\) for every \(n\).
- If \(x \in (0, 1)\), then \(0 < 1-x < 1\) and \(n(1-x)^n \to 0\) (exponential decay beats the linear factor: the ratio of successive terms is \({(n+1)(1-x)^{n+1} \over n(1-x)^n} \to 1-x < 1\)), so \(f_n(x) = x\cdot n(1-x)^n \to 0\).
- Finding the maximum (used for the next part): \({\frac{\partial f_n}{\partial x}\,} = n(1-x)^{n-1}\qty{1 - (n+1)x}\), which vanishes on \((0, 1)\) only at \(x_n \coloneqq{1\over n+1}\), a global max of \(f_n\) on \([0, 1]\).
- Compute \begin{align*} f_n(x_n) = {n\over n+1}\qty{1 - {1\over n+1}}^n = \qty{n\over n+1}^{n+1} \overset{n\to\infty}\longrightarrow e^{-1} .\end{align*}
The convergence is not uniform:
-
Since \({\left\lVert {f_n - 0} \right\rVert}_\infty = \sup_{x\in[0,1]} f_n(x) = f_n(x_n) = \qty{n\over n+1}^{n+1} \overset{n\to\infty}\longrightarrow e^{-1} \neq 0\), the sup norms do not tend to zero, so the convergence is not uniform.
- Concretely, \(\qty{n\over n+1}^{n+1} = \qty{1 - {1\over n+1}}^{n+1}\) is increasing in \(n\), so \({\left\lVert {f_n} \right\rVert}_\infty \geq {1\over 4}\) for all \(n\geq 1\), and \({\varepsilon}\coloneqq{1\over 4}\) witnesses the failure of uniform convergence.
- ?
- Using \(\sin(x) \leq x\) on \([0, 1]\), and noting the integrand is nonnegative there, \begin{align*} 0 \leq \int_0^1 n(1-x)^{n} \sin(x)\,dx &\leq \int_0^1 n x(1-x)^n \,dx \\ &= n \int_0^1 (1-x)^n - (1-x)^{n+1} \,dx \qquad x = 1 - (1-x) \\ &= n\qty{ {1\over n+1} - {1\over n+2} } \\ &= {n \over (n+1)(n+2)} \\ &\overset{n\to\infty}\longrightarrow 0 .\end{align*}
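A quick numerical check of both parts (a sketch; the grid and the particular \(n\) values are arbitrary): \(\sup f_n\) approaches \(e^{-1} \approx 0.3679\) rather than \(0\), while \(\int_0^1 n(1-x)^n \sin(x)\,dx \approx {n\over (n+1)(n+2)} \to 0\).

```python
import numpy as np

def trapezoid(y, xs):
    return float(np.sum((xs[1:] - xs[:-1]) * (y[1:] + y[:-1]) / 2))

x = np.linspace(0, 1, 1_000_001)

for n in [1, 10, 100, 1000]:
    fn = n * x * (1 - x) ** n
    integral = trapezoid(n * (1 - x) ** n * np.sin(x), x)
    print(n, fn.max(), integral)       # sup f_n -> 1/e, integral -> 0
print("1/e =", 1 / np.e)
```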
Spring 2017.3 #real_analysis/qual/work
Let \begin{align*} f_{n}(x) = a e^{-n a x} - b e^{-n b x} \quad \text{ where } 0 < a < b. \end{align*}
Show that
- \(\sum_{n=1}^{\infty} \left|f_{n}\right|\) is not in \(L^{1}([0, \infty), m)\)
Hint: \(f_n(x)\) has a root \(x_n\).
-
\begin{align*}
\sum_{n=1}^{\infty} f_{n} \text { is in } L^{1}([0, \infty), m)
{\quad \operatorname{and} \quad}
\int _{0}^{\infty} \sum _{n=1}^{\infty} f_{n}(x) \,dm = \ln \frac{b}{a}
\end{align*}
\todo[inline]{Not complete.}
\todo[inline]{Add concepts.}
\todo[inline]{Walk through.}
-
\(f_n\) has a root: \begin{align*} ae^{-nax} = be^{-nbx} &\iff {a\over b} = e^{nax}e^{-nbx} = e^{n(a-b)x} \iff x = {\ln\qty{a\over b} \over n(a-b)} = {\ln\qty{b\over a} \over n(b-a)} \coloneqq x_n > 0 .\end{align*}
-
Thus \(f_n\) changes sign only at \(x_n\); since \(a < b\), the term \(ae^{-nax}\) decays more slowly, so \(f_n > 0\) on \((x_n, \infty)\).
-
Then \begin{align*} \int_0^\infty \sum_n {\left\lvert {f_n(x)} \right\rvert}\,dx &= \sum_n \int_0^\infty {\left\lvert {f_n(x)} \right\rvert} \,dx \\ &\geq \sum_n \int_{x_n}^\infty f_n(x) \, dx \\ &= \sum_n {1\over n} \qty{ e^{-bnx} - e^{-anx} } \Big|_{x_n}^\infty \\ &= \sum_n {1\over n} \qty{ e^{-anx_n} - e^{-bnx_n} } .\end{align*}
Now \(nx_n = {\ln(b/a) \over b-a}\) does not depend on \(n\), so \begin{align*} e^{-anx_n} - e^{-bnx_n} = \qty{a\over b}^{a\over b-a} - \qty{a\over b}^{b\over b-a} \coloneqq C > 0 ,\end{align*} a constant, positive since \(0 < {a\over b} < 1\) and \({a\over b-a} < {b\over b-a}\). Thus \begin{align*} \int_0^\infty \sum_n {\left\lvert {f_n} \right\rvert} \,dm \geq \sum_n {C\over n} = \infty ,\end{align*} so \(\sum_n {\left\lvert {f_n} \right\rvert} \not\in L^1([0, \infty), m)\).
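A numerical check that \(n\int_{x_n}^\infty f_n\,dx\) really is the same constant \(C\) for every \(n\) (a sketch; the values \(a = 1\), \(b = 2\) and the truncation of the upper limit are arbitrary):

```python
import numpy as np

def trapezoid(y, xs):
    return float(np.sum((xs[1:] - xs[:-1]) * (y[1:] + y[:-1]) / 2))

a, b = 1.0, 2.0
C = (a / b) ** (a / (b - a)) - (a / b) ** (b / (b - a))     # claimed common value

for n in [1, 5, 50]:
    xn = np.log(b / a) / (n * (b - a))
    x = np.linspace(xn, xn + 50 / n, 2_000_001)             # [x_n, ~infinity)
    fn = a * np.exp(-n * a * x) - b * np.exp(-n * b * x)
    print(n, n * trapezoid(fn, x), C)                       # n * integral ~ C
```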
Fall 2016.1 #real_analysis/qual/completed
Define \begin{align*} f(x) = \sum_{n=1}^{\infty} \frac{1}{n^{x}}. \end{align*} Show that \(f\) converges to a differentiable function on \((1, \infty)\) and that \begin{align*} f'(x) =\sum_{n=1}^{\infty}\left(\frac{1}{n^{x}}\right)^{\prime}. \end{align*}
Hint: \begin{align*} \left(\frac{1}{n^{x}}\right)' = -\frac{1}{n^{x}} \ln n \end{align*}
\todo[inline]{Add concepts.}
- ?
-
Set \(f_N(x) \coloneqq\sum_{n=1}^N n^{-x}\), so \(f(x) = \lim_{N\to\infty} f_N(x)\).
-
If an interchange of limits is justified, we have \begin{align*} {\frac{\partial }{\partial x}\,} \lim_{N\to\infty} \sum_{n=1}^N n^{-x} &= \lim_{h\to 0} \lim_{N\to\infty} {1\over h} \left[ \qty{\sum_{n=1}^N n^{-x}} - \qty{\sum_{n=1}^N n^{-(x+h)} }\right] \\ &\mathop{\mathrm{=}}_{?} \lim_{N\to\infty} \lim_{h\to 0} {1\over h} \left[ \qty{\sum_{n=1}^N n^{-x}} - \qty{\sum_{n=1}^N n^{-(x+h)} }\right] \\ &= \lim_{N\to\infty} \lim_{h\to 0} {1\over h} \sum_{n=1}^N \left[ n^{-x} - n^{-(x+h)} \right] \quad\text{(1)} \\ &= \lim_{N\to\infty} \sum_{n=1}^N \lim_{h\to 0} {1\over h} \left[ n^{-x} - n^{-(x+h)} \right] \quad\text{since this is a finite sum} \\ &\coloneqq\lim_{N\to\infty} \sum_{n=1}^N {\frac{\partial }{\partial x}\,}\qty{1 \over n^x} \\ &= \lim_{N\to\infty} \sum_{n=1}^N -{\ln(n) \over n^x} ,\end{align*} where the combining of sums in (1) is just linearity of the finite sum.
-
Thus it suffices to justify the interchange of limits and show that the last sum converges on \((1, \infty)\).
-
Claim: \(\sum n^{-x}\ln(n)\) converges.
-
Use the fact that for any fixed \({\varepsilon}>0\), \begin{align*} \lim_{n\to\infty} {\ln(n) \over n^{\varepsilon}} \mathop{\mathrm{=}}^{L.H.} \lim_{n\to\infty}{1/n \over {\varepsilon}n^{{\varepsilon}-1}} = \lim_{n\to\infty} {1\over {\varepsilon}n^{\varepsilon}} = 0 ,\end{align*}
-
This implies that for a fixed \({\varepsilon}>0\) and for any constant \(c>0\) there exists an \(N\) large enough such that \(n\geq N\) implies \(\ln(n)/n^{\varepsilon}< c\), i.e. \(\ln(n) < c n^{{\varepsilon}}\).
-
Taking \(c=1\), we have \(n\geq N \implies \ln(n) < n^{\varepsilon}\)
-
We thus break up the sum: \begin{align*} \sum_{n\in {\mathbb{N}}} {\ln(n) \over n^x} &= \sum_{n=1}^{N-1} { \ln(n) \over n^x} + \sum_{n=N}^\infty {\ln(n) \over n^x} \\ &\leq \sum_{n=1}^{N-1} { \ln(n) \over n^x} + \sum_{n=N}^\infty {n^{\varepsilon}\over n^x} \\ &\coloneqq C_{\varepsilon}+ \sum_{n=N}^\infty {n^{\varepsilon}\over n^x} \quad \text{with $C_{\varepsilon}<\infty$ a constant}\\ &= C_{\varepsilon}+ \sum_{n=N}^\infty {1 \over n^{x-{\varepsilon}}} ,\end{align*} where the last term converges by the \(p{\hbox{-}}\)test if \(x-{\varepsilon}> 1\).
-
But \({\varepsilon}\) can depend on \(x\), and if \(x\in (1, \infty)\) is fixed we can choose \({\varepsilon}< {\left\lvert {x-1} \right\rvert}\) to ensure this.
-
-
Claim: the interchange of limits is justified.
- It suffices to work on a fixed compact interval \([a, b] \subset (1, \infty)\) with \(a > 1\). Fix \({\varepsilon}< a - 1\); by the estimate above, \(\ln(n) \leq n^{\varepsilon}\) for all \(n \geq N\), so \begin{align*} \sup_{x\in [a, b]} {\left\lvert { \ln(n) \over n^x } \right\rvert} \leq {\ln(n) \over n^a} \leq {1\over n^{a - {\varepsilon}}} \quad\text{for } n\geq N, \qquad \sum_n {1\over n^{a-{\varepsilon}}} < \infty \end{align*} since \(a - {\varepsilon}> 1\). By the \(M{\hbox{-}}\)test, the series of derivatives \(\sum_n \qty{n^{-x}}' = -\sum_n \ln(n)\, n^{-x}\) converges uniformly on \([a, b]\).
- Since \(\sum_n n^{-x}\) converges pointwise on \([a, b]\) and the series of derivatives converges uniformly there, the term-by-term differentiation theorem gives \(f'(x) = \sum_n \qty{n^{-x}}'\) on \([a, b]\); as \([a, b] \subset (1, \infty)\) was arbitrary, this holds on all of \((1, \infty)\).
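A numerical spot check of the term-by-term differentiation (a sketch; the point \(x = 2\), the step \(h\), and the truncation at \(2\times 10^6\) terms are arbitrary): a symmetric difference quotient of the truncated sum should match \(-\sum \ln(n)/n^2 \approx -0.9375\).

```python
import numpy as np

n = np.arange(1, 2_000_001, dtype=float)

def f(x):
    return np.sum(n ** -x)               # truncated version of sum n^{-x}

x0, h = 2.0, 1e-5
numeric  = (f(x0 + h) - f(x0 - h)) / (2 * h)
termwise = -np.sum(np.log(n) / n ** x0)  # sum of the term-by-term derivatives
print(numeric, termwise)                 # both ~ -0.9375
```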
Fall 2016.5 #real_analysis/qual/completed
Let \(\phi\in L^\infty({\mathbb{R}})\). Show that the following limit exists and satisfies the equality
\begin{align*}
\lim _{n \to \infty} \left(\int _{\mathbb{R}} \frac{|\phi(x)|^{n}}{1+x^{2}} \, dx \right) ^ {\frac{1}{n}}
= {\left\lVert {\phi} \right\rVert}_\infty.
\end{align*}
\todo[inline]{Add concepts.}
- ?
Let \(L\) be the LHS and \(R\) be the RHS.
Claim: \(L\leq R\).
- Since \({\left\lvert {\phi } \right\rvert}\leq {\left\lVert {\phi} \right\rVert}_\infty\) a.e., we can write \begin{align*} \qty{ \int_{\mathbb{R}}{{\left\lvert {\phi(x)} \right\rvert}^n \over 1 + x^2}\,dx }^{1\over n} \leq \qty{ {\left\lVert {\phi} \right\rVert}_\infty^n \int_{\mathbb{R}}{1 \over 1 + x^2}\,dx }^{1\over n} = {\left\lVert {\phi} \right\rVert}_\infty \cdot \pi^{1\over n} \overset{n\to\infty}\longrightarrow{\left\lVert {\phi} \right\rVert}_\infty ,\end{align*} using that \(c^{1\over n} \to 1\) for any constant \(c > 0\).
Claim: \(R\leq L\).
- We will show that \(R\leq L + {\varepsilon}\) for every \({\varepsilon}>0\).
- Set \begin{align*} S_{\varepsilon}\coloneqq\left\{{x\in {\mathbb{R}}{~\mathrel{\Big\vert}~}{\left\lvert {\phi(x)} \right\rvert} \geq {\left\lVert {\phi} \right\rVert}_\infty - {\varepsilon}}\right\} ,\end{align*} which has positive measure by definition of the essential supremum.
- Then we have \begin{align*} \int_{\mathbb{R}}{{\left\lvert {\phi(x)} \right\rvert}^n \over 1 +x^2}\,dx &\geq \int_{S_{\varepsilon}} {{\left\lvert {\phi(x)} \right\rvert}^n \over 1 +x^2}\,dx \quad S_{\varepsilon}\subset {\mathbb{R}}\\ &\geq \int_{S_{\varepsilon}} { \qty{{\left\lVert {\phi} \right\rVert}_\infty - {\varepsilon}}^n \over 1 +x^2}\,dx \qquad\text{by definition of }S_{\varepsilon}\\ &= \qty{{\left\lVert {\phi} \right\rVert}_\infty - {\varepsilon}}^n \int_{S_{\varepsilon}} { 1 \over 1 +x^2}\,dx \\ &= \qty{{\left\lVert {\phi} \right\rVert}_\infty - {\varepsilon}}^n C_{\varepsilon}\qquad\text{where $C_{\varepsilon}$ is some constant} \\ \\ \implies \qty{ \int_{\mathbb{R}}{{\left\lvert {\phi(x)} \right\rvert}^n \over 1 +x^2}\,dx }^{1\over n} &\geq \qty{{\left\lVert {\phi} \right\rVert}_\infty - {\varepsilon}} C_{\varepsilon}^{1 \over n} \\ &\overset{n\to\infty}\to \qty{{\left\lVert {\phi} \right\rVert}_\infty - {\varepsilon}} \cdot 1 \\ &\overset{{\varepsilon}\to 0}\to {\left\lVert {\phi} \right\rVert}_\infty ,\end{align*} where we’ve again used the fact that \(c^{1\over n} \to 1\) for any constant.
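A numerical sketch with a concrete \(\phi\) (the step function below, the truncation of \({\mathbb{R}}\) to \([-50, 50]\), and the grid are arbitrary): the \(n\)-th roots should tend to \({\left\lVert {\phi} \right\rVert}_\infty = 2\).

```python
import numpy as np

def trapezoid(y, xs):
    return float(np.sum((xs[1:] - xs[:-1]) * (y[1:] + y[:-1]) / 2))

x = np.linspace(-50, 50, 2_000_001)          # [-50, 50] stands in for R
phi = np.where(np.abs(x) <= 1, 2.0, 1.0)     # ||phi||_infty = 2

for n in [1, 5, 20, 80]:
    val = trapezoid(phi ** n / (1 + x ** 2), x)
    print(n, val ** (1 / n))                 # tends to 2 = ||phi||_infty
```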
Fall 2016.6 #real_analysis/qual/completed
Let \(f, g \in L^2({\mathbb{R}})\). Show that \begin{align*} \lim _{n \to \infty} \int _{{\mathbb{R}}} f(x) g(x+n) \,dx = 0 \end{align*}
\todo[inline]{Rewrite solution.}
- Cauchy-Schwarz: \({\left\lVert {fg} \right\rVert}_1 \leq {\left\lVert {f} \right\rVert}_2 {\left\lVert {g} \right\rVert}_2\).
- Small tails in \(L^p\).
-
Use the fact that \(L^p\) has small tails: if \(h\in L^2({\mathbb{R}})\), then for any \({\varepsilon}> 0\), \begin{align*} \forall {\varepsilon},\, \exists N\in {\mathbb{N}}{\quad \operatorname{such that} \quad}\int_{{\left\lvert {x} \right\rvert} \geq {N}} {\left\lvert {h(x)} \right\rvert}^2 \,dx < {\varepsilon} .\end{align*}
-
So choose \(N\) large enough so that \begin{align*} \int_{{\left\lVert {x} \right\rVert} \geq N}{\left\lvert {g(x)} \right\rvert}^2 < {\varepsilon}\\ \int_{{\left\lVert {x} \right\rVert} \geq N}{\left\lvert {f(x)} \right\rvert}^2 < {\varepsilon}\\ .\end{align*}
-
Then write \begin{align*} \int_{{\mathbb{R}}^d} f(x) g(x+n) \,dx = \int_{{\left\lVert {x} \right\rVert} \leq N} f(x)g(x+n)\,dx + \int_{{\left\lVert {x} \right\rVert} \geq N} f(x) g(x+n)\,dx .\end{align*}
-
Bounding the second term: apply Cauchy-Schwarz and translation invariance \begin{align*} {\left\lvert { \int_{{\left\lVert {x} \right\rVert} \geq N} f(x) g(x+n)\,dx } \right\rvert} \leq \qty{ \int_{{\left\lVert {x} \right\rVert} \geq N} {\left\lvert {f(x)} \right\rvert}^2}^{1\over 2} \cdot \qty{ \int_{{\left\lVert {x} \right\rVert} \geq N} {\left\lvert {g(x+n)} \right\rvert}^2}^{1\over 2} \leq {\varepsilon}^{1\over 2} \cdot {\left\lVert {g} \right\rVert}_2 .\end{align*}
-
Bounding the first term: also Cauchy-Schwarz, after a change of variables \begin{align*} {\left\lvert { \int_{{\left\lVert {x} \right\rVert} \leq N} f(x) g(x+n)\,dx } \right\rvert} &\leq \int_{-N}^N {\left\lvert {f(x)} \right\rvert}\, {\left\lvert {g(x+n)} \right\rvert}\,dx \\ &= \int_{-N+n}^{N+n} {\left\lvert {f(x-n)} \right\rvert}\, {\left\lvert {g(x)} \right\rvert}\,dx \\ &\leq \qty{\int_{\mathbb{R}} {\left\lvert {f(x-n)} \right\rvert}^2\,dx}^{1\over 2}\cdot \qty{\int_{-N+n}^{\infty} {\left\lvert {g(x)} \right\rvert}^2\,dx}^{1\over 2} \\ &\leq {\left\lVert {f} \right\rVert}_2 \cdot {\varepsilon}^{1\over 2} ,\end{align*} where the last step uses that once \(n \geq 2N\) we have \(n - N \geq N\), so \([n-N, \infty) \subseteq [N, \infty)\).
-
Then as long as \(n\geq 2N\), we have \begin{align*} \int {\left\lvert {f(x) g(x+n)} \right\rvert} \leq \qty{{\left\lVert {f} \right\rVert}_2 + {\left\lVert {g} \right\rVert}_2} \cdot {\varepsilon}^{1\over 2} .\end{align*}
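A numerical sketch with two concrete square-integrable functions (arbitrary choices; the domain is truncated to \([-60, 60]\)): as the translate moves off to infinity, the overlap integral dies.

```python
import numpy as np

def trapezoid(y, xs):
    return float(np.sum((xs[1:] - xs[:-1]) * (y[1:] + y[:-1]) / 2))

x = np.linspace(-60, 60, 2_000_001)
f = np.exp(-x ** 2)                                  # f in L^2(R)
g = lambda t: np.exp(-t ** 2) * np.sign(np.cos(t))   # g in L^2(R), not of one sign

for shift in [0, 1, 3, 6, 12]:
    print(shift, trapezoid(f * g(x + shift), x))     # -> 0 as the shift grows
```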
Spring 2016.1 #real_analysis/qual/work
For \(n\in {\mathbb{N}}\), define \begin{align*} e_{n} = \left (1+ {1\over n} \right)^{n} {\quad \operatorname{and} \quad} E_{n} = \left( 1+ {1\over n} \right)^{n+1} \end{align*}
Show that \(e_n < E_n\), and prove Bernoulli’s inequality: \begin{align*} (1+x)^n \geq 1+nx && -1 < x < \infty ,\,\, n\in {\mathbb{N}} .\end{align*}
Use this to show the following:
- The sequence \(e_n\) is increasing.
- The sequence \(E_n\) is decreasing.
- \(2 < e_n < E_n < 4\).
- \(\lim _{n \to \infty} e_{n} = \lim _{n \to \infty} E_{n}\).
Fall 2015.1 #real_analysis/qual/work
Define \begin{align*} f(x)=c_{0}+c_{1} x^{1}+c_{2} x^{2}+\ldots+c_{n} x^{n} \text { with } n \text { even and } c_{n}>0. \end{align*}
Show that there is a number \(x_m\) such that \(f(x_m) \leq f(x)\) for all \(x\in {\mathbb{R}}\).
Spring 2014.2 #real_analysis/qual/completed
Let \(\left\{{a_n}\right\}\) be a sequence of real numbers such that \begin{align*} \left\{{b_n}\right\} \in \ell^2({\mathbb{N}}) \implies \sum a_n b_n < \infty. \end{align*} Show that \(\sum a_n^2 < \infty\).
Note: Assume \(a_n, b_n\) are all non-negative.
\todo[inline]{Have someone check!}
-
Define a sequence of linear functionals \begin{align*} T_N: \ell^2 &\to {\mathbb{R}}\\ \left\{{b_n}\right\} &\mapsto \sum_{n=1}^N a_n b_n .\end{align*}
-
Each \(T_N\) is well defined and bounded, since it involves only finitely many terms: by Cauchy-Schwarz, \({\left\lvert {T_N(\left\{{b_n}\right\})} \right\rvert} \leq \qty{\sum_{n=1}^N a_n^2}^{1\over 2} {\left\lVert {\left\{{b_n}\right\}} \right\rVert}_{\ell^2} < \infty\).
-
So each \(T_N \in \qty{\ell^2} {}^{ \vee }\) is a linear functional on \(\ell^2\).
-
For each \(b = \left\{{b_n}\right\} \in \ell^2\), since \(a_n, b_n \geq 0\), we have \begin{align*} \sup_N {\left\lvert {T_N(b)} \right\rvert} = \sup_N \sum_{n=1}^N a_n b_n \leq \sum_{n=1}^\infty a_n b_n < \infty \end{align*} by assumption, so the family \(\left\{{T_N}\right\}\) is pointwise bounded.
-
By the Uniform Boundedness Principle, \(\sup_N {\left\lVert {T_N} \right\rVert}_{\text{op}} < \infty\).
-
Define \(T(b) \coloneqq\lim_{N \to\infty } T_N(b) = \sum_{n=1}^\infty a_n b_n\), which exists for every \(b\) by assumption; then \({\left\lvert {T(b)} \right\rvert} \leq \qty{\sup_N {\left\lVert {T_N} \right\rVert}_{\text{op}}} {\left\lVert {b} \right\rVert}_{\ell^2}\), so \({\left\lVert {T} \right\rVert}_{\text{op}} < \infty\).
-
By the Riesz Representation theorem, \begin{align*} \sqrt{\sum a_n^2} \coloneqq{\left\lVert {\left\{{a_n}\right\}} \right\rVert}_{\ell^2} = {\left\lVert {T} \right\rVert}_{\qty{\ell^2} {}^{ \vee }} = {\left\lVert {T} \right\rVert}_{\text{op}} < \infty .\end{align*}
-
So \(\sum a_n^2 < \infty\).