-
-
Negate \(\forall x\in {\mathbf{R}},~\exists y\in {\mathbf{R}}{~\mathrel{\Big\vert}~}{\left\lvert {x-y} \right\rvert} \geq 2017\) \begin{align*}\exists x\in {\mathbf{R}}{~\mathrel{\Big\vert}~}\forall y\in {\mathbf{R}},~ {\left\lvert {x-y} \right\rvert} < 2017\end{align*}
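As a sanity check, the original statement is true, so its negation should be false: given any \(x \in {\mathbf{R}}\), taking \(y = x + 2017\) gives \begin{align*}{\left\lvert {x - y} \right\rvert} = 2017 \geq 2017.\end{align*}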
-
Note that \(p\implies q \iff q \vee \neg p\), so we have \(\neg(p \implies q) \iff \neg(q \vee \neg p) \iff p ~\&~ \neg q\). \begin{align*} f: {\mathbf{R}}\to {\mathbf{R}}\text{ is continuous } \iff \\ \forall x \in {\mathbf{R}}, ~\forall \varepsilon > 0,~\exists \delta > 0 {~\mathrel{\Big\vert}~}\forall y \in {\mathbf{R}},\quad d(x,y) < \delta \implies d(f(x), f(y)) < \varepsilon \iff \\ \forall x \in {\mathbf{R}}, ~\forall \varepsilon > 0,~\exists \delta > 0 {~\mathrel{\Big\vert}~}\forall y \in {\mathbf{R}},\quad d(x,y) \geq \delta ~~\vee~~ d(f(x), f(y)) < \varepsilon , \end{align*} so \begin{align*} f: {\mathbf{R}}\to {\mathbf{R}}\text{ is not continuous } \iff \\ \exists x\in {\mathbf{R}}, ~\exists \varepsilon > 0 {~\mathrel{\Big\vert}~}\forall \delta > 0, ~\exists y \in {\mathbf{R}}{~\mathrel{\Big\vert}~}\quad d(x,y) < \delta ~\&~ d(f(x), f(y)) \geq \varepsilon. \hfill\blacksquare \end{align*}
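For a concrete instance of the negated statement, consider the step function \(f(x) = 0\) for \(x < 0\) and \(f(x) = 1\) for \(x \geq 0\), which is discontinuous at \(0\): take \(x = 0\) and \(\varepsilon = \frac 1 2\); then for any \(\delta > 0\), the point \(y = -\delta/2\) satisfies \begin{align*} {\left\lvert {x - y} \right\rvert} = \frac \delta 2 < \delta \quad\&\quad {\left\lvert {f(x) - f(y)} \right\rvert} = 1 \geq \varepsilon. \end{align*}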
-
-
\(V = \left\{{\mathbf{v} \in {\mathbf{R}}^3 {~\mathrel{\Big\vert}~}{\left\langle {\mathbf{v}},~{{\left[ {3,4,5} \right]}} \right\rangle} = 0}\right\}\)
-
Subspace test: a nonempty subset \(V \subset X\) is a linear subspace iff \(\left\{{t\mathbf{v}_1 + \mathbf{v}_2 {~\mathrel{\Big\vert}~}t\in {\mathbf{R}}, \mathbf{v}_i \in V}\right\} \subseteq V\); here \(V \neq \emptyset\) since \(\mathbf{0} \in V\).
\begin{align*}
{\left\langle {t\mathbf{v}_1 + \mathbf{v}_2},~{{\left[ {3,4,5} \right]}} \right\rangle} = t{\left\langle {\mathbf{v}_1},~{{\left[ {3,4,5} \right]}} \right\rangle} + {\left\langle {\mathbf{v}_2},~{{\left[ {3,4,5} \right]}} \right\rangle} = t\cdot 0 + 0 = 0.\hfill\blacksquare
\end{align*}
- Alternatively, just note that it is the kernel of the linear map \({\left\langle {{-}},~{{\left[ {3,4,5} \right]}} \right\rangle}: {\mathbf{R}}^3 \to {\mathbf{R}}^1\), and kernels are always sub-things.
- Yes, note \(V\) defines a plane \(P \cong {\mathbf{R}}^2 \subset {\mathbf{R}}^3\) with normal direction \(P^\perp = \operatorname{span}{\left[ {3,4,5} \right]}\), so a map that records only the component of \(\mathbf{x}\) along \({\left[ {3,4,5} \right]}\) will work: \begin{align*} A = \left[ \begin{array}{ccc} 3 & 4 & 5 \\ 0 & 0 & 0 \\ 0 & 0 & 0\end{array}\right] \end{align*} Then \(A\mathbf{x} = {\left[ {3x + 4y + 5z, 0, 0} \right]}\), and if \(\mathbf{x} \in V\) then \(3x+4y+5z = 0\) by definition, so \(A\mathbf{x} = \mathbf{0}\). Conversely, if \(A\mathbf{x} = \mathbf{0}\) then \(3x+4y+5z = 0\) and \(\mathbf{x} \in V\), so \(\ker A = V\).
- Yes, first we look for a matrix that annihilates \({\left[ {3,4,5} \right]}\) and has rank 2, since its rows will then span the 2-dimensional subspace \(V\). One that works is \begin{align*} A = \left[ \begin{array}{ccc} 2 & 1 & -2 \\ 0 & -5 & 4 \\ 0 & 0 & 0\end{array}\right] \end{align*} since \({\left\langle {{\left[ {2,1,-2} \right]}},~{{\left[ {3,4,5} \right]}} \right\rangle} = 6 + 4 - 10 = 0\) and \({\left\langle {{\left[ {0,-5,4} \right]}},~{{\left[ {3,4,5} \right]}} \right\rangle} = -20 + 20 = 0\). So \({\left[ {2,1,-2} \right]}, {\left[ {0,-5,4} \right]} \in V\), and since \(A\) has rank 2, they in fact span \(V\). Thus we can take \(A^T\), whose columns are these vectors. Then the column space of \(A^T\) is \(V\), and thus the linear map corresponding to \(A^T\) has image \(V\). \(\hfill\blacksquare\)
- No, by rank-nullity: \(\dim \operatorname{im}A + \dim \ker A = \dim(\operatorname{domain} A)\), but \(\dim V = 2\), so this would force the contradiction \(2+2 = 3\).
-
-
Induct on \(n\), integrating by parts and using L'Hôpital's rule. Base case: \begin{align*}n=1 \implies \int_0^\infty xe^{-x}~dx = -xe^{-x}\bigg\rvert_0^\infty + \int_0^\infty e^{-x}~dx = -\lim_{x\to\infty} \frac x {e^x} - \lim_{x\to\infty} \frac 1 {e^x} + 1 \\ \overset{L.H.}{=} -\lim_{x\to\infty} \frac 1 {e^x} - 0 + 1 = 1\end{align*}
Inductive step: \begin{align*} \int_0^\infty x^ne^{-x}~dx = -x^ne^{-x}\bigg\rvert_0^\infty - \int_0^\infty -nx^{n-1}e^{-x}~dx = -x^ne^{-x}\bigg\rvert_0^\infty + n \int_0^\infty x^{n-1}e^{-x}~dx \\ \overset{I.H.}{=} -x^ne^{-x}\bigg\rvert_0^\infty + n (n-1)! \\ \overset{L.H.\times n}{=}0 + n(n-1)! = n!. \hfill\blacksquare \end{align*}
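For instance, unwinding the recursion once at \(n=2\): \begin{align*}\int_0^\infty x^2 e^{-x}~dx = 2\int_0^\infty x e^{-x}~dx = 2 \cdot 1 = 2!.\end{align*}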
-
-
Since \(A\) is \(2\times 2\) with the two distinct eigenvalues \(\pm 1\), and \(\deg \chi_A(x) = 2\), we have \(\chi_A(x) = (x-1)(x+1) = x^2 -1\). The minimal polynomial of \(A\) divides \(\chi_A(x)\); writing \(\chi_A = q \cdot p_{\min}\) gives \(\chi_A(A) = q(A)\,p_{\min}(A) = 0\), and thus \(A^2 - I_2 = 0 \implies A^2 = I_2\).
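For instance, this holds for \begin{align*} A = \left[ \begin{array}{cc} 0 & 1 \\ 1 & 0 \end{array}\right], \quad \chi_A(x) = x^2 - 1, \quad \operatorname{Spec}(A) = [1, -1], \quad A^2 = I_2. \end{align*}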
-
This will happen when \(x^2-1 = (x+1)(x-1)\) is not the minimal polynomial, and we can force the minimal polynomial to be degree 3 by inserting a nontrivial Jordan block into a diagonal matrix containing just the eigenvalues \(\pm 1\). An example that works: \begin{align*} \left(\begin{array}{rr|r} 1 & 1 & 0 \\ 0 & 1 & 0 \\ \hline 0 & 0 & -1 \end{array}\right),~ A^2 = \left(\begin{array}{rrr} 1 & 2 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{array}\right), \quad \operatorname{Spec}(A) = [-1, 1, 1] \end{align*}
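Explicitly, the size-2 Jordan block at eigenvalue 1 forces the factor \((x-1)^2\) into the minimal polynomial, so \(m_A(x) = (x-1)^2(x+1)\) does not divide \(x^2-1\); correspondingly \begin{align*} A^2 - I_3 = \left(\begin{array}{rrr} 0 & 2 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{array}\right) \neq 0. \end{align*}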
-
Every real symmetric matrix \(A\) has real spectrum and admits a real eigendecomposition \(A = \Lambda D \Lambda^T\), where \(D\) is diagonal with entries the eigenvalues of \(A\) and \(\Lambda\) is orthogonal (in particular invertible). Here we only need the fact that \(A\) is diagonalizable by an invertible matrix. In our case we have \([D^2]_{ii} = (\pm 1)^2 = 1\), so \(D^2 = I_n\). Thus we have \begin{align*}A^2 = (\Lambda D \Lambda^{-1})^2 = \Lambda D^2 \Lambda^{-1}= \Lambda I_n \Lambda^{-1}= I_n. \hfill\blacksquare\end{align*}
-
-
-
Definition of uniform convergence: \begin{align*} \left\{{f_n}\right\}_{n=1}^\infty\rightrightarrows f \text{ on } X \iff \forall \varepsilon > 0,~ \exists N(\varepsilon) {~\mathrel{\Big\vert}~}n\geq N(\varepsilon) \implies \forall x \in X,~ {\left\lvert {f_n(x) - f(x)} \right\rvert} < \varepsilon \end{align*}
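For instance, \(f_n(x) = \frac x n\) on \(X = [0,1]\) converges uniformly to \(f = 0\), since \begin{align*}\sup_{x\in[0,1]} {\left\lvert {f_n(x) - 0} \right\rvert} = \frac 1 n \to 0,\end{align*} so \(N(\varepsilon)\) can be taken to be any integer exceeding \(\frac 1 \varepsilon\).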
-
Theorem (Riemann-Lebesgue): A bounded function on a compact set is integrable iff its set of discontinuities has measure zero.
-
Theorem: \begin{align*} f_n \rightrightarrows f \implies \lim_{n\to\infty} \int_X f_n(x) ~dx = \int_X \lim_{n\to\infty} f_n(x) ~dx = \int_X f(x) ~dx \end{align*}
-
Proof (Pugh 218):
-
We first show \(f\) is integrable. Fix \(n\); by the Riemann-Lebesgue theorem, since \(f_n\) is integrable, it is bounded and its set of discontinuities \(Z_n\) has measure zero, so \(f_n\) is bounded and continuous on \([a,b] - Z_n\) where \(\mu(Z_n) = 0.\)
Let \(Z = \cup_n Z_n\), which is a countable union of measure-zero sets and thus has measure zero, so \(\mu(Z) = 0\). A uniform limit of continuous functions is continuous, so \(\lim f_n = f\) is a continuous function on \(S = [a,b] - Z\).
Since \(f\) is a uniform limit of bounded functions, it is bounded, and since \(f\) is both bounded and continuous off of a null set, it is integrable.
-
We now show \(\int f_n \to \int f\). Let \(C_b(X, {\mathbf{R}}) = \left\{{f:X \to {\mathbf{R}}{~\mathrel{\Big\vert}~}f\text{ is bounded }}\right\}\), which is a complete normed space under the norm \({\left\lVert {f} \right\rVert}_\infty = \displaystyle\sup_{x\in X}\left\{{{\left\lvert {f(x)} \right\rvert}}\right\}\), which induces the metric \begin{align*}d(f,g) = {\left\lVert {f-g} \right\rVert}_\infty = \sup_{x\in X}\left\{{{\left\lvert {f(x) - g(x)} \right\rvert}}\right\}.\end{align*}
-
Now \(f_n \rightrightarrows f \iff {\left\lVert {f-f_n} \right\rVert}_\infty \to 0\), so we can thus compute \begin{align*} {\left\lvert {\int_a^b f(x)~dx - \int_a^b f_n(x)~dx} \right\rvert} &= {\left\lvert {\int_a^b f(x) - f_n(x)~dx} \right\rvert} \\ &\leq \int_a^b {\left\lvert {f(x) - f_n(x)} \right\rvert}~dx \\ &\leq {\left\lVert {f-f_n} \right\rVert}_\infty {\left\lvert {b-a} \right\rvert} \to 0 \cdot {\left\lvert {b-a} \right\rvert} = 0. \end{align*}
Applying this to \(f = 0\), we have \(f_n \rightrightarrows 0 \implies \int f_n \to \int 0 = 0\). \(\hfill\blacksquare\)
-
-
-
Let \(\delta = \min\left\{{\frac 1 2, \sqrt{\frac \varepsilon 2}}\right\}\), then \begin{align*}{\left\lvert {x-1} \right\rvert} < \frac 1 2 \implies \frac 1 2 < x < \frac 3 2 \implies {\left\lvert {x} \right\rvert} > \frac 1 2 \implies \frac 1 {{\left\lvert {x} \right\rvert}} < 2\end{align*} and so \begin{align*} {\left\lvert {x-1} \right\rvert} < \delta \implies {\left\lvert {\frac{x^2+1}{x} - 2} \right\rvert} &= {\left\lvert {\frac{(x-1)^2}{x}} \right\rvert} \\ &= \frac{{\left\lvert {x-1} \right\rvert}^2}{{\left\lvert {x} \right\rvert}} \\ &< 2{{\left\lvert {x-1} \right\rvert}^2} \\ &< 2{\delta^2} \\ &\leq 2 \left(\frac{\varepsilon} 2\right) \\ &= \varepsilon. \hfill\blacksquare \end{align*}
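The algebraic identity used in the first step: \begin{align*}\frac{x^2+1}{x} - 2 = \frac{x^2 - 2x + 1}{x} = \frac{(x-1)^2}{x}.\end{align*}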
-
Definitions:
-
\(f: {\mathbf{R}}^2 \to {\mathbf{R}}, \quad (x,y) \mapsto z = f(x,y)\)
-
Level curves are given by \(f(x, y) = c\);
-
\(\nabla f: {\mathbf{R}}^2 \to {\mathbf{R}}^2, \quad \nabla f(\mathbf{p}) = {\left[ {f_x(\mathbf{p}), f_y(\mathbf{p})} \right]}\)
-
\(\gamma(t): {\mathbf{R}}\to {\mathbf{R}}^2, \quad \gamma(t) = {\left[ {x(t), y(t)} \right]}, \quad \gamma_t(t) = {\left[ {x_t(t), y_t(t)} \right]}\)
-
\(g = f \circ \gamma:{\mathbf{R}}\to {\mathbf{R}}, \quad (f \circ \gamma)(t) = f(x(t), y(t)), \quad g_t(t) = \nabla f(\gamma(t))\cdot \gamma_t(t)\)
-
A vector \(v\) is perpendicular to a surface at a point \(p\) iff \(v\) is perpendicular to the tangent vector at \(p\) of every curve in the surface passing through \(p\).
-
Proof of actual statement
-
Let \(\mathbf{p} = {\left[ {x_0, y_0} \right]}\) be a point on the level surface, so \(f(\mathbf{p}) = c\) for some constant \(c\).
-
Let \(\gamma(t) = {\left[ {x(t), y(t)} \right]}\) be any curve on the level surface passing through \(\mathbf{p}\), so that \(\gamma(t_0) = \mathbf{p}\) for some \(t_0\). Let \(\gamma'(t)\) be its tangent vector.
-
Let \(g(t) = f(x(t), y(t)) = (f \circ \gamma)(t)\), and note that since \(\gamma\) lies on the level surface, \(g(t) = f(\gamma(t)) = c\) for all \(t\); in particular \(g(t_0) = f(\mathbf{p}) = c\), and since \(g\) is constant, \({\frac{\partial g}{\partial t}\,}(t) = 0\) for all \(t\).
-
By the chain rule, compute \begin{align*} {\frac{\partial g}{\partial t}\,}(t) &= \left({\frac{\partial f}{\partial x}\,}{\frac{\partial x}{\partial t}\,} + {\frac{\partial f}{\partial y}\,}{\frac{\partial y}{\partial t}\,}\right)(t) \\ &= {\left\langle { {\left[ {{\frac{\partial f}{\partial x}\,}(x(t), y(t)), {\frac{\partial f}{\partial y}\,}(x(t), y(t))} \right]} },~{ {\left[ {{\frac{\partial x}{\partial t}\,}(t), {\frac{\partial y}{\partial t}\,}(t)} \right]} } \right\rangle} \\ &= {\left\langle {\nabla f(\gamma(t))},~{{\frac{\partial \gamma}{\partial t}\,} (t)} \right\rangle}. \end{align*}
From above, we know that this is zero. Now note that \begin{align*} {\frac{\partial g}{\partial t}\,}(t_0) = {\left\langle {\nabla f(\gamma(t_0))},~{{\frac{\partial \gamma}{\partial t}\,}(t_0)} \right\rangle} = {\left\langle {\nabla f(\mathbf{p})},~{\gamma'(t_0)} \right\rangle}, \end{align*} and by the previous statement \({\frac{\partial g}{\partial t}\,}(t_0) = 0\), which exactly says that \(\nabla f(\mathbf{p})\) is orthogonal to the tangent vector of \(\gamma\) at \(\mathbf{p}\). But \(\gamma\) was an arbitrary curve in the level surface through \(\mathbf{p}\), so \(\nabla f(\mathbf{p})\) is orthogonal to the tangent plane at \(\mathbf{p}\), and thus normal to the surface at \(\mathbf{p}\). Since \(\mathbf{p}\) was an arbitrary point on the level curve, \(\nabla f\) is normal to the level curve everywhere, and the same argument applies to every level curve. \(\hfill\blacksquare\)
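As a concrete check, take \(f(x,y) = x^2 + y^2\) with level curve \(f = 1\), parametrized by \(\gamma(t) = {\left[ {\cos t, \sin t} \right]}\): \begin{align*} {\left\langle {\nabla f(\gamma(t))},~{\gamma'(t)} \right\rangle} = {\left\langle {{\left[ {2\cos t, 2\sin t} \right]}},~{{\left[ {-\sin t, \cos t} \right]}} \right\rangle} = -2\cos t \sin t + 2 \sin t \cos t = 0, \end{align*} so the (radial) gradient is orthogonal to the (tangential) velocity at every point of the circle.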
-
-
Parts:
- Let \(x, y \in X\) and suppose \(f(x) = f(y)\). By assumption, \(g(f(x)) = x\) and \(g(f(y)) = y\), and since we also have \(g(f(x)) = g(f(y))\) we have \(g(f(y)) = x\). But \(g(f(y)) = y\), so \(y=x\).
- Let \(x\in X\); we will find a \(y\in Y\) such that \(g(y) = x\). Take \(y = f(x)\); then \(g(y) = g(f(x)) = x\) by assumption, as desired.
- We need \(f\) to fail surjectivity and \(g\) to fail injectivity, so take \(X = [1],~ Y = [2]\) where \begin{align*} f(1) = 1, \\ g(1) = 1, ~g(2) = 1 \end{align*} then \(g(f(1)) = 1\), and this exhausts \(X\). Since \({\left\lvert {X} \right\rvert} \neq {\left\lvert {Y} \right\rvert}\), these can't form a bijection; in particular \(2\not\in\operatorname{im}f \subsetneq Y\), so \(f\) is not surjective, and \(g(1) = g(2)\) with \(1 \neq 2\), so \(g\) is not injective.