Fix typos introduces more
Zheyuan Wu
2024-12-03 11:20:59 -06:00
parent cbed1333ed
commit 9283c6b427
21 changed files with 213 additions and 44 deletions

View File

@@ -6,7 +6,7 @@
- In general, we can design an algorithm to map instances of a new problem to instances of known solvable problem (e.g., max-flow) to solve this new problem!
- Mapping from one problem to another which preserves solutions is called reduction.
-## Reduction: Basic Idea
+## Reduction: Basic Ideas
Convert solutions to the known problem to the solutions to the new problem

View File

@@ -45,7 +45,7 @@ Assumption: No clause contains both a literal and its negation.
Need to: construct $S$ of positive numbers and a target $t$
-Idea of construction:
+Ideas of construction:
For 3-SAT instance $\Psi$:
@@ -276,7 +276,7 @@ Consider an instance of SSS: $\{ a_1,a_2,\cdots,a_n\}$ and sum $b$. We can creat
Then we prove that the scheduling instance is a "yes" instance if and only if the SSS instance is a "yes" instance.
-Idea of proof:
+Ideas of proof:
If there is a subset of $\{a_1,a_2,\cdots,a_n\}$ that sums to $b$, then we can schedule the jobs in that order on one machine.

View File

@@ -38,7 +38,7 @@ Answer: The adversary can make the runtime of each operation $\Theta(n)$ by simp
We don't want the adversary to know the hash function based on just looking at the code.
-Idea: Randomize the choice of the hash function.
+Ideas: Randomize the choice of the hash function.
### Randomized Algorithm
@@ -57,7 +57,7 @@ $$O(n)=E[T(n)]$$ or some other probabilistic quantity.
#### Randomization can help
-Idea: Randomize the choice of hash function $h$ from a family of hash functions, $H$.
+Ideas: Randomize the choice of hash function $h$ from a family of hash functions, $H$.
If we randomly pick a hash function from this family, then the probability that the hash function is bad on **any particular** set $S$ is small.

View File

@@ -82,7 +82,7 @@ The NBT(Next bit test) is complete.
If $\{X_n\}$ on $\{0,1\}^{l(n)}$ passes NBT, then it's pseudorandom.
-Idea of proof: full proof is on the text.
+Ideas of proof: full proof is on the text.
Our idea is that we want to create $H^{l(n)}_n=\{X_n\}$ and $H^0_n=\{U_{l(n)}\}$
@@ -137,7 +137,7 @@ The other part of proof will be your homework, damn.
If one-way function exists, then Pseudorandom Generator exists.
-Idea of proof:
+Ideas of proof:
Let $f:\{0,1\}^n\to \{0,1\}^n$ be a strong one-way permutation (bijection).

View File

@@ -16,7 +16,7 @@ $$
Pr[x\gets \{0,1\}^n;y=f(x);A(1^n,y)=h(x)]\leq \frac{1}{2}+\epsilon(n)
$$
-Idea: $f:\{0,1\}^n\to \{0,1\}^*$ is a one-way function.
+Ideas: $f:\{0,1\}^n\to \{0,1\}^*$ is a one-way function.
Given $y=f(x)$, it is hard to recover $x$. A cannot produce all of $x$ but can know some bits of $x$.
@@ -46,7 +46,7 @@ $\langle x,1^n\rangle=x_1+x_2+\cdots +x_n\mod 2$
$\langle x,0^{n-1}1\rangle=x_ n$
-Idea of proof:
+Ideas of proof:
If A could reliably find $\langle x,1^n\rangle$, with $r$ being completely random, then it could find $x$ too often.

View File

@@ -123,7 +123,7 @@ $Enc_F(m):$ let $r\gets U_n$; output $(r,F(r)\oplus m)$.
$Dec_F(m):$ Given $(r,c)$, output $m=F(r)\oplus c$.
-Idea: Adversary sees $r$ but has no idea about $F(r)$. (we choose all outputs at random)
+Ideas: Adversary sees $r$ but has no Ideas about $F(r)$. (we choose all outputs at random)
If we could do this, this is MMS (multi-message secure).

View File

@@ -77,7 +77,7 @@ With $g^a,g^b$ no one can compute $g^{ab}$.
### Public key encryption scheme
-Idea: The recipient Bob distributes opened Bob-locks
+Ideas: The recipient Bob distributes opened Bob-locks
- Once closed, only Bob can open it.

View File

@@ -110,7 +110,7 @@ $\{p\gets \tilde{\Pi_n};y\gets Gen_q;a,b,\bold{z}\gets \mathbb{Z}_q:(p,y,y^a,y^b
So DDH assumption implies discrete logarithm assumption.
-Idea:
+Ideas:
If one can find $a,b$ from $y^a,y^b$, then one can find $ab$ from $y^{ab}$ and compare to $\bold{z}$ to check whether $y^\bold{z}$ is a valid DDH tuple.

View File

@@ -28,7 +28,7 @@ This is not more than one-time secure since the adversary can ask oracle for $Si
We will show it is one-time secure
-Idea of proof:
+Ideas of proof:
Say their query is $Sign_{sk}(0^n)$ and reveals $pk_0$.

View File

@@ -104,7 +104,7 @@ One-time secure:
Then ($Gen',Sign',Ver'$) is one-time secure.
-Idea of Proof:
+Ideas of Proof:
If the digital signature scheme ($Gen',Sign',Ver'$) is not one-time secure, then there exists an adversary $\mathcal{A}$ which can ask oracle for one signature on $m_1$ and receive $\sigma_1=Sign'_{sk'}(m_1)=Sign_{sk}(h_i(m_1))$.

View File

@@ -2,9 +2,9 @@
## Relations between series and topology (compactness, closure, etc.)
-Limit points $E'=\{x\in\mathbb{R}:\forall r>0, B_r(x)\backslash\{x\}\cap E\neq\emptyset\}$
-Closure $\overline{E}=E\cup E'=\{x\in\mathbb{R}:\forall r>0, B_r(x)\cap E\neq\emptyset\}$
+Limit points $E'=\{x\in\mathbb{R}:\forall r>0, B_r(x)\backslash\{x\}\cap E\neq\phi\}$
+Closure $\overline{E}=E\cup E'=\{x\in\mathbb{R}:\forall r>0, B_r(x)\cap E\neq\phi\}$
$p_n\to p\implies \forall \epsilon>0, \exists N$ such that $\forall n\geq N, p_n\in B_\epsilon(p)$
@@ -24,7 +24,7 @@ Rudin Proof:
Rudin's proof uses a fact from Chapter 2.
-If $E$ is compact, and $S\subseteq E$ is infinite, then $S$ has a limit point in $E$ ($S'\cap E\neq\emptyset$).
+If $E$ is compact, and $S\subseteq E$ is infinite, then $S$ has a limit point in $E$ ($S'\cap E\neq\phi$).
## Examples of Cauchy sequence that does not converge

View File

@@ -30,7 +30,7 @@ A 2-cell is a set of the form $[a_1,b_1]\times[a_2,b_2]$
Theorem 2.38 replace with "closed and bounded intervals" to "k-cells".
-Idea of Proof:
+Ideas of Proof:
Apply the Theorem to each dimension separately.

View File

@@ -146,7 +146,7 @@ This proves the claim.
By definition of supremum, the claim implies that $\forall \epsilon>0$, $diam(\overline{E})\leq 2\epsilon+diam E$. So $diam(\overline{E})\leq diam E$.
-(b) By **Theorem 2.36**, $\bigcap_{n=1}^{\infty}K_n\neq \emptyset$. Suppose for contradiction that there are at least two distinct points $p,q\in \bigcap_{n=1}^{\infty}K_n$. Then for all $n\in \mathbb{N}$, $x,y\in K_n$ so $diam K_n\geq d(p,q)>0$. Then diameter of $K_n$ does not converge to 0.
+(b) By **Theorem 2.36**, $\bigcap_{n=1}^{\infty}K_n\neq \phi$. Suppose for contradiction that there are at least two distinct points $p,q\in \bigcap_{n=1}^{\infty}K_n$. Then for all $n\in \mathbb{N}$, $x,y\in K_n$ so $diam K_n\geq d(p,q)>0$. Then diameter of $K_n$ does not converge to 0.
EOP

View File

@@ -78,7 +78,7 @@ So if $\limsup_{n\to\infty} t_n \leq \limsup_{n\to\infty} s_n$, then $\lim_{n\to
Now we will show $\limsup_{n\to\infty} t_n \geq e$.
-Idea: (special case of the argument)
+Ideas: (special case of the argument)
If $n\geq 2$, then

View File

@@ -230,7 +230,7 @@ $$
\sum_{n=0}^\infty a_n=\sum_{n=0}^\infty a_{f(n)}
$$
-Idea of proof:
+Ideas of proof:
Let $f:\mathbb{N}\to \mathbb{N}$ be a bijection.

View File

@@ -1 +1,188 @@
# Lecture 24
## Reviews
Let $f: X\to Y$. Consider the following statement:
"$f$ is continuous $\iff$ for every open set $V\in Y$, $f^{-1}(V)$ is open in $X$."
1. To give a direct proof of the $\implies$ direction, what must be the first few steps be?
2. To give a direct proof of the $\impliedby$ direction, what must be the first few steps be?
3. Try to complete the proofs of both directions.
> A function $f:X\to Y$ is continuous if $\forall p\in X$, $\forall \epsilon > 0$, $\exists \delta > 0$ such that $f(B_\delta(p))\subset B_\epsilon(f(p))$. (_Every point of the ball $B_\delta(p)$ is mapped into the ball $B_\epsilon(f(p))$._)
>
> A set $V\subset Y$ is open if $\forall q\in V$, $\exists r>0$ such that $B_r(q)\subset V$.
## New materials
### Continuity and open sets
#### Theorem 4.8
A function $f:X\to Y$ is continuous if and only if for every open set $V\subset Y$, $f^{-1}(V)$ is open in $X$.
Proof:
$\implies$: Suppose $f$ is continuous. Let $V\subset Y$ be open and let $p\in f^{-1}(V)$. Since $f(p)\in V$ and $V$ is open, $\exists \epsilon > 0$ such that $B_\epsilon(f(p))\subset V$.
Since $f$ is continuous, $\exists \delta > 0$ such that $f(B_\delta(p))\subset B_\epsilon(f(p))\subset V$. Therefore, $B_\delta(p)\subset f^{-1}(V)$. This shows that $f^{-1}(V)$ is open.
$\impliedby$: Suppose that for every open set $V\subset Y$, $f^{-1}(V)$ is open in $X$. Let $p\in X$ and $\epsilon > 0$. The ball $B_\epsilon(f(p))$ is open in $Y$, so by assumption $f^{-1}(B_\epsilon(f(p)))$ is open in $X$.
Since $p\in f^{-1}(B_\epsilon(f(p)))$ and $f^{-1}(B_\epsilon(f(p)))$ is open, $\exists \delta > 0$ such that $B_\delta(p)\subset f^{-1}(B_\epsilon(f(p)))$. Therefore, $f(B_\delta(p))\subset B_\epsilon(f(p))$. This shows that $f$ is continuous.
EOP
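For a quick concrete check of the theorem, take $f:\mathbb{R}\to\mathbb{R}$, $f(x)=x^2$ and the open set $V=(1,4)$:
$$
f^{-1}((1,4)) = \{x\in\mathbb{R} : 1 < x^2 < 4\} = (-2,-1)\cup(1,2),
$$
which is open in $\mathbb{R}$, as the theorem predicts.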
#### Corollary 4.8
$f$ is continuous if and only if for every closed set $C\subset Y$, $f^{-1}(C)$ is closed in $X$.
Ideas of proof:
- $C$ closed in $Y\iff Y\backslash C$ open in $Y$
- $f^{-1}(C)$ closed in $X\iff f^{-1}(Y\backslash C)$ open in $X$
- $f^{-1}(Y\backslash C) = X\backslash f^{-1}(C)$
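The last identity can be checked elementwise:
$$
x\in f^{-1}(Y\backslash C)\iff f(x)\in Y\backslash C\iff f(x)\notin C\iff x\notin f^{-1}(C)\iff x\in X\backslash f^{-1}(C).
$$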
Continue this proof by yourself.
#### Theorem 4.7
Composition of continuous functions is continuous.
Suppose $X,Y,Z$ are metric spaces, $E\subset X$, $f:E\to Y$ is continuous, and $g:Y\to Z$ is continuous. Then $g\circ f:E\to Z$ is continuous.
Ideas of proof:
- Let $W\subset Z$ be open.
- $(g\circ f)^{-1}(W) = f^{-1}(g^{-1}(W))$
- Since $g$ is continuous, $g^{-1}(W)$ is open in $Y$ (Theorem 4.8).
- Since $f$ is continuous, $f^{-1}(g^{-1}(W))$ is open in $E$ (Theorem 4.8 again).
Apply Theorem 4.8 once more to conclude that $g\circ f$ is continuous.
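For example, combining this with Theorem 4.9 and Examples 4.11 below, $h(x) = |x^3-2x|$ is continuous on $\mathbb{R}$:
$$
h = g\circ f,\qquad f(x) = x^3-2x\ \text{(a polynomial)},\qquad g(y) = |y|.
$$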
#### Theorem 4.9
If $f:X\to \mathbb{C}$ and $g:X\to \mathbb{C}$ are continuous, then $f+g$, $fg$, and $f/g$ (the last provided $g(x)\neq 0$ for all $x\in X$) are continuous.
Ideas of proof:
We can reduce this theorem to the corresponding theorems about limits of functions and apply the limit laws from Chapter 3.
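For instance, using the continuity of polynomials (Examples 4.11 below),
$$
r(x) = \frac{x^2+1}{x^2+2},\qquad x^2+2\geq 2>0\ \text{for all } x\in\mathbb{R},
$$
so $r$ is continuous on all of $\mathbb{R}$ by Theorem 4.9.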
#### Examples of continuous functions 4.11
> $\forall p\in \mathbb{R}$, $\forall \epsilon > 0$, $\exists \delta > 0$ such that $\forall x\in \mathbb{R}$, $|x-p|<\delta\implies |f(x)-f(p)|<\epsilon$.
(a). $f:\mathbb{R}\to \mathbb{R}$, $f(x) = x$ is continuous (boring, but useful).
Proof:
Let $p\in \mathbb{R}$ and $\epsilon > 0$. Let $\delta = \epsilon$. Then, $\forall x\in \mathbb{R}$, if $|x-p|<\delta$, then $|f(x)-f(p)| = |x-p| < \delta = \epsilon$.
EOP
Therefore, by **Theorem 4.9**, $f(x) = x^2$ is continuous. $f(x) = x^3$ is continuous... So all polynomials are continuous.
(b). $f:\mathbb{R}^k\to \mathbb{R},f(x)=|x|$ is continuous.
Ideas of proof:
- $|f(x)-f(p)| = ||x|-|p||\leq |x-p|$, by the reverse triangle inequality (derived below).
- Let $\epsilon > 0$. Let $\delta = \epsilon$.
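The reverse triangle inequality itself follows from the triangle inequality:
$$
|x|\leq |x-p|+|p|\implies |x|-|p|\leq |x-p|,
$$
and swapping the roles of $x$ and $p$ gives $|p|-|x|\leq |x-p|$, hence $||x|-|p||\leq |x-p|$.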
### Continuity and compactness
#### Definition 4.13
A mapping $f$ of a set $E$ into a metric space $Y$ is said to be **bounded** if there is a real number $M$ such that $|f(x)|\leq M$ for all $x\in E$.
#### Theorem 4.14
$f:X\to Y$ is continuous. If $X$ is compact, then $f(X)$ is compact.
Proof strategy:
Let $\{V_\alpha\}_{\alpha\in A}$ be an open cover of $f(X)$. Since $f$ is continuous, each $f^{-1}(V_\alpha)$ is open (Theorem 4.8), so $\{f^{-1}(V_\alpha)\}_{\alpha\in A}$ is an open cover of $X$.
Since $X$ is compact, there exists a finite subcover $\{f^{-1}(V_{\alpha_i})\}_{i=1}^n$ of $X$.
Then $\{V_{\alpha_i}\}_{i=1}^n$ is a finite subcollection of $\{V_\alpha\}_{\alpha\in A}$ that covers $f(X)$.
See the detailed proof in the textbook.
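As an illustration of why compactness of the domain matters, consider $f(x) = \frac{1}{x}$:
$$
f\big((0,1]\big) = [1,\infty),\qquad f\big([1,2]\big) = \left[\tfrac{1}{2},1\right].
$$
On the non-compact domain $(0,1]$ the image is unbounded, hence not compact; on the compact domain $[1,2]$ the image is compact, as Theorem 4.14 guarantees.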
#### Theorem 4.16 (Extreme Value Theorem)
Suppose $X$ is a compact metric space and $f:X\to \mathbb{R}$ is continuous. Then $f$ has a maximum and a minimum on $X$.
i.e.
$$
\exists p_0,q_0\in X\text{ such that }f(p_0) = \sup f(X)\text{ and }f(q_0) = \inf f(X).
$$
Proof:
By Theorem 4.14, $f(X)$ is compact.
By Theorem 2.41, $f(X)$ is closed and bounded.
By Theorem 2.28, since $f(X)$ is bounded, $\sup f(X)$ and $\inf f(X)$ exist, and since $f(X)$ is closed, they belong to $f(X)$. Let $p_0\in X$ be such that $f(p_0) = \sup f(X)$, and let $q_0\in X$ be such that $f(q_0) = \inf f(X)$.
EOP
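For example, on the compact interval $[0,1]$ the continuous function
$$
f(x) = x(1-x)
$$
attains its maximum $\frac{1}{4}$ at $x=\frac{1}{2}$ and its minimum $0$ at the endpoints; on the non-compact interval $(0,1)$ the same formula attains no minimum, so the compactness hypothesis cannot be dropped.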
### Continuity and connectedness
> **Definition 2.45**: Let $X$ be a metric space. $A,B\subset X$ are **separated** if $\overline{A}\cap B = \phi$ and $\overline{B}\cap A = \phi$.
>
> $E\subset X$ is **disconnected** if there exist two nonempty separated sets $A$ and $B$ such that $E = A\cup B$.
>
> $E\subset X$ is **connected** if $E$ is not disconnected.
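For example, in $\mathbb{R}$ the sets $A = (0,1)$ and $B = (1,2)$ are separated:
$$
\overline{A}\cap B = [0,1]\cap(1,2) = \phi,\qquad \overline{B}\cap A = [1,2]\cap(0,1) = \phi,
$$
so $A\cup B$ is disconnected. By contrast, $(0,1]$ and $(1,2)$ are not separated, since $\overline{(1,2)}\cap(0,1] = \{1\}\neq\phi$; indeed their union $(0,2)$ is connected.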
#### Theorem 4.22
$f:X\to Y$ is continuous, $E\subset X$. If $E$ is connected, then $f(E)$ is connected.
Proof:
We will prove the contrapositive statement: if $f(E)$ is disconnected, then $E$ is disconnected.
Suppose $f(E)$ is disconnected. Then there exist two nonempty separated sets $A,B\subset Y$ such that $f(E) = A\cup B$.
Let $G = f^{-1}(A)\cap E$ and $H = f^{-1}(B)\cap E$.
We have:
$f(E)=A\cup B\implies E = G\cup H$, since every $x\in E$ satisfies $f(x)\in A$ or $f(x)\in B$.
Since $A$ and $B$ are nonempty subsets of $f(E)$, $G$ and $H$ are nonempty.
To complete the proof, we need to show $\overline{G}\cap H = \phi$ and $\overline{H}\cap G = \phi$.
We have $G = f^{-1}(A)\cap E\subset f^{-1}(A)\subset f^{-1}(\overline{A})$. Since $\overline{A}$ is closed and $f$ is continuous, $f^{-1}(\overline{A})$ is closed (Corollary 4.8). This implies that $\overline{G}\subset f^{-1}(\overline{A})$.
By the same argument, $\overline{H}\subset f^{-1}(\overline{B})$.
Since $A$ and $B$ are separated, $\overline{A}\cap B = \phi$ and $\overline{B}\cap A = \phi$.
Therefore, $\overline{G}\cap H\subset f^{-1}(\overline{A})\cap f^{-1}(B) = f^{-1}(\overline{A}\cap B) = \phi$, and similarly $\overline{H}\cap G = \phi$. So $G$ and $H$ are separated, and $E$ is disconnected.
EOP
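For contrast, continuity is needed here: the sign function on the connected interval $[-1,1]$,
$$
f(x) = \begin{cases} -1 & x<0,\\ 0 & x=0,\\ 1 & x>0,\end{cases}\qquad f([-1,1]) = \{-1,0,1\},
$$
has a disconnected image; this does not contradict Theorem 4.22 because $f$ is not continuous.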
#### Theorem 4.23 (Intermediate Value Theorem)
Let $f:[a,b]\to \mathbb{R}$ be continuous. If $c$ is a real number between $f(a)$ and $f(b)$, then there exists a point $x\in [a,b]$ such that $f(x) = c$.
Ideas of proof:
Use Theorem 2.47. A subset $E$ of $\mathbb{R}$ is connected if and only if it has the following property: if $x,y\in E$ and $x<z<y$, then $z\in E$.
Since $[a,b]$ is connected, by **Theorem 4.22**, $f([a,b])$ is connected.
$f(a)$ and $f(b)$ are real numbers in $f([a,b])$, and $c$ is a real number between $f(a)$ and $f(b)$.
By **Theorem 2.47**, $c\in f([a,b])$.
EOP
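For example, Theorem 4.23 shows that $p(x) = x^3-x-1$ has a root in $[1,2]$: $p$ is continuous (it is a polynomial), and
$$
p(1) = -1 < 0 < 5 = p(2),
$$
so there is a point $x\in[1,2]$ with $p(x) = 0$.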

View File

@@ -1 +0,0 @@
-# Lecture 26

View File

@@ -1 +0,0 @@
-# Lecture 27

View File

@@ -1 +0,0 @@
-# Lecture 28

View File

@@ -26,22 +26,7 @@ export default {
Math4111_L20: "Lecture 20",
Math4111_L21: "Lecture 21",
Math4111_L22: "Lecture 22",
-Math4111_L23: {
-  display: 'hidden'
-},
-Math4111_L24: {
-  display: 'hidden'
-},
-Math4111_L25: {
-  display: 'hidden'
-},
-Math4111_L26: {
-  display: 'hidden'
-},
-Math4111_L27: {
-  display: 'hidden'
-},
-Math4111_L28: {
-  display: 'hidden'
-}
+Math4111_L23: "Lecture 23",
+Math4111_L24: "Lecture 24",
+Math4111_L25: "Lecture 25"
}

View File

@@ -35,7 +35,7 @@ Suppose $V,W$ are finite dimensional with $dim(V)>dim(W)$, then there are no inj
Suppose $V,W$ are finite dimensional with $dim(V)<dim(W)$, then there are no surjective maps from $V$ to $W$.
-ideas of Proof: relies on **Theorem 3.21** $dim(null(T))>0$
+Ideas of Proof: relies on **Theorem 3.21** $dim(null(T))>0$
### Linear Maps and Linear Systems 3EX-1