update typo and structures

This commit is contained in:
Trance-0
2024-12-16 13:41:24 -06:00
parent ce830c9943
commit d471db49c4
24 changed files with 328 additions and 219 deletions


# Lecture 12
## Chapter 3: Indistinguishability and Pseudorandomness
$\{X_n\}$ and $\{Y_n\}$ are distinguishable by $\mu(n)$ if $\exists$ distinguisher $\mathcal{D}$
$$
|P[x\gets X_n:\mathcal{D}(x)=1]-P[y\gets Y_n:\mathcal{D}(y)=1]|\geq \mu(n)
$$
- If $\mu(n)\geq \frac{1}{p(n)}$ for some polynomial $p(n)$ and infinitely many $n$, then $\{X_n\}$ and $\{Y_n\}$ are distinguishable.
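To make the definition concrete, here is a toy sketch (both ensembles and the distinguisher are invented for illustration) that estimates a distinguisher's advantage by Monte Carlo sampling:

```python
import random

def advantage(D, sample_x, sample_y, trials=20_000):
    """Monte Carlo estimate of |P[x <- X_n : D(x)=1] - P[y <- Y_n : D(y)=1]|."""
    p_x = sum(D(sample_x()) for _ in range(trials)) / trials
    p_y = sum(D(sample_y()) for _ in range(trials)) / trials
    return abs(p_x - p_y)

n = 8
# Toy ensembles: X_n is biased toward 1-bits, Y_n is uniform over {0,1}^n.
sample_x = lambda: [1 if random.random() < 0.75 else 0 for _ in range(n)]
sample_y = lambda: [random.randint(0, 1) for _ in range(n)]
# Distinguisher: output 1 iff a majority of the bits are 1.
D = lambda t: int(sum(t) > n // 2)

print(advantage(D, sample_x, sample_y))  # noticeably bounded away from 0
```

Here the bias is so large that the majority test already gives a constant advantage; in the cryptographic setting the point is that no polynomial-time $\mathcal{D}$ achieves even $\frac{1}{p(n)}$.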
If $\{X_n\}\approx\{Y_n\}$, then so are $\{M(X_n)\}\approx\{M(Y_n)\}$
Proof:
If $\mathcal{D}$ distinguishes $M(X_n)$ and $M(Y_n)$ by $\mu(n)$ then $\mathcal{D}(M(\cdot))$ is also a polynomial-time distinguisher of $X_n,Y_n$.
### Hybrid Lemma
Let $X^0_n,X^1_n,\dots,X^m_n$ be ensembles indexed from $0,\dots,m$
If $\mathcal{D}$ distinguishes $X_n^0$ and $X_n^m$ by $\mu(n)$, then $\exists i,1\leq i\leq m$ where $X_{n}^{i-1}$ and $X_n^i$ are distinguished by $\mathcal{D}$ by $\frac{\mu(n)}{m}$
Proof: (we use the triangle inequality.) Let $p_i=P[t\gets X_n^i:\mathcal{D}(t)=1],0\leq i\leq m$. We have $|p_0-p_m|\geq \mu(n)$
Using telescoping tricks:
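The telescoping/pigeonhole step can be checked numerically; the acceptance probabilities $p_0,\dots,p_m$ below are hypothetical:

```python
def largest_adjacent_gap(ps):
    """Return (i, gap) for the adjacent pair p_{i-1}, p_i with the largest gap.

    Since sum_i |p_{i-1} - p_i| >= |p_0 - p_m| (triangle inequality),
    the largest adjacent gap is at least |p_0 - p_m| / m.
    """
    gaps = [abs(ps[k] - ps[k - 1]) for k in range(1, len(ps))]
    best = max(range(len(gaps)), key=gaps.__getitem__)
    return best + 1, gaps[best]

ps = [0.10, 0.12, 0.35, 0.40, 0.62]   # hypothetical p_0 ... p_4
m = len(ps) - 1
mu = abs(ps[0] - ps[-1])              # total distinguishing advantage
i, gap = largest_adjacent_gap(ps)
print(i, round(gap, 2))               # the guaranteed gap is at least mu/m
```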
If $X^0_n$ and $X^m_n$ are distinguishable by $\frac{1}{p(n)}$, then $2$ inner "
Example:
For some Brian in Week 1 and Week 50, a distinguisher $\mathcal{D}$ outputs 1 if hair is considered "long".
There is some week $i$, $1\leq i\leq 50$, with $|p_{i-1}-p_i|\geq \frac{1}{50}=0.02$
$$
P[t\gets X_n:\mathcal{A}(t_1,t_2,...,t_i)=t_{i+1}]\leq \frac{1}{2}+\epsilon(n)
$$
We can build a distinguisher $\mathcal{D}$ from $\mathcal{A}$.
The converse is true!
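A minimal sketch of this reduction (the ensemble and predictor are invented for illustration): if $\mathcal{A}$ predicts bit $i+1$ from the first $i$ bits, then outputting 1 exactly when the prediction matches the sample's bit gives a distinguisher:

```python
import random

def distinguisher_from_predictor(A, i):
    """D(t) = 1 iff the predictor A guesses bit t[i] from the prefix t[:i]."""
    return lambda t: int(A(t[:i]) == t[i])

# Toy ensemble: bit i is the XOR of the first i bits (perfectly predictable).
n, i = 8, 4
def sample_x():
    prefix = [random.randint(0, 1) for _ in range(i)]
    return prefix + [sum(prefix) % 2] + [random.randint(0, 1) for _ in range(n - i - 1)]

A = lambda prefix: sum(prefix) % 2          # the next-bit predictor
D = distinguisher_from_predictor(A, i)

trials = 10_000
p_x = sum(D(sample_x()) for _ in range(trials)) / trials
p_u = sum(D([random.randint(0, 1) for _ in range(n)]) for _ in range(trials)) / trials
print(p_x, p_u)   # D accepts X_n always, but a uniform string only about half the time
```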
If $\{X_n\}$ were not pseudorandom, there is a distinguisher $\mathcal{D}$ such that
$$
|P[x\gets X_n:\mathcal{D}(x)=1]-P[u\gets U_{l(n)}:\mathcal{D}(u)=1]|=\mu(n)\geq \frac{1}{p(n)}
$$
By hybrid lemma, there is $i,1\leq i\leq l(n)$ where:
$$
|P[t\gets H^{i-1}:\mathcal{D}(t)=1]-P[t\gets H^i:\mathcal{D}(t)=1]|\geq \frac{1}{p(n)l(n)}=\frac{1}{poly(n)}
$$
$l(n)$ is the number of hybrid steps needed to transform $X_n$ into $U_{l(n)}$
Notice that adjacent hybrids differ in only one bit position, so only two bits are actually being distinguished in the procedure: the real $x_{i+1}$ versus a uniform bit.
$\mathcal{D}$ can distinguish $x_{i+1}$ from a truly random bit $u_{i+1}$, knowing the first $i$ bits $x_1\dots x_i$ came from $x\gets X_n$
So $\mathcal{D}$ can predict $x_{i+1}$ from $x_1\dots x_i$ (contradicting the assumption that $X$ passes the NBT)
EOP
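The hybrid distributions used in this proof can be written down directly; a sketch (with a placeholder source of $X_n$ samples, invented for illustration) is:

```python
import random

def sample_hybrid(sample_x, i, l):
    """H^i: first i bits taken from x <- X_n, the remaining l - i bits truly random.

    H^0 is the uniform distribution U_l and H^l is X_n itself, so a distinguisher
    for X_n vs U_l must separate some adjacent pair H^{i-1}, H^i.
    """
    x = sample_x()
    return x[:i] + [random.randint(0, 1) for _ in range(l - i)]

l = 8
sample_x = lambda: [1] * l   # degenerate stand-in for a PRG output, just to show the mixing
print(sample_hybrid(sample_x, l, l))       # H^l equals the X_n sample
print(len(sample_hybrid(sample_x, 0, l)))  # H^0 is l uniform bits
```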
$f(x)||x$
Not all bits of $x$ would be hard to predict.
**Hard-core bit:** One bit of information about $x$ which is hard to determine from $f(x)$. $P[\text{success}]\leq \frac{1}{2}+\epsilon(n)$
Which bit is hard-core depends on $f$.
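A classical instantiation (not covered in the excerpt above) is the Goldreich–Levin predicate: for any one-way $f$, the inner product $\langle x,r\rangle \bmod 2$ is a hard-core bit of $g(x,r)=f(x)\|r$. The predicate itself is trivial to compute:

```python
def gl_bit(x, r):
    """Goldreich-Levin predicate: inner product <x, r> mod 2 over bit vectors."""
    return sum(xi & ri for xi, ri in zip(x, r)) % 2

print(gl_bit([1, 0, 1, 1], [1, 1, 0, 1]))  # 1 + 0 + 0 + 1 = 2, so the bit is 0
```

The point is that the bit is easy given $x$ but, for a one-way $f$, computationally hidden given only $f(x)$ and $r$.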