upgrade structures and migrate to nextra v4

This commit is contained in:
Zheyuan Wu
2025-07-06 12:40:25 -05:00
parent 76e50de44d
commit 717520624d
317 changed files with 18143 additions and 22777 deletions

@@ -0,0 +1,114 @@
# Lecture 11
Exam info posted tonight.
## Chapter 3: Indistinguishability and pseudo-randomness
### Pseudo-randomness
Idea: **Efficiently** produce many bits
which "appear" truly random.
#### One-time pad
$m\in\{0,1\}^n$
$Gen(1^n):k\gets \{0,1\}^n$
$Enc_k(m)=m\oplus k$
$Dec_k(c)=c\oplus k$
Advantage: Perfectly secret
Disadvantage: Impractical (the key must be as long as the message)
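A minimal Python sketch of the one-time pad above, assuming messages are represented as bytes; the names `gen`, `enc`, `dec` are illustrative, not from the lecture:
```python
import secrets

def gen(n: int) -> bytes:
    """Gen(1^n): sample a uniformly random key k (here n bytes)."""
    return secrets.token_bytes(n)

def enc(k: bytes, m: bytes) -> bytes:
    """Enc_k(m) = m XOR k; the key must be as long as the message."""
    assert len(k) == len(m)
    return bytes(mi ^ ki for mi, ki in zip(m, k))

def dec(k: bytes, c: bytes) -> bytes:
    """Dec_k(c) = c XOR k; the same XOR, since k XOR k = 0."""
    return enc(k, c)

m = b"attack at dawn"
k = gen(len(m))
assert dec(k, enc(k, m)) == m  # correctness: Dec_k(Enc_k(m)) = m
```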
The goal of pseudo-randomness is to obtain an encryption scheme that is both computationally secure and practical.
Let $\{X_n\}$ be a sequence of distributions over $\{0,1\}^{l(n)}$, where $l(n)$ is a polynomial in $n$.
Such a sequence is called a **probability ensemble**.
Example:
Let $U_n$ be the uniform distribution over $\{0,1\}^n$
For all $x\in \{0,1\}^n$
$P[x\gets U_n]=\frac{1}{2^n}$
For $1\leq i\leq n$, $P[x_i=1]=\frac{1}{2}$
For $1\leq i<j\leq n,P[x_i=1 \textup{ and } x_j=1]=\frac{1}{4}$ (by independence of different bits.)
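A quick Monte Carlo check of these two probabilities (a sketch; the parameters `n` and `trials` are arbitrary choices, not from the lecture):
```python
import random

n, trials = 16, 100_000
samples = [[random.getrandbits(1) for _ in range(n)] for _ in range(trials)]

# P[x_i = 1] should be about 1/2 (checked here for i = 1)
p1 = sum(x[0] for x in samples) / trials
# P[x_i = 1 and x_j = 1] should be about 1/4 (checked for i = 1, j = 2)
p12 = sum(x[0] & x[1] for x in samples) / trials
print(p1, p12)  # roughly 0.5 and 0.25
```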
Let $\{X_n\}_n$ and $\{Y_n\}_n$ be probability ensembles (sequences of distributions over $\{0,1\}^{l(n)}$).
$\{X_n\}_n$ and $\{Y_n\}_n$ are computationally **indistinguishable** if for every non-uniform p.p.t. adversary $\mathcal{D}$ (a "distinguisher") there is a negligible function $\epsilon(n)$ such that
$$
|P[x\gets X_n:\mathcal{D}(x)=1]-P[y\gets Y_n:\mathcal{D}(y)=1]|<\epsilon(n)
$$
Intuitively, this means that no efficient algorithm can detect a pattern separating the two ensembles with more than negligible probability.
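As an illustration, here is a sketch of estimating a distinguisher's advantage by Monte Carlo sampling; the ensembles and the distinguisher below are made-up placeholders, not objects from the lecture:
```python
import random

def advantage(sample_x, sample_y, D, trials=100_000):
    """Estimate |P[x <- X_n : D(x)=1] - P[y <- Y_n : D(y)=1]| by sampling."""
    px = sum(D(sample_x()) for _ in range(trials)) / trials
    py = sum(D(sample_y()) for _ in range(trials)) / trials
    return abs(px - py)

# Placeholder ensembles over {0,1}^n: Y_n uniform, X_n with a biased first bit.
n = 8
Y = lambda: [random.getrandbits(1) for _ in range(n)]
X = lambda: [int(random.random() < 0.6)] + [random.getrandbits(1) for _ in range(n - 1)]
D = lambda t: t[0]  # distinguisher: output the first bit

print(advantage(X, Y, D))  # about 0.1, a non-negligible gap
```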
If there is a $\mathcal{D}$ such that
$$
|P[x\gets X_n:\mathcal{D}(x)=1]-P[y\gets Y_n:\mathcal{D}(y)=1]|\geq \mu(n)
$$
then $\mathcal{D}$ distinguishes the two ensembles with probability $\mu(n)$.
If $\mu(n)\geq\frac{1}{p(n)}$ for some polynomial $p(n)$ (i.e. $\mu$ is non-negligible), then $\mathcal{D}$ distinguishes the two, so $X_n\cancel{\approx} Y_n$.
### Prediction lemma
Let $X_n^0$ and $X_n^1$ be ensembles over $\{0,1\}^{l(n)}$.
Suppose there exists a distinguisher $\mathcal{D}$ which distinguishes them with probability $\geq \mu(n)$. Then there exists an adversary $\mathcal{A}$ such that
$$
P[b\gets\{0,1\};t\gets X_n^b:\mathcal{A}(t)=b]\geq \frac{1}{2}+\frac{\mu(n)}{2}
$$
Proof:
Without loss of generality, suppose
$$
P[t\gets X^1_n:\mathcal{D}(t)=1]-P[t\gets X_n^0:\mathcal{D}(t)=1]\geq \mu(n)
$$
Take $\mathcal{A}=\mathcal{D}$ (i.e., $\mathcal{A}$ outputs $1$ if and only if $\mathcal{D}$ outputs $1$, and $0$ otherwise). Then
$$
\begin{aligned}
&~~~~~P[b\gets \{0,1\};t\gets X_n^b:\mathcal{A}(t)=b]\\
&=P[t\gets X_n^1:\mathcal{A}(t)=1]\cdot P[b=1]+P[t\gets X_n^0:\mathcal{A}(t)=0]\cdot P[b=0]\\
&=\frac{1}{2}P[t\gets X_n^1:\mathcal{A}(t)=1]+\frac{1}{2}(1-P[t\gets X_n^0:\mathcal{A}(t)=1])\\
&=\frac{1}{2}+\frac{1}{2}(P[t\gets X_n^1:\mathcal{A}(t)=1]-P[t\gets X_n^0:\mathcal{A}(t)=1])\\
&\geq\frac{1}{2}+\frac{1}{2}\mu(n)
\end{aligned}
$$
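A numeric sketch of this reduction: the ensembles $X_n^0$, $X_n^1$ below are placeholders with an artificially biased first bit so that $\mathcal{D}$ has advantage about $\mu=0.2$, and the predictor $\mathcal{A}$ is just $\mathcal{D}$ itself.
```python
import random

n, trials = 8, 200_000

def sample(b):
    """Placeholder X_n^b: first bit is 1 with probability 0.5 + 0.2*b, rest uniform."""
    first = int(random.random() < 0.5 + 0.2 * b)
    return [first] + [random.getrandbits(1) for _ in range(n - 1)]

D = lambda t: t[0]   # distinguisher: output the first bit
A = D                # the lemma's adversary is D itself

# Estimate mu(n) = P[t <- X^1 : D(t)=1] - P[t <- X^0 : D(t)=1].
mu = (sum(D(sample(1)) for _ in range(trials))
      - sum(D(sample(0)) for _ in range(trials))) / trials

# Estimate P[b <- {0,1}; t <- X^b : A(t) = b].
hits = 0
for _ in range(trials):
    b = random.getrandbits(1)
    hits += A(sample(b)) == b

print(mu, hits / trials)  # prediction probability is about 1/2 + mu/2
```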
### Pseudo-random
$\{X_n\}$ over $\{0,1\}^{l(n)}$ is **pseudorandom** if $\{X_n\}\approx\{U_{l(n)}\}$, i.e. it is indistinguishable from true randomness.
Examples: building distinguishers
1. $X_n$: always outputs $0^n$; $\mathcal{D}$: [outputs $1$ if $t=0^n$] (see the code sketch after this list)
$$
\vert P[t\gets X_n:\mathcal{D}(t)=1]-P[t\gets U_n:\mathcal{D}(t)=1]\vert=1-\frac{1}{2^n}\approx 1
$$
2. $X_n$: the first $n-1$ bits are truly random ($\gets U_{n-1}$); the $n$th bit is $1$ with probability $0.50001$ and $0$ with probability $0.49999$. $\mathcal{D}$: [outputs $1$ if the $n$th bit $t_n=1$]
$$
\vert P[t\gets X_n:\mathcal{D}(t)=1]-P[t\gets U_n:\mathcal{D}(t)=1]\vert=0.50001-0.5=0.00001\neq 0
$$
3. $X_n$: each bit $x_i\gets\{0,1\}$ uniformly, **unless** there have been one million $0$'s in a row, in which case the next bit is $1$. $\mathcal{D}$: [outputs $1$ if $x_1=x_2=\cdots=x_{1000001}=0$]
$$
\vert P[t\gets X_n:\mathcal{D}(t)=1]-P[t\gets U_n:\mathcal{D}(t)=1]\vert=|0-\frac{1}{2^{1000001}}|\neq 0
$$
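A sketch of the first example's distinguisher in code, estimating its gap against the uniform ensemble (the parameters `n` and `trials` are arbitrary choices):
```python
import random

n, trials = 16, 100_000

X = lambda: [0] * n                            # X_n: always outputs 0^n
U = lambda: [random.getrandbits(1) for _ in range(n)]
D = lambda t: int(all(b == 0 for b in t))      # output 1 iff t = 0^n

pX = sum(D(X()) for _ in range(trials)) / trials  # exactly 1
pU = sum(D(U()) for _ in range(trials)) / trials  # about 1/2^n, almost always 0
print(abs(pX - pU))                               # roughly 1 - 1/2^n, close to 1
```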