update typo and structures

This commit is contained in:
Trance-0
2024-12-16 13:41:24 -06:00
parent ce830c9943
commit d471db49c4
24 changed files with 328 additions and 219 deletions

View File

@@ -1,42 +1,42 @@
# Lecture 1
> I changed all the elements of sets to lowercase letters. I don't know why $K$ was capitalized.
## Chapter 1: Introduction
## Alice sending information to Bob
### Alice sending information to Bob
Assuming _Eve_ can always listen
Rule 1. A Message is Encrypted to a Code, and Decrypted back to the original Message.
## Kerckhoffs' principle
### Kerckhoffs' principle
It states that the security of a cryptographic system shouldn't rely on the secrecy of the algorithm (Assuming Eve knows how everything works.)
**Security is due to the security of the key.**
## Private key encryption scheme
### Private key encryption scheme
Let $\mathcal{M}$ be the set of message that Alice will send to Bob. (The message space) "plaintext"
Let $M$ be the set of messages that Alice will send to Bob (the message space), the "plaintext".
Let $\mathcal{K}$ be the set of key that will ever be used. (The key space)
Let $K$ be the set of keys that will ever be used (the key space).
Let $Gen$ be the key generation algorithm.
$k\gets Gen(\mathcal{K})$
$k\gets Gen(K)$
$c\gets Enc_k(m)$ denotes cipher encryption.
$m'\gets Dec_k(c')$ $m'$ might be null for incorrect $c'$.
$Pr[K\gets \mathcal{K}:Dec_k(Enc_k(M))=m]=1$ The probability of decryption of encrypted message is original message is 1.
$P[k\gets K:Dec_k(Enc_k(m))=m]=1$ The probability that decrypting an encrypted message yields the original message is 1.
*_in some cases we can allow the probailty not be 1_
*_in some cases we can allow the probability not be 1_
## Some examples of crypto system
### Some examples of crypto system
Let $\mathcal{M}=$ {all five letter strings}.
Let $M=\text{all five letter strings}$.
And $\mathcal{K}=$ {1-$10^{10}$}
And $K=[1,10^{10}]$
Example:
@@ -48,13 +48,13 @@ $Dec_{1234567890}(brion1234567890)="brion"$
Seems not very secure, but it is a valid crypto system.
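Reading the example $Dec_{1234567890}(brion1234567890)="brion"$ literally, encryption appears to simply append the key's digits to the plaintext. A minimal sketch under that reading (the function names `gen`/`enc`/`dec` are illustrative, not from the notes):

```python
import random

# Hedged sketch of the toy scheme implied by the example
# Dec_{1234567890}(brion1234567890) = "brion": encryption simply
# appends the key's digits to the plaintext.

def gen() -> int:
    return random.randint(1, 10**10)      # k <- K = [1, 10^10]

def enc(k: int, m: str) -> str:
    return m + str(k)                     # the ciphertext contains m verbatim

def dec(k: int, c: str) -> str:
    suffix = str(k)
    if c.endswith(suffix):
        return c[:-len(suffix)]
    return None                           # null for an incorrect c'

k = gen()
assert dec(k, enc(k, "brion")) == "brion"  # Pr[Dec_k(Enc_k(m)) = m] = 1
```

Since the plaintext appears verbatim in the ciphertext, Eve learns everything, which is exactly why the scheme is valid but not secure.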
## Early attempts for crypto system.
### Early attempts for crypto system
### Caesar cipher
#### Caesar cipher
$\mathcal{M}=$ finite string of texts
$M=\text{finite string of texts}$
$\mathcal{K}=$ {1-26}
$K=[1,26]$
$Enc_k=[(i+k)\% 26\ \text{for}\ i \in m]=c$
@@ -68,11 +68,11 @@ def caesar_cipher_dec(s: str, k:int):
return ''.join([chr((ord(i)-ord('a')+26-k)%26+ord('a')) for i in s])
```
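Only the decryption half survives the hunk; the matching encryption in the same style would be the following sketch (the original `caesar_cipher_enc` body is my reconstruction):

```python
def caesar_cipher_enc(s: str, k: int):
    # shift each lowercase letter forward by k positions, wrapping mod 26
    return ''.join([chr((ord(i) - ord('a') + k) % 26 + ord('a')) for i in s])

# sanity check: shifting past 'z' wraps around
assert caesar_cipher_enc("xyz", 3) == "abc"
```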
### Substitution cipher
#### Substitution cipher
$\mathcal{M}=$ finite string of texts
$M=\text{finite string of texts}$
$\mathcal{K}=$ bijective linear transformations (for English alphabet, $|\mathcal{K}|=26!$)
$K=\text{set of all bijections (permutations) of the alphabet (for English alphabet, }|K|=26!\text{)}$
$Enc_k=[iK\ for\ i \in m]=c$
@@ -80,11 +80,11 @@ $Dec_k=[iK^{-1}\ for\ i \in c]$
Fails to frequency analysis
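A sketch of the scheme, modeling the key as a random permutation (bijection) of the alphabet; the helper names are mine, not from the notes:

```python
import random
import string

# Substitution cipher: the key is a random bijection of the
# 26-letter alphabet, so |K| = 26!.

def substitution_gen() -> dict:
    letters = list(string.ascii_lowercase)
    shuffled = letters[:]
    random.shuffle(shuffled)
    return dict(zip(letters, shuffled))   # k maps each letter bijectively

def substitution_enc(s: str, k: dict) -> str:
    return ''.join(k[ch] for ch in s)

def substitution_dec(s: str, k: dict) -> str:
    inv = {v: a for a, v in k.items()}    # invert the bijection: k^{-1}
    return ''.join(inv[ch] for ch in s)

k = substitution_gen()
assert substitution_dec(substitution_enc("brion", k), k) == "brion"
```

Letter frequencies pass through the substitution unchanged, which is why the scheme fails to frequency analysis.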
### Vigenere Cipher
#### Vigenere Cipher
$\mathcal{M}=$ finite string of texts
$M=\text{finite string of texts with length }m$
$\mathcal{K}=$ key phrase of a fixed length
$K=\{0,\dots,25\}^n$ (assuming English alphabet)
```python
def viginere_cipher_enc(s: str, k: List[int]):
@@ -106,6 +106,22 @@ def viginere_cipher_dec(s: str, k: List[int]):
return res
```
### One time pad
#### One time pad
Completely random string, sufficiently long.
$M=\text{finite string of texts with length }n$
$K=\{0,\dots,25\}^n$ (assuming English alphabet)
$Enc_k=m\oplus k$
$Dec_k=c\oplus k$ (here $\oplus$ denotes letter-wise addition/subtraction mod 26, as in the code below; over bit strings it is XOR)
```python
from typing import List

def one_time_pad_enc(s: str, k: List[int]):
    return ''.join([chr((ord(i)-ord('a')+k[j])%26+ord('a')) for j,i in enumerate(s)])

def one_time_pad_dec(s: str, k: List[int]):
    return ''.join([chr((ord(i)-ord('a')+26-k[j])%26+ord('a')) for j,i in enumerate(s)])
```

View File

@@ -1,20 +1,23 @@
# Lecture 10
## Continue
## Chapter 2: Computational Hardness
### Discrete Log Assumption
### Discrete Log Assumption (Assumption 52.2)
This is a collection of one-way functions
$$
p\gets \tilde\Pi_n(\textup{ safe primes }), p=2q+1
$$
$$
a\gets \mathbb{Z}^*_{p};g=a^2\ (\textup{make sure }g\neq 1)
$$
$$
f_{g,p}(x)=g^x\mod p
$$
$$
f:\mathbb{Z}_q\to \mathbb{Z}^*_p
$$
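The sampling procedure above can be sketched with a toy safe prime (the choice $p=23$ is mine, purely for illustration; real parameters are $n$-bit):

```python
import random

# Toy sampling from the discrete-log collection: safe prime p = 2q + 1.
p = 23
q = (p - 1) // 2            # q = 11, prime

a = random.randrange(1, p)  # a <- Z_p^*
g = pow(a, 2, p)            # g = a^2 mod p, a square
while g == 1:               # make sure g != 1
    a = random.randrange(1, p)
    g = pow(a, 2, p)

x = random.randrange(q)     # x <- Z_q
y = pow(g, x, p)            # the easy direction: f_{g,p}(x) = g^x mod p

# g generates the order-q subgroup of squares, so g^q = 1
assert g != 1 and pow(g, q, p) == 1
```

Computing $y$ is one modular exponentiation; recovering $x$ from $(g,p,y)$ is the conjectured-hard direction.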
@@ -35,7 +38,7 @@ $$
P[p,q\gets \Pi_n;N\gets p\cdot q;e\gets \mathbb{Z}_{\phi(N)}^*;y\gets \mathbb{Z}_N^*;x\gets \mathcal{A}(N,e,y):x^e=y\mod N]<\epsilon(n)
$$
#### Theorem RSA Algorithm
#### Theorem 53.2 (RSA Algorithm)
This is a collection of one-way functions
@@ -175,6 +178,15 @@ So the probability of B succeeds is equal to A succeeds, which $>\frac{1}{p(n)}$
Remaining question: Can $x$ be found without factoring $N$? $y=x^e\mod N$
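A toy instance of the RSA collection may help here; all concrete numbers below are my illustrative choices (real use needs large random primes):

```python
# Toy RSA instance with tiny primes.
p, q = 5, 11
N = p * q                   # public modulus, 55
phi = (p - 1) * (q - 1)     # phi(N) = 40
e = 3                       # e <- Z_phi^*: gcd(3, 40) = 1
d = pow(e, -1, phi)         # trapdoor: e^{-1} mod phi(N); needs p, q

x = 9
y = pow(x, e, N)            # easy direction: y = x^e mod N
assert pow(y, d, N) == x    # with the trapdoor, inverting is easy
```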
### One-way permutation (Definition 55.1)
A collection of functions $\mathcal{F}=\{f_i:D_i\to R_i\}_{i\in I}$ is a one-way permutation if
1. $\forall i,f_i$ is a permutation
2. $\mathcal{F}$ is a collection of one-way functions
_basically, a one-way permutation is a collection of one-way functions that map $\{0,1\}^n$ to $\{0,1\}^n$ bijectively._
### Trapdoor permutations
Idea: $f:D\to R$ is a one-way permutation.
@@ -196,4 +208,3 @@ $\mathcal{F}=\{f_i:D_i\to R_i\}_{i\in I}$
#### Theorem RSA is a trapdoor
The RSA collection is a collection of trapdoor permutations, with the factorization $(p,q)$ of $N$, or $\phi(N)$, as the trapdoor information for $f$.

View File

@@ -2,13 +2,15 @@
Exam info posted tonight.
## Pseudo-randomness
## Chapter 3: Indistinguishability and pseudo-randomness
### Pseudo-randomness
Idea: **Efficiently** produce many bits
which "appear" truly random.
### One-time pad
#### One-time pad
$m\in\{0,1\}^n$
@@ -42,29 +44,29 @@ For $1\leq i<j\leq n,P[x_i=1 \textup{ and } x_j=1]=\frac{1}{4}$ (by independence
Let $\{X_n\}_n$ and $\{Y_n\}_n$ be probability ensembles (sequences of distributions over $\{0,1\}^{l(n)}$)
$\{X_n\}_n$ and $\{Y_n\}_n$ are computationally **in-distinguishable** if for all non-uniform p.p.t adversary $D$ ("distinguishers")
$\{X_n\}_n$ and $\{Y_n\}_n$ are computationally **indistinguishable** if for all non-uniform p.p.t. adversaries $\mathcal{D}$ ("distinguishers")
$$
|P[x\gets X_n:D(x)=1]-P[y\gets Y_n:D(y)=1]|<\epsilon(n)
|P[x\gets X_n:\mathcal{D}(x)=1]-P[y\gets Y_n:\mathcal{D}(y)=1]|<\epsilon(n)
$$
this basically means that the probability of finding any pattern distinguishing the two ensembles is negligible.
If there is a $D$ such that
If there is a $\mathcal{D}$ such that
$$
|P[x\gets X_n:D(x)=1]-P[y\gets Y_n:D(y)=1]|\geq \mu(n)
|P[x\gets X_n:\mathcal{D}(x)=1]-P[y\gets Y_n:\mathcal{D}(y)=1]|\geq \mu(n)
$$
then $D$ is distinguishing with probability $\mu(n)$
then $\mathcal{D}$ is distinguishing with probability $\mu(n)$
If $\mu(n)\geq\frac{1}{p(n)}$, then $D$ is distinguishing the two $\implies X_n\cancel{\approx} Y_n$
If $\mu(n)\geq\frac{1}{p(n)}$, then $\mathcal{D}$ is distinguishing the two $\implies X_n\cancel{\approx} Y_n$
### Prediction lemma
$X_n^0$ and $X_n^1$ ensembles over $\{0,1\}^{l(n)}$
Suppose $\exists$ distinguisher $D$ which distinguish by $\geq \mu(n)$. Then $\exists$ adversary $\mathcal{A}$ such that
Suppose $\exists$ distinguisher $\mathcal{D}$ which distinguish by $\geq \mu(n)$. Then $\exists$ adversary $\mathcal{A}$ such that
$$
P[b\gets\{0,1\};t\gets X_n^b:\mathcal{A}(t)=b]\geq \frac{1}{2}+\frac{\mu(n)}{2}
@@ -75,7 +77,7 @@ Proof:
Without loss of generality, suppose
$$
P[t\gets X^1_n:D(t)=1]-P[t\gets X_n^0:D(t)=1]\geq \mu(n)
P[t\gets X^1_n:\mathcal{D}(t)=1]-P[t\gets X_n^0:\mathcal{D}(t)=1]\geq \mu(n)
$$
$\mathcal{A}=\mathcal{D}$ (outputs $1$ if and only if $\mathcal{D}$ outputs $1$, otherwise $0$.)
@@ -98,15 +100,15 @@ Example:
Building distinguishers
1. $X_n$: always outputs $0^n$, $D$: [outputs $1$ if $t=0^n$]
1. $X_n$: always outputs $0^n$, $\mathcal{D}$: [outputs $1$ if $t=0^n$]
$$
\vert P[t\gets X_n:D(t)=1]-P[t\gets U_n:D(t)=1]\vert=1-\frac{1}{2^n}\approx 1
\vert P[t\gets X_n:\mathcal{D}(t)=1]-P[t\gets U_n:\mathcal{D}(t)=1]\vert=1-\frac{1}{2^n}\approx 1
$$
2. $X_n$: the first $n-1$ bits are truly random $\gets U_{n-1}$, the $n$th bit is $1$ with probability 0.50001 and $0$ with probability 0.49999, $\mathcal{D}$: [outputs $1$ if the $n$th bit of $t$ is $1$]
$$
\vert P[t\gets X_n:D(t)=1]-P[t\gets U_n:D(t)=1]\vert=0.5001-0.5=0.001\neq 0
\vert P[t\gets X_n:\mathcal{D}(t)=1]-P[t\gets U_n:\mathcal{D}(t)=1]\vert=0.50001-0.5=0.00001\neq 0
$$
3. $X_n$: each bit $x_i\gets\{0,1\}$ **unless** there have been 1 million $0$'s in a row, in which case it outputs $1$. $\mathcal{D}$: [outputs $1$ if $x_1=x_2=...=x_{1000001}=0$]
$$
\vert P[t\gets X_n:D(t)=1]-P[t\gets U_n:D(t)=1]\vert=|0-\frac{1}{2^{1000001}}|\neq 0
\vert P[t\gets X_n:\mathcal{D}(t)=1]-P[t\gets U_n:\mathcal{D}(t)=1]\vert=|0-\frac{1}{2^{1000001}}|\neq 0
$$
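Example 1 above can be checked empirically; a hedged Monte Carlo sketch (the choices $n=8$ and 10,000 trials are arbitrary):

```python
import random

# X_n always outputs 0^n, U_n is uniform, and the distinguisher
# outputs 1 iff t = 0^n.  The estimated gap should be close
# to 1 - 1/2^n (here about 0.996 for n = 8).
random.seed(0)
n, trials = 8, 10_000

def distinguisher(t: str) -> int:
    return 1 if t == '0' * n else 0

p_X = sum(distinguisher('0' * n) for _ in range(trials)) / trials
p_U = sum(distinguisher(''.join(random.choice('01') for _ in range(n)))
          for _ in range(trials)) / trials
gap = abs(p_X - p_U)
assert gap > 0.9            # advantage is about 1 - 1/256
```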

View File

@@ -1,11 +1,11 @@
# Lecture 12
## Continue on pseudo-randomness
## Chapter 3: Indistinguishability and Pseudorandomness
$\{X_n\}$ and $\{Y_n\}$ are distinguishable by $\mu(n)$ if $\exists$ distinguisher $D$
$\{X_n\}$ and $\{Y_n\}$ are distinguishable by $\mu(n)$ if $\exists$ distinguisher $\mathcal{D}$
$$
|P[x\gets X_n:D(x)=1]-P[y\gets Y_n:D(y)=1]|\geq \mu(n)
|P[x\gets X_n:\mathcal{D}(x)=1]-P[y\gets Y_n:\mathcal{D}(y)=1]|\geq \mu(n)
$$
- If $\mu(n)\geq \frac{1}{p(n)}$ for some polynomial $p(n)$, for infinitely many $n$, then $\{X_n\}$ and $\{Y_n\}$ are distinguishable.
@@ -19,15 +19,15 @@ If $\{X_n\}\approx\{Y_n\}$, then so are $\{M(X_n)\}\approx\{M(Y_n)\}$
Proof:
If $D$ distinguishes $M(X_n)$ and $M(Y_n)$ by $\mu(n)$ then $D(M(\cdot))$ is also a polynomial-time distinguisher of $X_n,Y_n$.
If $\mathcal{D}$ distinguishes $M(X_n)$ and $M(Y_n)$ by $\mu(n)$ then $\mathcal{D}(M(\cdot))$ is also a polynomial-time distinguisher of $X_n,Y_n$.
### Hybrid Lemma
Let $X^0_n,X^1_n,\dots,X^m_n$ be ensembles indexed from $0,\dots,m$
If $D$ distinguishes $X_n^0$ and $X_n^m$ by $\mu(n)$, then $\exists i,1\leq i\leq m$ where $X_{n}^{i-1}$ and $X_n^i$ are distinguished by $D$ by $\frac{\mu(n)}{m}$
If $\mathcal{D}$ distinguishes $X_n^0$ and $X_n^m$ by $\mu(n)$, then $\exists i,1\leq i\leq m$ where $X_{n}^{i-1}$ and $X_n^i$ are distinguished by $\mathcal{D}$ by $\frac{\mu(n)}{m}$
Proof: (we use triangle inequality.) Let $p_i=P[t\gets X_n^i:D(t)=1],0\leq i\leq m$. We have $|p_0-p_m|\geq m(n)$
Proof: (we use the triangle inequality.) Let $p_i=P[t\gets X_n^i:\mathcal{D}(t)=1],0\leq i\leq m$. We have $|p_0-p_m|\geq \mu(n)$
Using telescoping tricks:
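The telescoping step (with $p_i$ as defined just above) can be written out as:

$$
\mu(n)\leq |p_0-p_m|=\left|\sum_{i=1}^{m}(p_{i-1}-p_i)\right|\leq \sum_{i=1}^{m}|p_{i-1}-p_i|
$$

so at least one term must satisfy $|p_{i-1}-p_i|\geq \frac{\mu(n)}{m}$, which gives the claimed index $i$.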
@@ -46,7 +46,7 @@ If $X^0_n$ and $X^m_n$ are distinguishable by $\frac{1}{p(n)}$, then $2$ inner "
Example:
For some Brian in Week 1 and Week 50, a distinguisher $D$ outputs 1 if hair is considered "long".
For some Brian in Week 1 and Week 50, a distinguisher $\mathcal{D}$ outputs 1 if hair is considered "long".
There is some week $i,1\leq i\leq 50$, where $|p_{i-1}-p_i|\geq 0.02$
@@ -74,7 +74,7 @@ $$
P[t\gets X_n:\mathcal{A}(t_1,t_2,...,t_i)=t_{i+1}]\leq \frac{1}{2}+\epsilon(n)
$$
We can build a distinguisher $D$ from $\mathcal{A}$.
We can build a distinguisher $\mathcal{D}$ from $\mathcal{A}$.
The converse is true!
@@ -95,13 +95,13 @@ $$
If $\{X_n\}$ were not pseudorandom, there is a $D$
$$
|P[x\gets X_n:D(x)=1]-P[u\gets U_{l(n)}:D(u)=1]|=\mu(n)\geq \frac{1}{p(n)}
|P[x\gets X_n:\mathcal{D}(x)=1]-P[u\gets U_{l(n)}:\mathcal{D}(u)=1]|=\mu(n)\geq \frac{1}{p(n)}
$$
By hybrid lemma, there is $i,1\leq i\leq l(n)$ where:
$$
|P[t\gets H^{i-1}:D(t)=1]-P[t\gets H^i:D(t)=1]|\geq \frac{1}{p(n)l(n)}=\frac{1}{poly(n)}
|P[t\gets H^{i-1}:\mathcal{D}(t)=1]-P[t\gets H^i:\mathcal{D}(t)=1]|\geq \frac{1}{p(n)l(n)}=\frac{1}{poly(n)}
$$
$l(n)$ is the number of steps we need to take to transform $X$ to $X^n$
@@ -115,9 +115,9 @@ $$
notice that only two bits are distinguished in the procedure.
D can distinguish $x_{i+1}$ from a truly random $U_{i+1}$, knowing the first $i$ bits $x_i\dots x_i$ came from $x\gets x_n$
$\mathcal{D}$ can distinguish $x_{i+1}$ from a truly random $U_{i+1}$, knowing the first $i$ bits $x_1\dots x_i$ came from $x\gets X_n$
So $D$ can predict $x_{i+1}$ from $x_1\dots x_i$ (contradicting with that $X$ passes NBT)
So $\mathcal{D}$ can predict $x_{i+1}$ from $x_1\dots x_i$ (contradicting with that $X$ passes NBT)
EOP
@@ -147,6 +147,6 @@ $f(x)||x$
Not all bits of $x$ would be hard to predict.
**Hard-core bit:** One bit of information about $x$ which is hard to determine from $f(x)$. $P[$ success $]\leq \frac{1}{2}+\epsilon(n)$
**Hard-core bit:** One bit of information about $x$ which is hard to determine from $f(x)$. $P[\text{success}]\leq \frac{1}{2}+\epsilon(n)$
Depends on $f(x)$

View File

@@ -1,6 +1,10 @@
# Lecture 13
## Pseudorandom Generator (PRG)
## Chapter 3: Indistinguishability and Pseudorandomness
### Pseudorandom Generator (PRG)
#### Definition 77.1 (Pseudorandom Generator)
$G:\{0,1\}^n\to\{0,1\}^{l(n)}$ is a pseudorandom generator if the following is true:
@@ -8,7 +12,7 @@ $G:\{0,1\}^n\to\{0,1\}^{l(n)}$ is a pseudorandom generator if the following is t
2. $l(n)> n$ (expansion)
3. $\{x\gets \{0,1\}^n:G(x)\}_n\approx \{u\gets \{0,1\}^{l(n)}\}$
### Hard-core bit (predicate) (HCB)
#### Definition 78.3 (Hard-core bit (predicate) (HCB))
Hard-core bit (predicate) (HCB): $h:\{0,1\}^n\to \{0,1\}$ is a hard-core bit of $f:\{0,1\}^n\to \{0,1\}^*$ if for every adversary $\mathcal{A}$,
@@ -131,7 +135,7 @@ $G'$ is a PRG:
1. Efficiently computable: since we are computing $G'$ by applying $G$ multiple times (polynomial of $l(n)$ times).
2. Expansion: $n<l(n)$.
3. Pseudorandomness: We proceed by contradiction. Suppose the output is not pseudorandom. Then there exists a distinguisher $D$ that can distinguish $G'$ from $U_{l(n)}$ with advantage $\frac{1}{2}+\epsilon(n)$.
3. Pseudorandomness: We proceed by contradiction. Suppose the output is not pseudorandom. Then there exists a distinguisher $\mathcal{D}$ that can distinguish $G'$ from $U_{l(n)}$ with advantage at least $\frac{1}{p(n)}$ for some polynomial $p$.
Strategy: use hybrid argument to construct distributions.
@@ -145,9 +149,9 @@ H^{l(n)}&=b_1b_2\cdots b_{l(n)}
\end{aligned}
$$
By the hybrid argument, there exists an $i$ such that $D$ can distinguish $H^i$ and $H^{i+1}$ $0\leq i\leq l(n)-1$ by $\frac{1}{p(n)l(n)}$
By the hybrid argument, there exists an $i$ such that $\mathcal{D}$ can distinguish $H^i$ and $H^{i+1}$ $0\leq i\leq l(n)-1$ by $\frac{1}{p(n)l(n)}$
Show that there exists $D$ for
Show that there exists $\mathcal{D}$ for
$$
\{u\gets U_{n+1}\}\text{ vs. }\{x\gets U_n:G(x)\}

View File

@@ -21,7 +21,7 @@ Back to the experiment we did long time ago:
So Group 1 is human, Group 2 is computer.
## New material
## Chapter 3: Indistinguishability and Pseudorandomness
### Computationally secure encryption

View File

@@ -1,6 +1,8 @@
# Lecture 15
## Random Function
## Chapter 3: Indistinguishability and Pseudorandomness
### Random Function
$F:\{0,1\}^n\to \{0,1\}^n$

View File

@@ -1,6 +1,6 @@
# Lecture 16
## Continue on PRG
## Chapter 3: Indistinguishability and Pseudorandomness
PRG exists $\implies$ Pseudorandom function family exists.
@@ -49,13 +49,13 @@ Pseudo random function family exists $\implies$
Multi-message secure encryption exists.
## Public key cryptography
### Public key cryptography
1970s.
The goal was to agree/share a key without meeting in advance
### Diffie-Helmann Key exchange
#### Diffie-Hellman Key exchange
A and B create a secret key together without meeting.
@@ -75,7 +75,7 @@ And Alice do $(g^b)^a$ where Bob do $(g^a)^b$.
With $g^a,g^b$ no one can compute $g^{ab}$.
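The exchange above can be run concretely; the parameters $p=23$ and generator $g=5$ are my small illustrative choices (real deployments use large groups):

```python
import random

# Toy run of the Diffie-Hellman key exchange.
random.seed(0)
p, g = 23, 5

a = random.randrange(1, p - 1)   # Alice's secret exponent
b = random.randrange(1, p - 1)   # Bob's secret exponent

A = pow(g, a, p)                 # Alice sends g^a mod p
B = pow(g, b, p)                 # Bob sends g^b mod p

# each side raises the other's public value to its own secret
key_alice = pow(B, a, p)         # (g^b)^a
key_bob = pow(A, b, p)           # (g^a)^b
assert key_alice == key_bob == pow(g, a * b, p)
```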
### Public key encryption scheme
#### Public key encryption scheme
Ideas: The recipient Bob distributes opened Bob-locks
@@ -90,12 +90,12 @@ Public-key encryption scheme:
Let $A, E$ knows $pk$ not $sk$ and $B$ knows $pk,sk$.
Adversary can now encypt any message $m$ with the public key.
Adversary can now encrypt any message $m$ with the public key.
- Perfect secrecy impossible
- Randomness necessary
Security of public key
#### Security of public key
$\forall$ n.u.p.p.t. $\mathcal{D}$, $\exists$ negligible $\epsilon(n)$ such that $\forall n$, $m_0,m_1\in \{0,1\}^n$
@@ -113,7 +113,9 @@ We will achieve security in sending a single bit $0,1$
Time for trapdoor permutations (e.g. RSA).
Encryption Scheme: Given family of trapdoor permutation $\{f_i\}$ with hardcore bit $h(i)$
#### Encryption Scheme via Trapdoor Permutation
Given a family of trapdoor permutations $\{f_i\}$ with hard-core bit $h_i$
$Gen(1^n):(f_i,f_i^{-1})$, where computing $f_i^{-1}$ uses the trapdoor information $t$

View File

@@ -1,6 +1,6 @@
# Lecture 17
## Strength through Truth
## Chapter 3: Indistinguishability and Pseudorandomness
### Public key encryption scheme (1-bit)
@@ -90,7 +90,7 @@ $Dec_{sk}:r_k=f_i^{-1}(y_k),h_i(r_k)\oplus c_k=m_k$
### Special public key cryptosystem: El-Gamal (based on Diffie-Hellman Assumption)
#### Definition: Decisional Diffie-Hellman Assumption (DDH)
#### Definition 105.1 (Decisional Diffie-Hellman Assumption (DDH))
> Define the group of squares mod $p$ as follows:
>
@@ -104,7 +104,7 @@ $\{p\gets \tilde{\Pi_n};y\gets Gen_q;a,b\gets \mathbb{Z}_q:(p,y,y^a,y^b,y^{ab})\
$\{p\gets \tilde{\Pi_n};y\gets Gen_q;a,b,\bold{z}\gets \mathbb{Z}_q:(p,y,y^a,y^b,y^\bold{z})\}_n$
> Diffie-Hellman Assumption:
> (Computational) Diffie-Hellman Assumption:
>
> Hard to compute $y^{ab}$ given $p,y,y^a,y^b$.
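El-Gamal itself can be sketched in the group of squares mod a safe prime; the tiny parameters below are my illustrative choices, with names following the DDH notation above:

```python
import random

# Hedged sketch of El-Gamal over the squares mod p = 23 (q = 11).
random.seed(1)
p, q = 23, 11
y = 4                       # a square (2^2); generates the order-q subgroup

a = random.randrange(1, q)  # secret key
pk = pow(y, a, p)           # public key y^a mod p

m = 9                       # message, itself a square mod 23
b = random.randrange(1, q)  # fresh randomness for this encryption
c1 = pow(y, b, p)           # y^b
c2 = (m * pow(pk, b, p)) % p   # m * y^{ab}

shared = pow(c1, a, p)                 # recompute y^{ab} with the secret key
m_dec = (c2 * pow(shared, -1, p)) % p  # divide out y^{ab}
assert m_dec == m
```

Security rests on exactly the DDH statement above: the pad $y^{ab}$ looks like $y^z$ for random $z$.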

View File

@@ -103,7 +103,7 @@ $$
#### Security of Digital Signature
$$
\Pr[(pk,sk)\gets Gen(1^k); (m, \sigma)\gets\mathcal{A}^{Sign_{sk}(\cdot)}(1^k);\mathcal{A}\textup{ did not query }m \textup{ and } Ver_{pk}(m, \sigma)=\textup{``Accept''}]<\epsilon(n)
P[(pk,sk)\gets Gen(1^k); (m, \sigma)\gets\mathcal{A}^{Sign_{sk}(\cdot)}(1^k);\mathcal{A}\textup{ did not query }m \textup{ and } Ver_{pk}(m, \sigma)=\textup{``Accept''}]<\epsilon(n)
$$
For all n.u.p.p.t. adversary $\mathcal{A}$ with oracle access to $Sign_{sk}(\cdot)$.

View File

@@ -2,13 +2,13 @@
## Probability review
Sample space $S=$ set of outcomes (possible results of experiments)
Sample space $S=\text{set of outcomes (possible results of experiments)}$
Event $A\subseteq S$
$P[A]=P[$ outcome $x\in A]$
$P[\{x\}]=P(x)$
$P[\{x\}]=P[x]$
Conditional probability:
@@ -32,27 +32,27 @@ $A=\bigcup_{i=1}^n A\cap B_i$ ($A\cap B_i$ are all disjoint)
$P[A]=\sum^n_{i=1} P[A|B_i]\cdot P[B_i]$
## Back to cryptography
## Chapter 1: Introduction
Defining security.
### Defining security
### Perfect Secrecy (Shannon Secrecy)
#### Perfect Secrecy (Shannon Secrecy)
$K\gets Gen()$ $K\in\mathcal{K}$
$k\gets Gen()$ $k\in K$
$c\gets Enc_K(m)$ or we can also write as $c\gets Enc(K,m)$ for $m\in \mathcal{M}$
$c\gets Enc_k(m)$ or we can also write as $c\gets Enc(k,m)$ for $m\in M$
And the decryption procedure:
$m'\gets Dec_K(c')$, $m'$ might be null.
$m'\gets Dec_k(c')$, $m'$ might be null.
$P[K\gets Gen(): Dec_K(Enc_K(m))=m]=1$
$P[k\gets Gen(): Dec_k(Enc_k(m))=m]=1$
#### Shannon Secrecy
#### Definition 11.1 (Shannon Secrecy)
Distribution $D$ over the message space $\mathcal{M}$
Distribution $D$ over the message space $M$
$P[K\gets Gen;m\gets D: m=m'|c\gets Enc_K(m)]=P[m\gets D: m=m']$
$P[k\gets Gen;m\gets D: m=m'|c\gets Enc_k(m)]=P[m\gets D: m=m']$
Basically, we cannot gain any information from the encoded message.
@@ -60,15 +60,15 @@ Code shall not contain any information changing the distribution of expectation
**NO INFO GAINED**
#### Perfect Secrecy
#### Definition 11.2 (Perfect Secrecy)
For any 2 messages, say $m_1,m_2\in \mathcal{M}$ and for any possible cipher $c$,
For any 2 messages, say $m_1,m_2\in M$ and for any possible cipher $c$,
$P[K\gets Gen:c\gets Enc_K(m_1)]=P[K\gets Gen():c\gets Enc_K(m_2)]$
$P[k\gets Gen:c\gets Enc_k(m_1)]=P[k\gets Gen():c\gets Enc_k(m_2)]$
For a fixed $c$, any message could be encrypted to that...
For a fixed $c$, any message could (with equal probability) have been encrypted to that ciphertext...
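This condition can be checked by brute force on a toy scheme; the one-letter shift cipher with a uniform key is my choice of illustration, not from the notes:

```python
# For every ciphertext c, each message has exactly the same number of
# keys mapping it to c, i.e. equal probability -- perfect secrecy on
# one-letter messages with k <- {0,...,25} uniform.
def shift_enc(k: int, m: int) -> int:
    return (m + k) % 26

for c in range(26):
    counts = [sum(1 for k in range(26) if shift_enc(k, m) == c)
              for m in range(26)]
    assert len(set(counts)) == 1      # P[Enc_k(m1)=c] = P[Enc_k(m2)=c]
```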
#### Theorem
#### Theorem 12.3
Shannon secrecy is equivalent to perfect secrecy.
@@ -76,22 +76,22 @@ Proof:
If a crypto-system satisfies perfect secrecy, then it also satisfies Shannon secrecy.
Let $(Gen, Enc,Dec)$ be a perfectly secret crypto-system with $\mathcal{K}$ and $\mathcal{M}$.
Let $(Gen,Enc,Dec)$ be a perfectly secret crypto-system with $K$ and $M$.
Let $D$ be any distribution over messages.
Let $m'\in \mathcal{M}$.
Let $m'\in M$.
$$
={P_K[c\gets Enc_K(m')]\cdot P[m=m']\over P_{K,m}[c\gets Enc_K(m)]}\\
={P_k[c\gets Enc_k(m')]\cdot P[m=m']\over P_{k,m}[c\gets Enc_k(m)]}\\
$$
$$
P[K\gets Gen();m\gets D:m=m'|c\gets Enc_K(m)]={P_{K,m}[c\gets Enc_K(m)\vert m=m']\cdot P[m=m']\over P_{K,m}[c\gets Enc_K(m)]}\\
P_{K,m}[c\gets Enc_K(m)]=\sum^n_{i=1}P_{K,m}[c\gets Enc_k(m)|m=m_i]\cdot P[m=m_i]\\
P[k\gets Gen();m\gets D:m=m'|c\gets Enc_k(m)]={P_{k,m}[c\gets Enc_k(m)\vert m=m']\cdot P[m=m']\over P_{k,m}[c\gets Enc_k(m)]}\\
P_{k,m}[c\gets Enc_k(m)]=\sum^n_{i=1}P_{k,m}[c\gets Enc_k(m)|m=m_i]\cdot P[m=m_i]\\
=\sum^n_{i=1}P_{k,m_i}[c\gets Enc_k(m_i)]\cdot P[m=m_i]
$$
and $P_{K,m_i}[c\gets Enc_K(m_i)]$ is constant due to perfect secrecy
and $P_{k,m_i}[c\gets Enc_k(m_i)]$ is constant due to perfect secrecy
$\sum^n_{i=1}P_{K,m_i}[c\gets Enc_K(m_i)]\cdot P[m=m_i]=\sum^n_{i=1} P[m=m_i]=1$
$\sum^n_{i=1}P_{k,m_i}[c\gets Enc_k(m_i)]\cdot P[m=m_i]=P_k[c\gets Enc_k(m')]\cdot\sum^n_{i=1} P[m=m_i]=P_k[c\gets Enc_k(m')]$

View File

@@ -1,6 +1,8 @@
# Lecture 20
## Construction of CRHF (Collision Resistant Hash Function)
## Chapter 5: Authentication
### Construction of CRHF (Collision Resistant Hash Function)
Let $h: \{0, 1\}^{n+1} \to \{0, 1\}^n$ be a CRHF.
@@ -119,7 +121,7 @@ Case 2: $h_i(m_1)\neq h_i(m_2)$, Then $\mathcal{A}$ produced valid signature on
EOP
## Many-time Secure Digital Signature
### Many-time Secure Digital Signature
Using one-time secure digital signature scheme on $\{0,1\}^*$ to construct many-time secure digital signature scheme on $\{0,1\}^*$.

View File

@@ -1,6 +1,6 @@
# Lecture 21
## Authentication
## Chapter 5: Authentication
### Digital Signature Scheme

View File

@@ -1,6 +1,6 @@
# Lecture 22
## Chapter 7: Types of Attacks
## Chapter 7: Composability
So far we've sought security against

View File

@@ -1,12 +1,14 @@
# Lecture 23
## Zero-knowledge proofs
## Chapter 7: Composability
### Zero-knowledge proofs
Let Peggy be the Prover and Victor the Verifier.
Peggy wants to prove to Victor that she knows a secret $x$ without revealing anything about $x$. (e.g. $x$ such that $g^x=y\mod p$)
### Zero-knowledge proofs protocol
#### Zero-knowledge proofs protocol
The protocol should satisfy the following properties:

View File

@@ -1,6 +1,8 @@
# Lecture 24
## Continue on zero-knowledge proof
## Chapter 7: Composability
### Continue on zero-knowledge proof
Let $X=(G_0,G_1)$ and let $y=\sigma$ be a permutation such that $\sigma(G_0)=G_1$.

View File

@@ -4,7 +4,9 @@ All algorithms $C(x)\to y$, $x,y\in \{0,1\}^*$
P.P.T= Probabilistic Polynomial-time Turing Machine.
## Turing Machine: Mathematical model for a computer program
## Chapter 2: Computational Hardness
### Turing Machine: Mathematical model for a computer program
A machine that can:
@@ -16,7 +18,7 @@ A machine that can:
Anything that can be accomplished by a real computer program can be accomplished by a "sufficiently complicated" Turing Machine (TM).
## Polynomial time
### Polynomial time
We say $C(x),|x|=n,n\to \infty$ runs in polynomial time if it uses at most $T(n)$ operations, bounded by some polynomial: $\exists c>0$ such that $T(n)=O(n^c)$
@@ -28,29 +30,28 @@ $p(n)+q(n),p(n)q(n),p(q(n))$ are polynomial of $n$.
Polynomial-time $\approx$ "efficient" for this course.
## Probabilistic
### Probabilistic
Our algorithms have access to random "coin-flips"; we can produce $poly(n)$ random bits.
$P[C(x)$ takes at most $T(n)$ steps $]=1$
$P[C(x)\text{ takes at most }T(n)\text{ steps }]=1$
Our adversary $\mathcal{A}(x)$ will be a P.P.T. which is non-uniform (n.u.): the program's description size can grow polynomially in $n$.
## Efficient private key encryption scheme
### Efficient private key encryption scheme
$m=\{0,1\}^n$
#### Definition 3.2 (Efficient private key encryption scheme)
$Gen(1^n)$ p.p.t output $k\in \mathcal{K}$
The triple $(Gen,Enc,Dec)$ is an efficient private key encryption scheme over the message space $M$ and key space $K$ if:
$Enc_k(m)$ p.p.t outputs $c$
1. $Gen(1^n)$ is a randomized p.p.t that outputs $k\in K$
2. $Enc_k(m)$ is a potentially randomized p.p.t that outputs $c$ given $m\in M$
3. $Dec_k(c')$ is a deterministic p.p.t that outputs $m$ or "null"
4. $P_k[Dec_k(Enc_k(m))=m]=1,\forall m\in M$
$Dec_k(c')$ p.p.t outputs $m$ or "null"
### Negligible function
$P_k[Dec_k(Enc_k(m))=m]=1$
## Negligible function
$\epsilon:\mathbb{N}\to \mathbb{R}$ is a negligible function if $\forall c>0$, $\exists N\in\mathbb{N}$ such that $\forall n\geq N, \epsilon(n)<\frac{1}{n^c}$
$\epsilon:\mathbb{N}\to \mathbb{R}$ is a negligible function if $\forall c>0$, $\exists N\in\mathbb{N}$ such that $\forall n\geq N, \epsilon(n)<\frac{1}{n^c}$ (looks like definition of limits huh) (Definition 27.2)
Idea: for any polynomial, even $n^{100}$, in the long run $\epsilon(n)\leq \frac{1}{n^{100}}$
@@ -58,7 +59,7 @@ Example: $\epsilon (n)=\frac{1}{2^n}$, $\epsilon (n)=\frac{1}{n^{\log (n)}}$
Non-example: $\epsilon (n)=\frac{1}{n^c}$ for a fixed $c$
## One-way function
### One-way function
Idea: We are always okay with our chance of failure being negligible.
@@ -66,21 +67,19 @@ Foundational concept of cryptography
Goal: making $Enc_k(m),Dec_k(c')$ easy, and recovering $m$ from $c$ without $k$ hard.
### Strong one-way function
#### Definition: Strong one-way function
#### Definition 27.3 (Strong one-way function)
$$
f:\{0,1\}^n\to \{0,1\}^*(n\to \infty)
$$
There is a negligible function $\epsilon (n)$ such that for any adversary $a$ (n.u.p.p.t)
There is a negligible function $\epsilon (n)$ such that for any adversary $\mathcal{A}$ (n.u.p.p.t)
$$
P[x\gets\{0,1\}^n;y=f(x):f(a(y))=y,a(y)=x']\leq\epsilon(n)
P[x\gets\{0,1\}^n;y=f(x):f(\mathcal{A}(y))=y]\leq\epsilon(n)
$$
_Probability of guessing correct message is negligible_
_Probability of guessing a message $x'$ with the same output as the correct message $x$ is negligible_
and
@@ -95,11 +94,11 @@ Example: Suppose $f$ is one-to-one, then $a$ must find our $x$, $P[x'=x]=\frac{1
Why do we allow $\mathcal{A}$ to get a different $x'$?
> Suppose the definition is $P[x\gets\{0,1\}^n;y=f(x):a(y)=x]\neq\epsilon(n)$, then a trivial function $f(x)=x$ would also satisfy the definition.
> Suppose the definition is $P[x\gets\{0,1\}^n;y=f(x):\mathcal{A}(y)=x]\neq\epsilon(n)$, then a trivial function $f(x)=x$ would also satisfy the definition.
To be technically fair, $a(y)=a(y,1^n)$, size of input $\approx n$, let them use $poly(n)$ operations.
To be technically fair, $\mathcal{A}(y)=\mathcal{A}(y,1^n)$: the input size is $\approx n$, and we let $\mathcal{A}$ use $poly(n)$ operations. (We also tell $\mathcal{A}$ that the input size is $n$.)
### Do one-way function exists?
#### Do one-way function exists?
Unknown, actually...
@@ -107,7 +106,9 @@ But we think so!
We will need to use various assumptions, ones that we believe very strongly based on evidence/experience.
Ex. $p,q$ are large random primes
Example:
$p,q$ are large random primes
$N=p\cdot q$

View File

@@ -4,52 +4,59 @@
Negligible function $\epsilon(n)$: $\forall c>0,\exists N$ such that $\forall n>N$, $\epsilon (n)<\frac{1}{n^c}$
Ex: $\epsilon(n)=2^{-n},\epsilon(n)=\frac{1}{n^{\log (\log n)}}$
Example:
### Strong One-Way Function
$\epsilon(n)=2^{-n},\epsilon(n)=\frac{1}{n^{\log (\log n)}}$
## Chapter 2: Computational Hardness
### One-way function
#### Strong One-Way Function
1. $\exists$ a P.P.T. that computes $f(x),\forall x\in\{0,1\}^n$
2. $\forall a$ adversaries, $\exists \epsilon(n),\forall n$.
2. $\forall \mathcal{A}$ adversaries, $\exists \epsilon(n),\forall n$.
$$
P[x\gets \{0,1\}^n;y=f(x):f(a(y,1^n))=y]<\epsilon(n)
P[x\gets \{0,1\}^n;y=f(x):f(\mathcal{A}(y,1^n))=y]<\epsilon(n)
$$
_That is, the probability of success guessing should decreasing as encrypted message increase..._
_That is, the probability of successful guessing should decrease (exponentially) as the encrypted message length increases (linearly)..._
To negate statement 2:
$$
P[x\gets \{0,1\}^n;y=f(x):f(a(y,1^n))=y]=\mu_a(n)
P[x\gets \{0,1\}^n;y=f(x):f(\mathcal{A}(y,1^n))=y]=\mu(n)
$$
is a negligible function.
Negation:
$\exists a$, $P[x\gets \{0,1\}^n;y=f(x):f(a(y,1^n))=y]=\mu_a(n)$ is not a negligible function.
$\exists \mathcal{A}$, $P[x\gets \{0,1\}^n;y=f(x):f(\mathcal{A}(y,1^n))=y]=\mu(n)$ is not a negligible function.
That is, $\exists c>0$ such that $\forall N, \exists n>N$ with $\mu(n)>\frac{1}{n^c}$
$\mu_a(n)>\frac{1}{n^c}$ for infinitely many $n$. or infinitely often.
$\mu(n)>\frac{1}{n^c}$ for infinitely many $n$. or infinitely often.
> Keep in mind: if $P[\text{success}]=\frac{1}{n^c}$, the adversary can try $O(n^c)$ times and have a good chance of succeeding at least once.
## New materials
### Weak one-way function
#### Definition 28.4 (Weak one-way function)
$f:\{0,1\}^n\to \{0,1\}^*$
1. $\exists$ a P.P.T. that computes $f(x),\forall x\in\{0,1\}^n$
2. $\forall a$ adversaries, $\exists \epsilon(n),\forall n$.
2. $\forall \mathcal{A}$ adversaries, $\exists \epsilon(n),\forall n$.
$$
P[x\gets \{0,1\}^n;y=f(x):f(a(y,1^n))=y]<1-\frac{1}{p(n)}
P[x\gets \{0,1\}^n;y=f(x):f(\mathcal{A}(y,1^n))=y]<1-\frac{1}{p(n)}
$$
_The probability of success should not be too close to 1_
### Probability
### Useful bound $0<p<1$
#### Useful bound $0<p<1$
$1-p<e^{-p}$
@@ -59,9 +66,11 @@ For an experiment has probability $p$ of failure and $1-p$ of success.
We run experiment $n$ times independently.
$P[$success all n times$]=(1-p)^n<(e^{-p})^n=e^{-np}$
$P[\text{success all n times}]=(1-p)^n<(e^{-p})^n=e^{-np}$
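The bound and its repeated-trials consequence can be sanity-checked numerically:

```python
import math

# Check 1 - p < e^{-p} for 0 < p < 1, and its consequence
# P[success all n times] = (1-p)^n < e^{-np}.
for p in [0.001, 0.1, 0.5, 0.9, 0.999]:
    assert 1 - p < math.exp(-p)

p, n = 1 / 100, 1000
assert (1 - p) ** n < math.exp(-n * p)   # (0.99)^1000 < e^{-10}
```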
Theorem: If there exists a weak one-way function, there there exists a strong one-way function
#### Theorem 35.1 (Strong one-way function from weak one-way function)
If there exists a weak one-way function, then there exists a strong one-way function
In particular, if $f:\{0,1\}^n\to \{0,1\}^*$ is a weak one-way function.
@@ -99,14 +108,16 @@ Example: $(1-\frac{1}{n^2})^{n^3}<e^{-n}$
#### Multiplication
$Mult(m_1,m_2)=\begin{cases}
$$
Mult(m_1,m_2)=\begin{cases}
1, & m_1=1 \text{ or } m_2=1\\
m_1\cdot m_2, & \text{otherwise}
\end{cases}$
\end{cases}
$$
But we don't want trivial answers like (1,1000000007)
Idea: Our "secret" is 373 and 481, Eve cna see the product 179413.
Idea: Our "secret" is 373 and 481, Eve can see the product 179413.
Not strong one-way for all integer inputs because there are trivial answers for $\frac{3}{4}$ of all inputs: when $y$ is even, `Mult(2, y/2)` inverts it.
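A direct transcription of $Mult$ with the trivial-factor convention above, ending with the easy inversion for even outputs:

```python
# Mult maps pairs with a trivial factor to 1, so inverting via (1, y)
# never counts; even outputs are still easy to invert.
def mult(m1: int, m2: int) -> int:
    if m1 == 1 or m2 == 1:
        return 1
    return m1 * m2

assert mult(373, 481) == 179413     # Eve sees only the product
assert mult(1, 1000000007) == 1     # trivial pairs are ruled out

y = 179414                          # an even output
assert mult(2, y // 2) == y         # Mult(2, y/2) inverts it trivially
```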
@@ -126,4 +137,4 @@ $$
P[p_1\gets \Pi_n;p_2\gets \Pi_n;N=p_1\cdot p_2:a(n)=\{p_1,p_2\}]<\epsilon(n)
$$
where $\Pi_n=\{$ all primes $p<2^n\}$
where $\Pi_n=\{p:p\text{ prime},\ p<2^n\}$

View File

@@ -1,11 +1,13 @@
# Lecture 5
## Chapter 2: Computational Hardness
Proving that there are one-way functions relies on assumptions.
Factoring Assumption: $\forall a, \exist \epsilon (n)$, let $p,q\in prime,p,q<2^n$
Factoring Assumption: $\forall \mathcal{A}, \exists \epsilon (n)$, let $p,q\in \Pi_n,p,q<2^n$
$$
P[p\gets \Pi_n;q\gets \Pi_n;N=p\cdot q:a(N)\in \{p,q\}]<\epsilon(n)
P[p\gets \Pi_n;q\gets \Pi_n;N=p\cdot q:\mathcal{A}(N)\in \{p,q\}]<\epsilon(n)
$$
Evidence: To this point, the best known procedure to always factor has run time $O(2^{\sqrt{n}\sqrt{\log(n)}})$
@@ -33,40 +35,40 @@ $$
Idea: There are enough pairs of primes to make this difficult.
> Reminder: Weak one-way if easy to compute and $\exists p(n)$,
> $$P[a\ inverts=success]<1-\frac{1}{p(n)}$$
> $$P[failure]>\frac{1}{p(n)}$$ high enough
> $P[\mathcal{A}\text{ inverts}]<1-\frac{1}{p(n)}$
> $P[\mathcal{A}\text{ fails to invert}]>\frac{1}{p(n)}$, high enough
## Prove one-way function (under assumptions)
### Prove one-way function (under assumptions)
To prove $f$ is one-way (under assumptions)
1. Show $\exists$ p.p.t. that computes $f(x),\forall x$.
2. Proof by contradiction.
- For weak: Provide $p(n)$ that we know works.
- Assume $\exists a$ such that $P[a\ inverts]>1-\frac{1}{p(n)}$
- Assume $\exists \mathcal{A}$ such that $P[\mathcal{A}\ \text{inverts}]>1-\frac{1}{p(n)}$
- For strong: Provide $p(n)$ that we know works.
- Assume $\exists a$ such that $P[a\ inverts]>\frac{1}{p(n)}$
- Assume $\exists \mathcal{A}$ such that $P[\mathcal{A}\ \text{inverts}]>\frac{1}{p(n)}$
Construct p.p.t B
which uses $a$ to solve a problem, which contradicts assumption or known fact.
Construct p.p.t $\mathcal{B}$
which uses $\mathcal{A}$ to solve a problem, which contradicts assumption or known fact.
Back to Theorem:
We will show that $p(n)=8n^2$ works.
We claim $\forall a$,
We claim $\forall \mathcal{A}$,
$$
P[(x_1,x_2)\gets \{0,1\}^{2n};y=f_{mult}(x_1,x_2):f(a(y))=y]<1-\frac{1}{8n^2}
P[(x_1,x_2)\gets \{0,1\}^{2n};y=f_{mult}(x_1,x_2):f(\mathcal{A}(y))=y]<1-\frac{1}{8n^2}
$$
For the sake of contradiction, suppose
$$
\exists a \textup{ such that} P[success]>1-\frac{1}{8n^2}
\exists \mathcal{A} \textup{ such that} P[\mathcal{A}\ \text{inverts}]>1-\frac{1}{8n^2}
$$
We will use this $a$ to design p.p.t $B$ which can factor 2 random primes with non-negligible prob.
We will use this $\mathcal{A}$ to design p.p.t $\mathcal{B}$ which can factor a product of 2 random primes with non-negligible probability.
```python
def A(y):
    ...
```

@@ -88,27 +90,27 @@

```python
def B(y):
    ...
    return A(y)
```
How often does B succeed/fail?
How often does $\mathcal{B}$ succeed/fail?
B fails to factor $N=p\dot q$, if:
$\mathcal{B}$ fails to factor $N=p\cdot q$, if:
- $x$ and $y$ are not both prime
- $P_e=1-P(x\in prime)P(y\in prime)\leq 1-(\frac{1}{2n})^2=1-\frac{1}{4n^2}$
- if $a$ fails to factor
- $P_e=1-P(x\in \Pi_n)P(y\in \Pi_n)\leq 1-(\frac{1}{2n})^2=1-\frac{1}{4n^2}$
- if $\mathcal{A}$ fails to factor
- $P_f<\frac{1}{8n^2}$
So
$$
P[B\ fails]\leq P[E\cup F]\leq P[E]+P[F]\leq (1-\frac{1}{4n^2}+\frac{1}{8n^2})=1-\frac{1}{8n^2}
P[\mathcal{B} \text{ fails}]\leq P[E\cup F]\leq P[E]+P[F]\leq (1-\frac{1}{4n^2}+\frac{1}{8n^2})=1-\frac{1}{8n^2}
$$
So
$$
P[B\ succeed]\geq \frac{1}{8n^2}\ (non\ negligible)
P[\mathcal{B} \text{ succeed}]\geq \frac{1}{8n^2} (\text{non-negligible})
$$
This contradicting factoring assumption. Therefore, our assumption that $a$ exists was wrong.
This contradicts the factoring assumption. Therefore, our assumption that $\mathcal{A}$ exists was wrong.
Therefore $\forall a$, $P[(x_1,x_2)\gets \{0,1\}^{2n};y=f_{mult}(x_1,x_2):f(a(y))=y]<1-\frac{1}{8n^2}$ is wrong.
Therefore $\forall \mathcal{A}$, $P[(x_1,x_2)\gets \{0,1\}^{2n};y=f_{mult}(x_1,x_2):f(\mathcal{A}(y))=y]<1-\frac{1}{8n^2}$, i.e., $f_{mult}$ is weak one-way.
@@ -8,9 +8,11 @@ $$
is a weak one-way.
$P[a\ invert]\leq 1-\frac{1}{8n^2}$ over $x,y\in$ random integers $\{0,1\}^n$
$P[\mathcal{A}\text{ inverts}]\leq 1-\frac{1}{8n^2}$ over random integers $x,y\in\{0,1\}^n$
## Converting to strong one-way function
## Chapter 2: Computational Hardness
### Converting weak one-way function to strong one-way function
By the factoring assumption, $\exists$ strong one-way function
@@ -22,7 +24,7 @@ $f:\{0,1\}^{8n^4}\to \{0,1\}^{8n^4}$
Idea: With high probability, at least one pair $(x_i,y_i)$ are both prime.
Factoring assumption: $a$ has low chance of factoring $f_{mult}(x_i,y_i)$
Factoring assumption: $\mathcal{A}$ has low chance of factoring $f_{mult}(x_i,y_i)$
Use $P[x \textup{ is prime}]\geq\frac{1}{2n}$
@@ -34,13 +36,13 @@ $$
P[\forall i,\ x_i \textup{ and } y_i \textup{ are not both prime}]\leq(1-\frac{1}{4n^2})^{4n^3}\leq (e^{-\frac{1}{4n^2}})^{4n^3}=e^{-n}
$$
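The middle step above uses the standard bound $1-x\leq e^{-x}$ (valid for all real $x$), applied term by term:

$$
\left(1-\frac{1}{4n^2}\right)^{4n^3}\leq \left(e^{-\frac{1}{4n^2}}\right)^{4n^3}=e^{-\frac{4n^3}{4n^2}}=e^{-n}
$$

So with all but exponentially small probability, at least one sampled pair is two primes.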
### Proof of strong one-way
### Proof of strong one-way function
1. $f_{mult}$ is efficiently computable, and we compute it poly-many times.
2. Suppose it's not hard to invert. Then
$\exists n.u.p.p.t.\ a$such that $P[w\gets \{0,1\}^{8n^4};z=f(w):f(a(z))=0]=\mu (n)>\frac{1}{p(n)}$
$\exists\ \text{n.u.p.p.t.}\ \mathcal{A}$ such that $P[w\gets \{0,1\}^{8n^4};z=f(w):f(\mathcal{A}(z))=z]=\mu (n)>\frac{1}{p(n)}$
We will use this to construct $B$ that breaks factoring assumption.
We will use this to construct $\mathcal{B}$ that breaks factoring assumption.
$p\gets \Pi_n,q\gets \Pi_n,N=p\cdot q$
@@ -64,11 +66,11 @@ function B:
Let $E$ be the event that all pairs of sampled integers were not both prime.
Let $F$ be the event that $a$ failed to invert
Let $F$ be the event that $\mathcal{A}$ failed to invert
$P(B\ fails)\leq P[E\cup F]\leq P[E]+P[F]\leq e^{-n}+(1-\frac{1}{p(n)})=1-(\frac{1}{p(n)}-e^{-n})\leq 1-\frac{1}{2p(n)}$
$P[\mathcal{B} \text{ fails}]\leq P[E\cup F]\leq P[E]+P[F]\leq e^{-n}+(1-\frac{1}{p(n)})=1-(\frac{1}{p(n)}-e^{-n})\leq 1-\frac{1}{2p(n)}$
$P[B\ succeeds]=P[p\gets \Pi_n,q\gets \Pi_n,N=p\cdot q:B(N)\in \{p,q\}]\geq \frac{1}{2p(n)}$
$P[\mathcal{B} \text{ succeeds}]=P[p\gets \Pi_n,q\gets \Pi_n,N=p\cdot q:\mathcal{B}(N)\in \{p,q\}]\geq \frac{1}{2p(n)}$
This contradicts the factoring assumption.
@@ -87,10 +89,10 @@ $F=\{f_i:D_i\to R_i\},i\in I$, $I$ is the index set.
1. We can efficiently choose $i\gets I$ using $Gen$.
2. $\forall i$ we can efficiently sample $x\gets D_i$.
3. $\forall i\forall x\in D_i$, $f_i(x)$ is efficiently computable
4. For any n.u.p.p.t $a$, $\exists$ negligible function $\epsilon (n)$.
$P[i\gets Gen(1^n);x\gets D_i;y=f_i(x):f(a(y,i,1^n))=y]\leq \epsilon(n)$
4. For any n.u.p.p.t $\mathcal{A}$, $\exists$ negligible function $\epsilon (n)$.
$P[i\gets Gen(1^n);x\gets D_i;y=f_i(x):f_i(\mathcal{A}(y,i,1^n))=y]\leq \epsilon(n)$
#### Theorem
#### An instance of strong one-way function under factoring assumption
$f_{mult,n}:(\Pi_n\times \Pi_n)\to \{0,1\}^{2n}$ is a collection of strong one-way functions.
@@ -107,8 +109,6 @@ Algorithm for sampling a random prime $p\gets \Pi_n$
- Deterministic poly-time procedure
- In practice, a much faster randomized procedure (Miller-Rabin) used
$P[x\cancel{\in} prime|test\ said\ x\ prime]<\epsilon(n)$
$P[x\notin \Pi_n \mid \text{test said } x \text{ prime}]<\epsilon(n)$
3. If not, repeat. Do this a polynomial number of times
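The sampling loop can be sketched with Miller-Rabin as the test. This is illustrative only; the function names and the round count are my own choices, not from the lecture:

```python
import random

def miller_rabin(x, rounds=40):
    # Probabilistic primality test: a composite passes with prob. <= 4**(-rounds);
    # a prime always passes.
    if x < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if x % p == 0:
            return x == p
    d, r = x - 1, 0
    while d % 2 == 0:          # write x - 1 = 2^r * d with d odd
        d, r = d // 2, r + 1
    for _ in range(rounds):
        a = random.randrange(2, x - 1)
        y = pow(a, d, x)
        if y in (1, x - 1):
            continue
        for _ in range(r - 1):
            y = y * y % x
            if y == x - 1:
                break
        else:
            return False       # a witnesses that x is composite
    return True

def sample_prime(n):
    # Steps 1-3 above: random n-bit odd candidates until the test accepts.
    while True:
        candidate = random.getrandbits(n) | (1 << (n - 1)) | 1
        if miller_rabin(candidate):
            return candidate
```

By the prime density bound $P[x\text{ prime}]\geq\frac{1}{2n}$, the loop terminates after an expected $O(n)$ iterations.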
> $;$ means and, $:$ means given that. $1^n$ usually interchangeable with $\{0,1\}^n$
@@ -1,10 +1,12 @@
# Lecture 7
## Letter choosing experiment
## Chapter 2: Computational Hardness
### Letter choosing experiment
For 100 letter tiles,
$p_1,...,p_{27}$ (with oe blank)
$p_1,...,p_{27}$ (with one blank)
$(p_1)^2+\dots +(p_{27})^2\geq\frac{1}{27}$
@@ -12,17 +14,17 @@ For any $p_1,...,p_n$, $0\leq p_i\leq 1$.
$\sum p_i=1$
$P[$the same event twice in a row$]=p_1^2+p_2^2....+p_n^2$
$P[\text{the same event twice in a row}]=p_1^2+p_2^2....+p_n^2$
By Cauchy-Schwarz: $|\vec{u}\cdot \vec{v}|^2 \leq \|\vec{u}\|^2\cdot \|\vec{v}\|^2$.

let $\vec{u}=(p_1,...,p_n)$, $\vec{v}=(1,..,1)$, so $(p_1+p_2+\dots+p_n)^2\leq (p_1^2+p_2^2+\dots+p_n^2)\cdot n$. So $p_1^2+p_2^2+\dots+p_n^2\geq \frac{1}{n}$
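The collision bound $\sum p_i^2 \geq \frac{1}{n}$ holds for every probability vector; a quick numerical sanity check on random vectors (a sketch, not part of the lecture):

```python
import random

n = 27
w = [random.random() for _ in range(n)]   # arbitrary positive weights
total = sum(w)
p = [x / total for x in w]                # normalize to a probability vector

assert abs(sum(p) - 1) < 1e-9
# collision probability is always at least 1/n
assert sum(x * x for x in p) >= 1 / n - 1e-12
```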
So for an adversary $A$, who random choose $x'$ and output $f(x')=f(x)$ if matched. $P[f(x)=f(x')]\geq\frac{1}{|Y|}$
So for an adversary $\mathcal{A}$ who randomly chooses $x'$ and outputs $x'$ if $f(x')=f(x)$: $P[f(x)=f(x')]\geq\frac{1}{|Y|}$
So $P[x\gets f(x);y=f(x):f(a(y,1^n))=y]\geq \frac{1}{|Y|}$
So $P[x\gets \{0,1\}^n;y=f(x):f(\mathcal{A}(y,1^n))=y]\geq \frac{1}{|Y|}$
## Modular arithmetic
### Modular arithmetic
For $a,b\in \mathbb{Z}$, $N\in \mathbb{Z}^+$
@@ -30,7 +32,7 @@ $a\equiv b \mod N\iff N|(a-b)\iff \exists k\in \mathbb{Z}, a-b=kN,a=kN+b$
Ex: $N=23$, $-20\equiv 3\equiv 26\equiv 49\equiv 72\mod 23$.
### Equivalent relations for any $N$ on $\mathbb{Z}$
#### Equivalence relation for any $N$ on $\mathbb{Z}$
$a\equiv a\mod N$
@@ -38,7 +40,7 @@ $a\equiv b\mod N\iff b\equiv a\mod N$
$a\equiv b\mod N$ and $b\equiv c\mod N\implies a\equiv c\mod N$
### Division Theorem
#### Division Theorem
For any $a\in \mathbb{Z}$ and $N\in\mathbb{Z}^+$, $\exists$ unique $q,r$ with $0\leq r<N$ such that $a=qN+r$.
@@ -52,7 +54,7 @@ Definition: $gcd(a,b)=d,a,b\in \mathbb{Z}^+$, is the maximum number such that $d
Using normal factoring is slow... (Example: large $p,q,r$, $N=p\cdot q$, $M=p\cdot r$)
#### Euclidean algorithm.
##### Euclidean algorithm
Recursively relying on the fact that (for $a>b>0$)
@@ -82,3 +84,37 @@ Proof:
Since $a_i=q_i\cdot b_i+b_{i+1}$: from $b_1=q_2\cdot b_2+b_3$ with $b_2>b_3$ and $q_2\geq 1$ in the worst case, $b_1>2b_3$, so $b_3<\frac{b_1}{2}$

$T(n)=2\Theta(\log b)=O(\log n)$ (linear in the bit length of the input)
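The logarithmic step count is easy to observe directly; a small sketch (`gcd_steps` is a hypothetical helper, the worst case being consecutive Fibonacci numbers):

```python
def gcd_steps(a, b):
    # Iterative Euclidean algorithm with a step counter.
    steps = 0
    while b:
        a, b = b, a % b
        steps += 1
    return a, steps

# (2^32 - 1) divides (2^64 - 1), so a single step suffices here
assert gcd_steps(2**64 - 1, 2**32 - 1) == (2**32 - 1, 1)
# consecutive Fibonacci numbers 13, 8 hit the worst case
assert gcd_steps(13, 8) == (1, 5)
```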
##### Extended Euclidean algorithm
Our goal is to find $x,y$ such that $ax+by=gcd(a,b)$
Given $a\cdot x\equiv b\mod N$, we run the Euclidean algorithm to find $\gcd(a,N)=d$, then reverse the steps to find $x,y$ such that $ax+Ny=d$
```python
def extended_euclidean_algorithm(a, b):
    """Return (x, y) such that a*x + b*y == gcd(a, b)."""
    if a % b == 0:
        # gcd(a, b) = b, and a*0 + b*1 = b
        return (0, 1)
    x, y = extended_euclidean_algorithm(b, a % b)
    # b*x + (a % b)*y = g, and a % b = a - (a//b)*b,
    # so a*y + b*(x - y*(a//b)) = g
    return (y, x - y * (a // b))
```
Example: $a=12,b=43$, $gcd(12,43)=1$
$$
\begin{aligned}
43&=3\cdot 12+7\\
12&=1\cdot 7+5\\
7&=1\cdot 5+2\\
5&=2\cdot 2+1\\
2&=2\cdot 1+0\\
1&=1\cdot 5-2\cdot 2\\
1&=1\cdot 5-2\cdot (7-1\cdot 5)\\
1&=3\cdot 5-2\cdot 7\\
1&=3\cdot (12-1\cdot 7)-2\cdot 7\\
1&=3\cdot 12-5\cdot 7\\
1&=3\cdot 12-5\cdot (43-3\cdot 12)\\
1&=-5\cdot 43+18\cdot 12\\
\end{aligned}
$$
So $x=18,y=-5$ (indeed $12\cdot 18+43\cdot(-5)=1$)
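The back-substitution above can be checked mechanically. A self-contained sketch (`egcd` is a hypothetical name for the same recursion; note the convention that $x$ multiplies $a=12$):

```python
def egcd(a, b):
    # returns (x, y) with a*x + b*y == gcd(a, b)
    if a % b == 0:
        return (0, 1)
    x, y = egcd(b, a % b)
    return (y, x - y * (a // b))

x, y = egcd(12, 43)
assert 12 * x + 43 * y == 1
# since gcd(12, 43) = 1, x is the inverse of 12 modulo 43
assert (12 * x) % 43 == 1
```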
@@ -1,6 +1,8 @@
# Lecture 8
## Computational number theory/arithmetic
## Chapter 2: Computational Hardness
### Computational number theory/arithmetic
We want easy-to-use one-way functions for cryptography.
@@ -29,14 +31,14 @@ _looks like fast exponentiation right?_
Goal: $f_{g,p}(x)=g^x\mod p$ is a one-way function, for certain choices of $p,g$ (and assumptions)
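Computing $g^x \bmod p$ efficiently is exactly repeated squaring. A minimal sketch (the name `mod_exp` is illustrative; Python's built-in `pow(g, x, p)` does the same):

```python
def mod_exp(g, x, p):
    # Square-and-multiply: O(log x) multiplications, all reduced mod p.
    result, base = 1, g % p
    while x > 0:
        if x & 1:                       # current bit of the exponent is 1
            result = result * base % p
        base = base * base % p          # square for the next bit
        x >>= 1
    return result

assert mod_exp(3, 7, 11) == pow(3, 7, 11)
```

So evaluating $f_{g,p}$ is easy; one-wayness is about the hardness of going back from $g^x$ to $x$ (discrete log).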
### A group (Nice day one for MODERN ALGEBRA)
#### A group (Nice day one for MODERN ALGEBRA)
A group $G$ is a set with a binary operation $\oplus$ such that $\forall a,b\in G$:
1. $a,b\in G,a\oplus b\in G$
2. $(a\oplus b)\oplus c=a\oplus(b\oplus c)$
3. $\exists e$ such that $\forall a\in G, e\oplus g=g=g\oplus e$
4. $\exists g^{-1}\in G$ such that $g\oplus g^{-1}=e$
1. $a,b\in G\implies a\oplus b\in G$ (closure)
2. $(a\oplus b)\oplus c=a\oplus(b\oplus c)$ (associativity)
3. $\exists e$ such that $\forall g\in G, e\oplus g=g=g\oplus e$ (identity element)
4. $\forall g\in G,\exists g^{-1}\in G$ such that $g\oplus g^{-1}=e$ (inverse element)
Example:
@@ -49,13 +51,13 @@ Example:
- Let $a\in \mathbb{Z}_N^*$; by the extended Euclidean algorithm, since $\gcd(a,N)=1$, $\exists x,y \in \mathbb{Z}$ such that $ax+Ny=1$, so $ax\equiv 1\mod N$ and $x=a^{-1}$
- $a,b\in \mathbb{Z}_N^*$. Want to show $gcd(ab,N)=1$. If $gcd(ab,N)=d>1$, then some prime $p|d$, so $p|ab$, which means $p|a$ or $p|b$. In either case, $gcd(a,N)\geq p>1$ or $gcd(b,N)\geq p>1$, which contradicts that $a,b\in \mathbb{Z}_N^*$
### Euler's totient function
#### Euler's totient function
$\phi:\mathbb{Z}^+\to \mathbb{Z}^+,\phi(N)=|\mathbb{Z}_N^*|=|\{1\leq x\leq N:gcd(x,N)=1\}|$
Example: $\phi(1)=1$, $\phi(24)=8$, $\phi(p)=p-1$ for prime $p$, $\phi(p\cdot q)=(p-1)(q-1)$ for distinct primes $p,q$
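These small values can be confirmed by brute-force counting over $\mathbb{Z}_N^*$ (an illustrative sketch only; real implementations use the factorization of $N$):

```python
from math import gcd

def phi(N):
    # |Z_N^*|: count x in [1, N] with gcd(x, N) = 1
    return sum(1 for x in range(1, N + 1) if gcd(x, N) == 1)

assert phi(1) == 1
assert phi(24) == 8
assert phi(7) == 7 - 1                  # phi(p) = p - 1
assert phi(3 * 5) == (3 - 1) * (5 - 1)  # phi(pq) = (p-1)(q-1)
```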
### Euler's Theorem
#### Euler's Theorem
For any $a\in \mathbb{Z}_N^*$, $a^{\phi(N)}\equiv 1\mod N$
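Euler's Theorem can be spot-checked numerically, e.g. for $N=24$ where $\phi(24)=8$ (a quick sketch):

```python
from math import gcd

N, phi_N = 24, 8
for a in range(1, N):
    if gcd(a, N) == 1:
        # Euler: a^phi(N) = 1 (mod N) for every a in Z_N^*
        assert pow(a, phi_N, N) == 1
```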
@@ -1,6 +1,8 @@
# Lecture 9
## Continue on Cyclic groups
## Chapter 2: Computational Hardness
### Continue on Cyclic groups
$$
\begin{aligned}
@@ -99,7 +101,7 @@ def get_generator(p):
return g
```
### Diffie-Hellman assumption
### (Computational) Diffie-Hellman assumption
If $p$ is a randomly sampled safe prime.
@@ -114,5 +116,3 @@ $$
$p\gets \tilde{\Pi}_n;a\gets\mathbb{Z}_p^*;g=a^2\neq 1$ is the sampling condition when we do encryption on cyclic groups.
Notes: $f:\mathbb{Z}_q\to \mathbb{Z}_p^*$ is one-to-one, so $f(\mathcal{A}(y))=y\iff \mathcal{A}(y)=x$
@@ -32,10 +32,24 @@ Many definitions to remember. They are long and tedious.
For example, I have to read the book to understand the definition of "hybrid argument". It was given as follows:
> Let $X^0_n,X^1_n,\dots,X^m_n$ be ensembles indexed from $0,\dots,m$
> If $\mathcal{D}$ distinguishes $X_n^0$ and $X_n^m$ with advantage $\mu(n)$, then $\exists i,1\leq i\leq m$ where $X_{n}^{i-1}$ and $X_n^i$ are distinguished by $\mathcal{D}$ with advantage $\frac{\mu(n)}{m}$
I'm having a hard time recovering them without reading the book.

The lecturer's explanation is good, but you'd better always pay attention in class or you'll have a hard time catching up with the proofs.
### Notations used in this course
The notations used in this course are very complicated. However, since we need to define these concepts mathematically, we have to use them. Here are some notations I changed or emphasized for better readability, at least for myself.

- I changed all elements of sets to lowercase letters. I don't know why $K$ is capitalized in the book.
- I changed the message space notation $\mathcal{M}$ to $M$, and the key space notation $\mathcal{K}$ to $K$ for better readability.
- Calligraphic letters denote algorithms. For example, $\mathcal{A}$ is the adversary algorithm, and $\mathcal{D}$ is the distinguisher algorithm.
- As always, $[1,n]$ denotes the set of integers from 1 to n.
- $P[A]$ denotes the probability of event $A$.
- $\{0,1\}^n$ denotes the set of all binary strings of length $n$.
- $1^n$ denotes the string of length $n$ with all bits being 1.
- $0^n$ denotes the string of length $n$ with all bits being 0.
- $;$ means and, $:$ means given that.
- $\Pi_n$ denotes the set of all primes less than $2^n$.