Math 401, Paper 1, Side note 1: Quantum information theory and Measure concentration
Typicality
The idea of typicality in high dimensions is an important topic for understanding this paper and for taking it to the next level of detail in the language of mathematics. I'm trying to comprehend this material and write down my understanding in this note.
Let $X$ be the alphabet of our source of information.
Let $x^n = x_1, x_2, \cdots, x_n$ be a sequence with $x_i \in X$.
We say that $x^n$ is $\epsilon$-typical with respect to $p(x)$ if
- For all $a \in X$ with $p(a) > 0$, we have $\left|\frac{1}{n}N(a|x^n) - p(a)\right| \leq \frac{\epsilon}{|X|}$.
- For all $a \in X$ with $p(a) = 0$, we have $N(a|x^n) = 0$.
Here $N(a|x^n)$ is the number of times $a$ appears in $x^n$. That's basically saying that:
- The difference between the frequency of $a$ appearing in $x^n$ and the probability $p(a)$ of $a$ appearing in the source of information should be within $\epsilon$ divided by the size of the alphabet $X$.
- A symbol $a$ with $p(a) = 0$ should not appear in $x^n$ at all.
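To make the definition concrete, here is a minimal sketch in Python (the function name `is_eps_typical` and the example distribution are hypothetical, just for illustration): it counts $N(a|x^n)$ and tests both conditions above.

```python
from collections import Counter

def is_eps_typical(xn, p, eps):
    """Return True if the sequence xn is eps-typical w.r.t. p.

    xn  : sequence of symbols (e.g. a string or list)
    p   : dict mapping each symbol a in the alphabet X to p(a)
    eps : tolerance epsilon > 0
    """
    n = len(xn)
    counts = Counter(xn)            # counts[a] == N(a|x^n)
    for a, pa in p.items():
        if pa > 0:
            # |N(a|x^n)/n - p(a)| <= eps / |X|
            if abs(counts[a] / n - pa) > eps / len(p):
                return False
        elif counts[a] != 0:
            # a symbol with p(a) == 0 must never appear
            return False
    return True

# Example: with p(a) = 2/3, p(b) = 1/3, the sequence "aab" matches
# the source frequencies exactly, so it is typical for any eps > 0.
print(is_eps_typical("aab", {"a": 2/3, "b": 1/3}, 0.1))   # True
```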
Here are three easy propositions that can be proved:
- For $\epsilon > 0$, the probability that a sequence drawn from the source is $\epsilon$-typical goes to 1 as $n$ goes to infinity.
- If $x^n$ is $\epsilon$-typical, then the probability that $x^n$ is produced satisfies $2^{-n[H(X)+\epsilon]} \leq p(x^n) \leq 2^{-n[H(X)-\epsilon]}$. (This follows by expanding $\log_2 p(x^n) = \sum_{a \in X} N(a|x^n)\log_2 p(a)$ and applying the frequency condition.)
- The number of $\epsilon$-typical sequences is at most $2^{n[H(X)+\epsilon]}$ (and, for sufficiently large $n$, at least $(1-\epsilon)2^{n[H(X)-\epsilon]}$).
Recall that $H(X) = -\sum_{a \in X} p(a)\log_2 p(a)$ is the entropy of the source of information.
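The first two propositions can be checked numerically. Below is a small sketch (in Python, with a made-up three-symbol source) that samples sequences of increasing length, measures the fraction that are $\epsilon$-typical, and records $-\frac{1}{n}\log_2 p(x^n)$ for the typical ones. The fraction should climb toward 1, and the recorded rates should cluster around $H(X)$, as propositions 1 and 2 predict.

```python
import math
import random
from collections import Counter

# Toy source (made up for illustration): H(X) = 1.5 bits.
p = {"a": 0.5, "b": 0.25, "c": 0.25}
H = -sum(q * math.log2(q) for q in p.values() if q > 0)
eps = 0.1

def is_eps_typical(xn, p, eps):
    # Same check as in the sketch above.
    n, counts = len(xn), Counter(xn)
    return all(
        abs(counts[a] / n - q) <= eps / len(p) if q > 0 else counts[a] == 0
        for a, q in p.items()
    )

symbols, weights = zip(*p.items())
for n in (10, 100, 1000):
    trials = 2000
    rates = []                      # -log2 p(x^n) / n for each typical sample
    for _ in range(trials):
        xn = random.choices(symbols, weights=weights, k=n)
        if is_eps_typical(xn, p, eps):
            rates.append(-sum(math.log2(p[x]) for x in xn) / n)
    msg = f"n={n:5d}: fraction eps-typical = {len(rates) / trials:.3f}"
    if rates:
        msg += f", -log2 p(x^n)/n in [{min(rates):.3f}, {max(rates):.3f}]"
    print(msg)   # fraction climbs toward 1; rates cluster around H = 1.5
```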
Shannon theory in Quantum information theory
Shannon theory provides a way to quantify the amount of information in a message.
Practically speaking:
- A holy grail for error-correcting codes
Conceptually speaking:
- An operationally-motivated way of thinking about correlations

What’s missing (for a quantum mechanic)?
- Features from linear structure: entanglement and non-orthogonality