updates?
This commit is contained in:
Binary file not shown.
@@ -361,7 +361,24 @@ First, we define the Hilbert space in case one did not make the step from the li
A Hilbert space is a complete inner product space.
\end{defn}

That is, a vector space equipped with an inner product; the inner product induces a norm, the norm induces a metric, and the resulting metric space is complete. Recall that complete means every Cauchy sequence (a sequence such that for every $\epsilon>0$ there exists an $N$ with $|x_m-x_n|<\epsilon$ for all $m,n\geq N$) converges to a limit in the space.
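The chain from inner product to norm to metric can be sketched concretely; a minimal numerical illustration (the specific vectors, and the use of NumPy, are our own choices, not from the text):

```python
import numpy as np

def inner(u, v):
    # Hermitian inner product on C^n (conjugate-linear in the first argument)
    return np.vdot(u, v)

def norm(u):
    # Norm induced by the inner product: ||u|| = sqrt(<u, u>)
    return np.sqrt(inner(u, u).real)

def dist(u, v):
    # Metric induced by the norm: d(u, v) = ||u - v||
    return norm(u - v)

u = np.array([1.0, 1j])
v = np.array([0.0, 1.0])

# Cauchy-Schwarz: |<u, v>| <= ||u|| ||v||
assert abs(inner(u, v)) <= norm(u) * norm(v) + 1e-12
```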

As a side note for later use, we also define the Borel measure; here we use the following definition, specialized to the spaces (manifolds) we are interested in.
\begin{defn}
\label{defn:Borel_measure}
Borel measure:
Let $X$ be a topological space and let $\mathscr{B}(X)$ denote the Borel $\sigma$-algebra of $X$, the smallest $\sigma$-algebra containing the open sets; in particular $X\in\mathscr{B}(X)$, and $\mathscr{B}(X)$ is closed under complements and countable unions. A Borel measure on $X$ is a map $\mu:\mathscr{B}(X)\to [0,\infty]$ satisfying the following properties:

\begin{enumerate}
\item $\mu(\emptyset)=0$.
\item Countable additivity: if $E_1,E_2,\cdots\in\mathscr{B}(X)$ are pairwise disjoint, then $\mu(\bigcup_{i=1}^\infty E_i)=\sum_{i=1}^\infty \mu(E_i)$.
\end{enumerate}

When $\mu(X)<\infty$, additivity gives the complement rule $\mu(A^c)=\mu(X)-\mu(A)$ for $A\in\mathscr{B}(X)$.
\end{defn}
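For intuition, a finite toy example (the space and the weights are invented for illustration): a measure built from point masses on a finite set satisfies both axioms, and the complement rule follows because the total mass is finite.

```python
# Point masses on a finite space X; mu of a set is the sum of its point masses.
X = {'a', 'b', 'c'}
w = {'a': 0.2, 'b': 0.3, 'c': 0.5}

def mu(A):
    return sum(w[x] for x in A)

# Axiom 1: mu(empty set) = 0
assert mu(set()) == 0

# Axiom 2: additivity on disjoint sets
E1, E2 = {'a'}, {'b', 'c'}
assert E1.isdisjoint(E2)
assert abs(mu(E1 | E2) - (mu(E1) + mu(E2))) < 1e-12

# Complement rule, valid here because mu(X) is finite
A = {'a', 'b'}
assert abs(mu(X - A) - (mu(X) - mu(A))) < 1e-12
```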

In later sections we will use the Lebesgue measure and the Haar measure in various circumstances; their detailed definitions will be introduced where needed.

\begin{examples}
@@ -538,7 +555,7 @@ The counterpart of the random variable in the non-commutative probability theory
\label{defn:observable}
Observable:

Let $\mathcal{B}(\mathbb{R})$ be the set of all Borel sets on $\mathbb{R}$.

A (real-valued) observable (random variable) on the Hilbert space $\mathscr{H}$, denoted by $A$, is a projection-valued measure $P_A:\mathcal{B}(\mathbb{R})\to\mathscr{P}(\mathscr{H})$.
@@ -721,7 +738,7 @@ $$
\rho = \ket{\psi}\bra{\psi}.
$$

The probability that a measurement associated with a PVM $P$ yields an outcome in a Borel set $A\in \mathcal{B}(\mathbb{R})$ is
$$
\mathbb P(A) = \operatorname{Tr}(\rho\, P(A)).
$$
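A minimal qubit sketch of this rule (the state and the choice of the Pauli-Z observable are illustrative, not from the source): the PVM of an observable with discrete spectrum sends a Borel set to the sum of the spectral projectors whose eigenvalues lie in it, and the probability is $\operatorname{Tr}(\rho\,P(A))$.

```python
import numpy as np

# Spectral decomposition of Pauli-Z: eigenvalue +1 -> |0><0|, eigenvalue -1 -> |1><1|
P_plus = np.array([[1, 0], [0, 0]], dtype=complex)
P_minus = np.array([[0, 0], [0, 1]], dtype=complex)
spectrum = {1.0: P_plus, -1.0: P_minus}

def pvm(borel_set):
    # P(A) = sum of spectral projectors whose eigenvalue lies in A
    return sum((P for lam, P in spectrum.items() if lam in borel_set),
               np.zeros((2, 2), dtype=complex))

# Pure state rho = |psi><psi| with |psi> = (|0> + |1>)/sqrt(2)
psi = np.array([1, 1], dtype=complex) / np.sqrt(2)
rho = np.outer(psi, psi.conj())

# Born rule: probability of measuring an outcome in {+1}
prob = np.trace(rho @ pvm({1.0})).real  # -> 0.5
```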
@@ -779,7 +796,7 @@ Here is Table~\ref{tab:analog_of_classical_probability_theory_and_non_commutativ
\hline
Sample space $\Omega$, cardinality $\vert\Omega\vert=n$, example: $\Omega=\{0,1\}$ & Complex Hilbert space $\mathscr{H}$, dimension $\dim\mathscr{H}=n$, example: $\mathscr{H}=\mathbb{C}^2$ \\
\hline
Commutative algebra of $\mathbb{C}$-valued functions & Algebra of bounded operators $\mathcal{B}(\mathscr{H})$ \\
\hline
$f\mapsto \bar{f}$ complex conjugation & $P\mapsto P^*$ adjoint \\
\hline
@@ -816,7 +833,7 @@ Here is Table~\ref{tab:analog_of_classical_probability_theory_and_non_commutativ
\vspace{0.5cm}
\end{table}

\section{Quantum physics and terminologies}

In this section, we introduce some terminology and theorems from quantum physics that are relevant to our study. Assuming no prior knowledge of quantum physics, we provide brief definitions and explanations for each term.
Binary file not shown.
@@ -11,10 +11,111 @@
In this section, we will explore how the results from Hayden's concentration of measure theorem can be understood in terms of observable diameters from Gromov's perspective, and what properties this reveals for entropy functions.

We will use the results from previous sections to estimate the observable diameter of complex projective spaces.
\section{Observable diameters}

Recall from previous sections that an arbitrary 1-Lipschitz function $f:S^n\to \mathbb{R}$ concentrates near a single value $a_0\in \mathbb{R}$ as strongly as the distance function does.
\begin{defn}
\label{defn:mm-space}

Let $X$ be a topological space with the following:

\begin{enumerate}
\item $X$ is a metric space with metric $d_X$
\item $(X,d_X)$ is complete (every Cauchy sequence converges)
\item $X$ has a Borel probability measure $\mu_X$
\end{enumerate}

Then $(X,d_X,\mu_X)$ is a \textbf{metric measure space}.
\end{defn}

\begin{defn}
\label{defn:diameter}

Let $(X,d_X)$ be a metric space. The \textbf{diameter} of a set $A\subset X$ is defined as
$$
\diam(A)=\sup_{x,y\in A}d_X(x,y).
$$
\end{defn}

\begin{defn}
\label{defn:partial-diameter}

Let $(X,d_X,\mu_X)$ be a metric measure space. For any real number $0\leq\alpha\leq 1$, the \textbf{partial diameter} of $X$ is defined as
$$
\diam(X;\alpha)=\inf_{A\subseteq X,\ \mu_X(A)\geq \alpha}\diam(A).
$$
More generally, for a Borel measure $\nu$ on $X$ we write $\diam(\nu;\alpha)$ for the same infimum taken over sets $A$ with $\nu(A)\geq\alpha$.
\end{defn}
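As a quick sanity check of the definition (an illustrative computation, not from the source), consider the unit interval with Lebesgue measure:

```latex
% X=[0,1], d(x,y)=|x-y|, \mu = Lebesgue measure.
% Any A\subseteq[0,1] with \mu(A)\geq\alpha satisfies
% \diam(A)\geq\mu(A)\geq\alpha, and A=[0,\alpha] attains the bound, hence
\diam([0,1];\alpha)=\alpha .
```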

This definition generalizes the relation between the measure and the metric in a metric measure space. Intuitively, a space with a smaller partial diameter can hold more volume under the same diameter constraint.

However, in higher dimensions the volume tends to concentrate in a small neighborhood of a set, as we saw in previous chapters with the high-dimensional sphere as an example. We can therefore safely cut away $\kappa>0$ of the volume to significantly reduce the diameter; this yields a better measure of concentration for shapes in high-dimensional spaces.

\begin{defn}
\label{defn:observable-diameter}
Let $X$ be a metric measure space, $Y$ a metric space, and $f:X\to Y$ a 1-Lipschitz function. Then $f_*\mu_X$ is a push-forward measure on $Y$.

For any real number $\kappa>0$, the \textbf{$\kappa$-observable diameter with screen $Y$} is defined as
$$
\obdiam_Y(X;\kappa)=\sup_{f:X\to Y\ \text{1-Lipschitz}}\diam(f_*\mu_X;1-\kappa)
$$

And the \textbf{observable diameter with screen $Y$} is defined as
$$
\obdiam_Y(X)=\inf_{\kappa>0}\max\{\obdiam_Y(X;\kappa),\kappa\}
$$

If $Y=\R$, we call it the \textbf{observable diameter}.

\end{defn}
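A Monte Carlo sketch of the quantity inside this definition, under stated simplifications (we take $X=S^{n-1}$ with the uniform measure and a single 1-Lipschitz screen function $f(x)=x_1$ rather than the full supremum; NumPy is used for sampling): the $(1-\kappa)$-partial diameter of the push-forward measure shrinks as the dimension grows, which is the concentration phenomenon the observable diameter captures.

```python
import numpy as np

rng = np.random.default_rng(0)

def pushforward_partial_diameter(n, kappa=0.1, n_samples=100_000):
    # Sample uniformly from S^{n-1} (normalized Gaussians) and push forward
    # by the 1-Lipschitz coordinate function f(x) = x_1.
    x = rng.standard_normal((n_samples, n))
    x /= np.linalg.norm(x, axis=1, keepdims=True)
    values = x[:, 0]
    # By symmetry and unimodality, the shortest interval of mass 1 - kappa
    # is centered; estimate its length from the symmetric quantiles.
    lo, hi = np.quantile(values, [kappa / 2, 1 - kappa / 2])
    return hi - lo

for n in [3, 30, 300]:
    print(n, pushforward_partial_diameter(n))
```

The printed lengths shrink roughly like $1/\sqrt{n}$, while the full diameter of the push-forward support stays close to $2$.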

If we collapse it naively via
$$
\inf_{\kappa>0}\obdiam_Y(X;\kappa),
$$
we typically get something degenerate: as $\kappa\to 1$, the condition ``mass $\ge 1-\kappa$'' becomes almost vacuous, so $\diam(\nu;1-\kappa)$ can be forced to be $0$ (take a tiny set of positive mass), and hence the infimum tends to $0$ for essentially any non-atomic space.

This is why one either:
\begin{enumerate}
\item keeps $\obdiam_Y(X;\kappa)$ as a \emph{function of $\kappa$} (picking $\kappa$ to be small but not $0$), or
\item if one insists on a single number, balances ``spread'' against ``exceptional mass'' by defining $\obdiam_Y(X)=\inf_{\kappa>0}\max\{\obdiam_Y(X;\kappa),\kappa\}$ as above.
\end{enumerate}
The point of the $\max\{\cdot,\kappa\}$ is that it prevents cheating by taking $\kappa$ close to $1$: if $\kappa$ is large then the maximum is large regardless of how small $\obdiam_Y(X;\kappa)$ is, so the infimum is forced to occur where both the exceptional mass and the observable spread are small.
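The balancing act can be sketched numerically; the decay profile $e^{-n\kappa}$ below is hypothetical, chosen only to mimic a family of spaces that concentrates more strongly as $n$ grows.

```python
import numpy as np

def single_number_obdiam(obdiam_profile, kappas):
    # inf over kappa of max(ObsDiam(X; kappa), kappa), evaluated on a grid
    return min(max(obdiam_profile(k), k) for k in kappas)

kappas = np.linspace(1e-3, 1.0, 1000)
# Hypothetical spread profiles decaying faster as the "dimension" n grows
values = [single_number_obdiam(lambda k, n=n: np.exp(-n * k), kappas)
          for n in (1, 10, 100)]
print(values)  # the balanced diameter shrinks as concentration strengthens
```

The infimum lands near the crossing point $e^{-n\kappa}=\kappa$, so neither term can be driven to zero at the other's expense.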

A few additional propositions from \cite{shioya2014metricmeasuregeometry} will help us estimate the observable diameter of complex projective spaces.

\begin{prop}
Let $X$ and $Y$ be two metric measure spaces and $\kappa>0$, and let $f:Y\to X$ be a 1-Lipschitz function with $f_*\mu_Y=\mu_X$ ($Y$ dominates $X$, denoted $X\prec Y$). Then:

\begin{enumerate}
\item
$$
\diam(X;1-\kappa)\leq \diam(Y;1-\kappa)
$$
\item $\obdiam(X;\kappa)\leq \diam(X;1-\kappa)$, and $\obdiam(X)$ is finite.
\item
$$
\obdiam(X;\kappa)\leq \obdiam(Y;\kappa)
$$
\end{enumerate}
\end{prop}

\begin{proof}
By the domination assumption, $f_*\mu_Y=\mu_X$. Let $A$ be any Borel set of $Y$ with $\mu_Y(A)\geq 1-\kappa$ and let $\overline{f(A)}$ be the closure of $f(A)$ in $X$. We have $\mu_X(\overline{f(A)})=\mu_Y(f^{-1}(\overline{f(A)}))\geq \mu_Y(A)\geq 1-\kappa$, and by the 1-Lipschitz property $\diam(\overline{f(A)})\leq \diam(A)$. Taking the infimum over such $A$ gives $\diam(X;1-\kappa)\leq \diam(Y;1-\kappa)$.

Let $g:X\to \R$ be any 1-Lipschitz function. Since $(\R,|\cdot|,g_*\mu_X)$ is dominated by $X$, the first claim gives $\diam(g_*\mu_X;1-\kappa)\leq \diam(X;1-\kappa)$; taking the supremum over $g$ yields $\obdiam(X;\kappa)\leq \diam(X;1-\kappa)$.

Finally, $g\circ f:Y\to\R$ is 1-Lipschitz with $(g\circ f)_*\mu_Y=g_*\mu_X$, so
$$
\diam(g_*\mu_X;1-\kappa)=\diam((g\circ f)_*\mu_Y;1-\kappa)\leq \obdiam(Y;\kappa),
$$
and taking the supremum over $g$ gives the third claim.
\end{proof}

\ifSubfilesClassLoaded{
\printbibliography[title={References}]
}
@@ -1,48 +1,48 @@
"""
|
"""
|
||||||
plot the probability of the entropy of the reduced density matrix of the pure state being greater than log2(d_A) - alpha - beta
|
plot the probability of the entropy of the reduced density matrix of the pure state being greater than log2(d_A) - alpha - beta
|
||||||
for different alpha values
|
for different alpha values
|
||||||
|
|
||||||
IGNORE THE CONSTANT C
|
IGNORE THE CONSTANT C
|
||||||
|
|
||||||
NOTE there is bug in the program, You should fix it if you want to use the visualization, it relates to the alpha range and you should not plot the prob of 0
|
NOTE there is bug in the program, You should fix it if you want to use the visualization, it relates to the alpha range and you should not plot the prob of 0
|
||||||
"""
|
"""
|
||||||
|
|
||||||
import numpy as np
|
import numpy as np
|
||||||
import matplotlib.pyplot as plt
|
import matplotlib.pyplot as plt
|
||||||
from quantum_states import sample_and_calculate
|
from quantum_states import sample_and_calculate
|
||||||
from tqdm import tqdm
|
from tqdm import tqdm
|
||||||
|
|
||||||
# Set dimensions
|
# Set dimensions
|
||||||
db = 16
|
db = 16
|
||||||
da_values = [8, 16, 32]
|
da_values = [8, 16, 32]
|
||||||
alpha_range = np.linspace(0, 2, 100) # Range of alpha values to plot
|
alpha_range = np.linspace(0, 2, 100) # Range of alpha values to plot
|
||||||
n_samples = 100000
|
n_samples = 100000
|
||||||
|
|
||||||
plt.figure(figsize=(10, 6))
|
plt.figure(figsize=(10, 6))
|
||||||
|
|
||||||
for da in tqdm(da_values, desc="Processing d_A values"):
|
for da in tqdm(da_values, desc="Processing d_A values"):
|
||||||
# Calculate beta according to the formula
|
# Calculate beta according to the formula
|
||||||
beta = da / (np.log(2) * db)
|
beta = da / (np.log(2) * db)
|
||||||
|
|
||||||
# Calculate probability for each alpha
|
# Calculate probability for each alpha
|
||||||
predicted_probabilities = []
|
predicted_probabilities = []
|
||||||
actual_probabilities = []
|
actual_probabilities = []
|
||||||
for alpha in tqdm(alpha_range, desc=f"Calculating probabilities for d_A={da}", leave=False):
|
for alpha in tqdm(alpha_range, desc=f"Calculating probabilities for d_A={da}", leave=False):
|
||||||
# Calculate probability according to the formula
|
# Calculate probability according to the formula
|
||||||
# Ignoring constant C as requested
|
# Ignoring constant C as requested
|
||||||
prob = np.exp(-(da * db - 1) * alpha**2 / (np.log2(da))**2)
|
prob = np.exp(-(da * db - 1) * alpha**2 / (np.log2(da))**2)
|
||||||
predicted_probabilities.append(prob)
|
predicted_probabilities.append(prob)
|
||||||
# Calculate actual probability
|
# Calculate actual probability
|
||||||
entropies = sample_and_calculate(da, db, n_samples=n_samples)
|
entropies = sample_and_calculate(da, db, n_samples=n_samples)
|
||||||
actual_probabilities.append(np.sum(entropies > np.log2(da) - alpha - beta) / n_samples)
|
actual_probabilities.append(np.sum(entropies > np.log2(da) - alpha - beta) / n_samples)
|
||||||
|
|
||||||
# plt.plot(alpha_range, predicted_probabilities, label=f'$d_A={da}$', linestyle='--')
|
# plt.plot(alpha_range, predicted_probabilities, label=f'$d_A={da}$', linestyle='--')
|
||||||
plt.plot(alpha_range, actual_probabilities, label=f'$d_A={da}$', linestyle='-')
|
plt.plot(alpha_range, actual_probabilities, label=f'$d_A={da}$', linestyle='-')
|
||||||
|
|
||||||
plt.xlabel(r'$\alpha$')
|
plt.xlabel(r'$\alpha$')
|
||||||
plt.ylabel('Probability')
|
plt.ylabel('Probability')
|
||||||
plt.title(r'$\operatorname{Pr}[H(\psi_A) <\log_2(d_A)-\alpha-\beta]$ vs $\alpha$ for different $d_A$')
|
plt.title(r'$\operatorname{Pr}[H(\psi_A) <\log_2(d_A)-\alpha-\beta]$ vs $\alpha$ for different $d_A$')
|
||||||
plt.legend()
|
plt.legend()
|
||||||
plt.grid(True)
|
plt.grid(True)
|
||||||
plt.yscale('log') # Use log scale for better visualization
|
plt.yscale('log') # Use log scale for better visualization
|
||||||
plt.show()
|
plt.show()
|
||||||
@@ -1,52 +1,52 @@
"""
|
"""
|
||||||
plot the probability of the entropy of the reduced density matrix of the pure state being greater than log2(d_A) - alpha - beta
|
plot the probability of the entropy of the reduced density matrix of the pure state being greater than log2(d_A) - alpha - beta
|
||||||
|
|
||||||
for different d_A values, with fixed alpha and d_B Note, d_B>d_A
|
for different d_A values, with fixed alpha and d_B Note, d_B>d_A
|
||||||
"""
|
"""
|
||||||
|
|
||||||
import numpy as np
|
import numpy as np
|
||||||
import matplotlib.pyplot as plt
|
import matplotlib.pyplot as plt
|
||||||
from quantum_states import sample_and_calculate
|
from quantum_states import sample_and_calculate
|
||||||
from tqdm import tqdm
|
from tqdm import tqdm
|
||||||
|
|
||||||
# Set dimensions
|
# Set dimensions
|
||||||
db = 32
|
db = 32
|
||||||
alpha = 0
|
alpha = 0
|
||||||
da_range = np.arange(2, 10, 1) # Range of d_A values to plot
|
da_range = np.arange(2, 10, 1) # Range of d_A values to plot
|
||||||
n_samples = 1000000
|
n_samples = 1000000
|
||||||
|
|
||||||
plt.figure(figsize=(10, 6))
|
plt.figure(figsize=(10, 6))
|
||||||
|
|
||||||
predicted_probabilities = []
|
predicted_probabilities = []
|
||||||
actual_probabilities = []
|
actual_probabilities = []
|
||||||
|
|
||||||
for da in tqdm(da_range, desc="Processing d_A values"):
|
for da in tqdm(da_range, desc="Processing d_A values"):
|
||||||
# Calculate beta according to the formula
|
# Calculate beta according to the formula
|
||||||
beta = da / (np.log(2) * db)
|
beta = da / (np.log(2) * db)
|
||||||
|
|
||||||
# Calculate probability according to the formula
|
# Calculate probability according to the formula
|
||||||
# Ignoring constant C as requested
|
# Ignoring constant C as requested
|
||||||
prob = np.exp(-((da * db - 1) * alpha**2 / (np.log2(da)**2)))
|
prob = np.exp(-((da * db - 1) * alpha**2 / (np.log2(da)**2)))
|
||||||
predicted_probabilities.append(prob)
|
predicted_probabilities.append(prob)
|
||||||
# Calculate actual probability
|
# Calculate actual probability
|
||||||
entropies = sample_and_calculate(da, db, n_samples=n_samples)
|
entropies = sample_and_calculate(da, db, n_samples=n_samples)
|
||||||
count = np.sum(entropies < np.log2(da) - alpha - beta)
|
count = np.sum(entropies < np.log2(da) - alpha - beta)
|
||||||
# early stop if count is 0
|
# early stop if count is 0
|
||||||
if count != 0:
|
if count != 0:
|
||||||
actual_probabilities.append(count / n_samples)
|
actual_probabilities.append(count / n_samples)
|
||||||
else:
|
else:
|
||||||
actual_probabilities.extend([np.nan] * (len(da_range) - len(actual_probabilities)))
|
actual_probabilities.extend([np.nan] * (len(da_range) - len(actual_probabilities)))
|
||||||
break
|
break
|
||||||
# debug
|
# debug
|
||||||
print(f'da={da}, theoretical_prob={prob}, threshold={np.log2(da) - alpha - beta}, actual_prob={actual_probabilities[-1]}, entropy_heads={entropies[:10]}')
|
print(f'da={da}, theoretical_prob={prob}, threshold={np.log2(da) - alpha - beta}, actual_prob={actual_probabilities[-1]}, entropy_heads={entropies[:10]}')
|
||||||
|
|
||||||
# plt.plot(da_range, predicted_probabilities, label=f'$d_A={da}$', linestyle='--')
|
# plt.plot(da_range, predicted_probabilities, label=f'$d_A={da}$', linestyle='--')
|
||||||
plt.plot(da_range, actual_probabilities, label=f'$d_A={da}$', linestyle='-')
|
plt.plot(da_range, actual_probabilities, label=f'$d_A={da}$', linestyle='-')
|
||||||
|
|
||||||
plt.xlabel(r'$d_A$')
|
plt.xlabel(r'$d_A$')
|
||||||
plt.ylabel('Probability')
|
plt.ylabel('Probability')
|
||||||
plt.title(r'$\operatorname{Pr}[H(\psi_A) < \log_2(d_A)-\alpha-\beta]$ vs $d_A$ for fixed $\alpha=$'+str(alpha)+r' and $d_B=$' +str(db)+ r' with $n=$' +str(n_samples))
|
plt.title(r'$\operatorname{Pr}[H(\psi_A) < \log_2(d_A)-\alpha-\beta]$ vs $d_A$ for fixed $\alpha=$'+str(alpha)+r' and $d_B=$' +str(db)+ r' with $n=$' +str(n_samples))
|
||||||
# plt.legend()
|
# plt.legend()
|
||||||
plt.grid(True)
|
plt.grid(True)
|
||||||
plt.yscale('log') # Use log scale for better visualization
|
plt.yscale('log') # Use log scale for better visualization
|
||||||
plt.show()
|
plt.show()
|
||||||
@@ -1,55 +1,55 @@
import numpy as np
import matplotlib.pyplot as plt
from quantum_states import sample_and_calculate
from tqdm import tqdm

# Set dimensions, keep db >= da >= 3
db = 64
da_values = [4, 8, 16, 32]
da_colors = ['b', 'g', 'r', 'c']
n_samples = 100000

plt.figure(figsize=(10, 6))

# Define range of deviations to test (in bits)
deviations = np.linspace(0, 1, 50)  # Test deviations from 0 to 1 bits

for i, da in enumerate(tqdm(da_values, desc="Processing d_A values")):
    # Calculate maximal entropy
    max_entropy = np.log2(min(da, db))

    # Sample random states and calculate their entropies
    entropies = sample_and_calculate(da, db, n_samples=n_samples)

    # Calculate probabilities for each deviation
    probabilities = []
    theoretical_probs = []
    for dev in deviations:
        # Count states that deviate by more than dev bits from max entropy
        count = np.sum(max_entropy - entropies > dev)
        # Omit the case where count is 0 (NaN is skipped on the log scale)
        if count != 0:
            probabilities.append(count / len(entropies))
        else:
            probabilities.append(np.nan)

        # Calculate theoretical probability using the concentration inequality;
        # note max_entropy - dev = max_entropy - beta - alpha, so alpha = dev - beta
        beta = da / (np.log(2) * db)
        alpha = dev - beta
        theoretical_prob = np.exp(-(da * db - 1) * alpha**2 / (np.log2(da))**2)
        # # debug
        # print(f"dev: {dev}, beta: {beta}, alpha: {alpha}, theoretical_prob: {theoretical_prob}")
        theoretical_probs.append(theoretical_prob)

    plt.plot(deviations, probabilities, '-', label=f'$d_A={da}$ (simulated)', color=da_colors[i])
    plt.plot(deviations, theoretical_probs, '--', label=f'$d_A={da}$ (theoretical)', color=da_colors[i])

plt.xlabel('Deviation from maximal entropy (bits)')
plt.ylabel('Probability')
plt.title(f'Probability of deviation from maximal entropy simulation with sample size {n_samples} for $d_B={db}$ ignoring the constant $C$')
plt.legend()
plt.grid(True)
plt.yscale('log')  # Use log scale for better visualization
plt.show()
@@ -1,33 +1,33 @@
import numpy as np
import matplotlib.pyplot as plt
from quantum_states import sample_and_calculate
from tqdm import tqdm

# Define range of dimensions to test
fixed_dim = 64
dimensions = np.arange(2, 64, 2)  # Test dimensions from 2 to 62 in steps of 2
expected_entropies = []
theoretical_entropies = []
predicted_entropies = []

# Calculate entropies for each dimension
for dim in tqdm(dimensions, desc="Calculating entropies"):
    # Vary the dimension of subsystem A while keeping subsystem B fixed at fixed_dim
    entropies = sample_and_calculate(dim, fixed_dim, n_samples=1000)
    expected_entropies.append(np.mean(entropies))
    theoretical_entropies.append(np.log2(min(dim, fixed_dim)))
    beta = min(dim, fixed_dim) / (2 * np.log(2) * max(dim, fixed_dim))
    predicted_entropies.append(np.log2(min(dim, fixed_dim)) - beta)

# Create the plot
plt.figure(figsize=(10, 6))
plt.plot(dimensions, expected_entropies, 'b-', label='Expected Entropy')
plt.plot(dimensions, theoretical_entropies, 'r--', label='Theoretical Entropy')
plt.plot(dimensions, predicted_entropies, 'g--', label='Predicted Entropy')
plt.xlabel('Dimension of Subsystem A')
plt.ylabel('von Neumann Entropy (bits)')
plt.title(f'von Neumann Entropy vs. System Dimension, with Dimension of Subsystem B = {fixed_dim}')
plt.legend()
plt.grid(True)
plt.show()
@@ -1,51 +1,51 @@
import numpy as np
import matplotlib.pyplot as plt
from quantum_states import sample_and_calculate
from tqdm import tqdm
from mpl_toolkits.mplot3d import Axes3D

# Define range of dimensions to test
dimensionsA = np.arange(2, 64, 2)  # Test dimensions from 2 to 62 in steps of 2
dimensionsB = np.arange(2, 64, 2)  # Test dimensions from 2 to 62 in steps of 2

# Create meshgrid for 3D plot
X, Y = np.meshgrid(dimensionsA, dimensionsB)
Z = np.zeros_like(X, dtype=float)

# Calculate entropies for each dimension combination
total_iterations = len(dimensionsA) * len(dimensionsB)
pbar = tqdm(total=total_iterations, desc="Calculating entropies")

for i, dim_a in enumerate(dimensionsA):
    for j, dim_b in enumerate(dimensionsB):
        entropies = sample_and_calculate(dim_a, dim_b, n_samples=100)
        Z[j, i] = np.mean(entropies)
        pbar.update(1)
pbar.close()

# Create the 3D plot
fig = plt.figure(figsize=(12, 8))
ax = fig.add_subplot(111, projection='3d')

# Plot the surface
surf = ax.plot_surface(X, Y, Z, cmap='viridis')

# Add labels and title with larger font sizes
ax.set_xlabel('Dimension of Subsystem A', fontsize=12, labelpad=10)
ax.set_ylabel('Dimension of Subsystem B', fontsize=12, labelpad=10)
ax.set_zlabel('von Neumann Entropy (bits)', fontsize=12, labelpad=10)
ax.set_title('von Neumann Entropy vs. System Dimensions', fontsize=14, pad=20)

# Add colorbar
cbar = fig.colorbar(surf, ax=ax, label='Entropy')
cbar.ax.set_ylabel('Entropy', fontsize=12)

# Add tick labels with larger font size
ax.tick_params(axis='x', labelsize=10)
ax.tick_params(axis='y', labelsize=10)
ax.tick_params(axis='z', labelsize=10)

# Rotate the plot for better visibility
ax.view_init(elev=30, azim=45)

plt.show()
@@ -1,96 +1,96 @@
import numpy as np
from scipy.linalg import sqrtm
from scipy.stats import unitary_group
from tqdm import tqdm

def random_pure_state(dim_a, dim_b):
    """
    Generate a random pure state for a bipartite system.

    The random pure state is uniformly distributed with respect to the Haar
    (Fubini-Study) measure, i.e. uniform on the unit sphere of real dimension
    2 * dim_a * dim_b - 1, which is invariant under the full unitary group
    U(dim_a * dim_b).

    Args:
        dim_a (int): Dimension of subsystem A
        dim_b (int): Dimension of subsystem B

    Returns:
        numpy.ndarray: Random pure state vector of shape (dim_a * dim_b,)
    """
    # Total dimension of the composite system
    dim_total = dim_a * dim_b

    # Generate non-zero random complex vector
    while True:
        state = np.random.normal(size=(dim_total,)) + 1j * np.random.normal(size=(dim_total,))
        if np.linalg.norm(state) > 0:
            break

    # Normalize the state
    state = state / np.linalg.norm(state)

    return state

def von_neumann_entropy_bipartite_pure_state(state, dim_a, dim_b):
    """
    Calculate the von Neumann entropy of the reduced density matrix.

    Args:
        state (numpy.ndarray): Pure state vector
        dim_a (int): Dimension of subsystem A
        dim_b (int): Dimension of subsystem B
|
|
||||||
Returns:
|
Returns:
|
||||||
float: Von Neumann entropy
|
float: Von Neumann entropy
|
||||||
"""
|
"""
|
||||||
# Reshape state vector to matrix form
|
# Reshape state vector to matrix form
|
||||||
state_matrix = state.reshape(dim_a, dim_b)
|
state_matrix = state.reshape(dim_a, dim_b)
|
||||||
|
|
||||||
# Calculate reduced density matrix of subsystem A
|
# Calculate reduced density matrix of subsystem A
|
||||||
rho_a = np.dot(state_matrix, state_matrix.conj().T)
|
rho_a = np.dot(state_matrix, state_matrix.conj().T)
|
||||||
|
|
||||||
# Calculate eigenvalues
|
# Calculate eigenvalues
|
||||||
eigenvals = np.linalg.eigvalsh(rho_a)
|
eigenvals = np.linalg.eigvalsh(rho_a)
|
||||||
|
|
||||||
# Remove very small eigenvalues (numerical errors)
|
# Remove very small eigenvalues (numerical errors)
|
||||||
eigenvals = eigenvals[eigenvals > 1e-15]
|
eigenvals = eigenvals[eigenvals > 1e-15]
|
||||||
|
|
||||||
# Calculate von Neumann entropy
|
# Calculate von Neumann entropy
|
||||||
entropy = -np.sum(eigenvals * np.log2(eigenvals))
|
entropy = -np.sum(eigenvals * np.log2(eigenvals))
|
||||||
|
|
||||||
return np.real(entropy)
|
return np.real(entropy)
|
||||||
|
|
||||||
def sample_and_calculate(dim_a, dim_b, n_samples=1000):
|
def sample_and_calculate(dim_a, dim_b, n_samples=1000):
|
||||||
"""
|
"""
|
||||||
Sample random pure states (generate random co) and calculate their von Neumann entropy.
|
Sample random pure states (generate random co) and calculate their von Neumann entropy.
|
||||||
|
|
||||||
Args:
|
Args:
|
||||||
dim_a (int): Dimension of subsystem A
|
dim_a (int): Dimension of subsystem A
|
||||||
dim_b (int): Dimension of subsystem B
|
dim_b (int): Dimension of subsystem B
|
||||||
n_samples (int): Number of samples to generate
|
n_samples (int): Number of samples to generate
|
||||||
|
|
||||||
Returns:
|
Returns:
|
||||||
numpy.ndarray: Array of entropy values
|
numpy.ndarray: Array of entropy values
|
||||||
"""
|
"""
|
||||||
entropies = np.zeros(n_samples)
|
entropies = np.zeros(n_samples)
|
||||||
|
|
||||||
for i in tqdm(range(n_samples), desc=f"Sampling states (d_A={dim_a}, d_B={dim_b})", leave=False):
|
for i in tqdm(range(n_samples), desc=f"Sampling states (d_A={dim_a}, d_B={dim_b})", leave=False):
|
||||||
state = random_pure_state(dim_a, dim_b)
|
state = random_pure_state(dim_a, dim_b)
|
||||||
entropies[i] = von_neumann_entropy_bipartite_pure_state(state, dim_a, dim_b)
|
entropies[i] = von_neumann_entropy_bipartite_pure_state(state, dim_a, dim_b)
|
||||||
|
|
||||||
return entropies
|
return entropies
|
||||||
|
|
||||||
# Example usage:
|
# Example usage:
|
||||||
if __name__ == "__main__":
|
if __name__ == "__main__":
|
||||||
# Example: 2-qubit system
|
# Example: 2-qubit system
|
||||||
dim_a, dim_b = 50,100
|
dim_a, dim_b = 50,100
|
||||||
|
|
||||||
# Generate single random state and calculate entropy
|
# Generate single random state and calculate entropy
|
||||||
state = random_pure_state(dim_a, dim_b)
|
state = random_pure_state(dim_a, dim_b)
|
||||||
entropy = von_neumann_entropy_bipartite_pure_state(state, dim_a, dim_b)
|
entropy = von_neumann_entropy_bipartite_pure_state(state, dim_a, dim_b)
|
||||||
print(f"Single state entropy: {entropy}")
|
print(f"Single state entropy: {entropy}")
|
||||||
|
|
||||||
# Sample multiple states
|
# Sample multiple states
|
||||||
entropies = sample_and_calculate(dim_a, dim_b, n_samples=1000)
|
entropies = sample_and_calculate(dim_a, dim_b, n_samples=1000)
|
||||||
print(f"Expected entropy: {np.mean(entropies)}")
|
print(f"Expected entropy: {np.mean(entropies)}")
|
||||||
print(f"Theoretical entropy: {np.log2(max(dim_a, dim_b))}")
|
print(f"Theoretical entropy: {np.log2(max(dim_a, dim_b))}")
|
||||||
print(f"Standard deviation: {np.std(entropies)}")
|
print(f"Standard deviation: {np.std(entropies)}")
|
||||||
|
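For reference, the average entanglement entropy estimated above also has an exact closed form due to Page, in terms of the digamma function. A minimal sketch (the function name `page_entropy` is mine, not part of the script above):

```python
import numpy as np
from scipy.special import digamma

def page_entropy(dim_a, dim_b):
    """Page's exact average entanglement entropy (in bits) of a Haar-random
    bipartite pure state: (psi(mn + 1) - psi(n + 1) - (m - 1)/(2n)) / ln 2,
    with m = min(dim_a, dim_b) and n = max(dim_a, dim_b)."""
    m, n = min(dim_a, dim_b), max(dim_a, dim_b)
    return (digamma(m * n + 1) - digamma(n + 1) - (m - 1) / (2 * n)) / np.log(2)

# Two qubits: the known value is 1/3 nat, about 0.4809 bits
print(page_entropy(2, 2))
print(page_entropy(50, 100))
```

For d_A = 50, d_B = 100 this gives a value strictly below log2(50), matching the sample mean far better than log2(max(d_A, d_B)).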
@@ -1,32 +1,32 @@
# unit test for the functions in quantum_states.py

import unittest
import numpy as np
from quantum_states import random_pure_state, von_neumann_entropy_bipartite_pure_state


class LearningCase(unittest.TestCase):
    def test_random_pure_state_shape_and_norm(self):
        dim_a = 2
        dim_b = 2
        state = random_pure_state(dim_a, dim_b)
        self.assertEqual(state.shape, (dim_a * dim_b,))
        self.assertAlmostEqual(np.linalg.norm(state), 1)

    def test_partial_trace_entropy(self):
        # entanglement entropy of a pure state is symmetric in the two subsystems
        dim_a = 2
        dim_b = 2
        state = random_pure_state(dim_a, dim_b)
        self.assertAlmostEqual(
            von_neumann_entropy_bipartite_pure_state(state, dim_a, dim_b),
            von_neumann_entropy_bipartite_pure_state(state, dim_b, dim_a),
        )

    def test_sample_uniformly(self):
        # calculate the distribution of the random pure state
        dim_a = 2
        dim_b = 2
        state = random_pure_state(dim_a, dim_b)


def main():
    unittest.main()

if __name__ == "__main__":
    main()
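Two deterministic edge cases would complement the statistical tests above: a product state has zero entanglement entropy, and a maximally entangled state saturates log2 of the subsystem dimension. A sketch of such checks, with the entropy re-implemented inline (mirroring the partial-trace computation in `quantum_states.py`) so it runs standalone:

```python
import numpy as np

def entropy_bits(state, dim_a, dim_b):
    # eigenvalues of the reduced density matrix of subsystem A
    m = state.reshape(dim_a, dim_b)
    w = np.linalg.eigvalsh(m @ m.conj().T)
    w = w[w > 1e-15]  # drop numerically zero eigenvalues
    return float(-np.sum(w * np.log2(w)))

bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)  # (|00> + |11>)/sqrt(2)
product = np.array([1, 0, 0, 0], dtype=complex)            # |00>, a product state

print(entropy_bits(bell, 2, 2))     # maximally entangled: 1 bit
print(entropy_bits(product, 2, 2))  # product state: 0 bits
```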
main.tex
@@ -75,6 +75,12 @@
 \newcommand{\charac}{\operatorname{char}} % characteristic of a field
 \newcommand{\st}{\ensuremath{\,:\,}} % Makes the colon in set-builder notation space properly
 
+%%%%%%%%%%%%%%%%%%%%%%
+% These commands are for convenient notation for the concentration of measure theorem
+\newcommand{\obdiam}{\operatorname{ObserDiam}}
+\newcommand{\diam}{\operatorname{diam}}
+
+
 %%%%%%%%%%%%%%%%%%%%%%
 % These commands create theorem-like environments.
 \newtheorem{theorem}{Theorem}
@@ -90,7 +96,6 @@
 \frontmatter
 \maketitle
 \tableofcontents
 
 \mainmatter
-
 % Each chapter is in its own file and included as a subfile.
@@ -98,7 +103,7 @@
 \subfile{chapters/chap0}
 \subfile{chapters/chap1}
 \subfile{chapters/chap2}
-\subfile{chapters/chap3}
+% \subfile{chapters/chap3}
 
 \backmatter
 \cleardoublepage