
interference

main
Tom Krüger 1 year ago
parent
commit
bd230b3366
5 changed files with 58 additions and 2 deletions
  1. +45
    -2
      content.tex
  2. BIN
      figures/coin_flip_probabilistic.pdf
  3. +12
    -0
      figures/coin_flip_probabilistic.tex
  4. BIN
      main.pdf
  5. +1
    -0
      main.tex

+ 45
- 2
content.tex

@@ -229,6 +229,9 @@ The final component that still needs to be expressed in the framework of linear algebra
\end{itemize}
\end{definition}
\subsection{Tensor Product}
\contentsketch{motivate, combining (state) vector spaces, stochastic matrices closed under kronecker product}
\section{Making it Quantum}
\Cref{sec:stocastic_matrix_model} formulates mathematical tools to algebraically describe the abstract model of probabilistic computations defined in \cref{sec:probabilistic_model}. This section takes the reverse approach. The tools developed in \cref{sec:stocastic_matrix_model} are based on stochastic matrices, an obvious choice for modeling probabilistic state transitions. Unfortunately, this model has some shortcomings. This section first highlights these inconveniences and then fixes them. In doing so, the model gains computational power, demonstrated by an implementation of Deutsch's algorithm. Finally, it is shown that this extended model is indeed physically realizable.
@@ -291,8 +294,48 @@ The measurement operators of \cref{sec:measurements_probabilistic} obviously als
\subsubsection{(Real) Projective Measurements}
One of the most important special cases of measurement operators is the projective measurement. As the name suggests, projective measurements are linear projections onto subspaces of $\mathcal{B}_{\R}^n$.
\subsection{Interference - Computational Power}
\subsubsection{Flipping a Coin Twice Equals Doing Nothing}
So far, moving computations from affine combinations to points on the unit sphere was motivated by the rather subjective desire to \emph{clean up} the abstract description of a computational model. In short: it is mathematically nicer to move around on the unit sphere. This section shows that by utilizing the power of probability amplitudes one actually gains computational power compared to the previous model.
\subsubsection{Reversing a Coin Flip}
%\begin{figure}
% \centering
% \includestandalone{figures/coin_flip_probabilistic}
%\end{figure}
In the classical probabilistic model a coin flip destroys any information stored in a bit, even in superposition states. It is easy to verify that $P_{\sfrac{1}{2}} = \frac{1}{2} \big(\begin{smallmatrix}1 & 1 \\ 1 & 1\end{smallmatrix}\big)$ indeed implements a 1-bit coin flip and satisfies the conditions of \cref{thm:probabilistic_matrix_representation}. Two consecutive coin flips are independent, which is illustrated by $P_{\sfrac{1}{2}} P_{\sfrac{1}{2}} = P_{\sfrac{1}{2}}$. The coin flip applied to an arbitrary superposition $\mathbf{b} = p_0 \mathbf{b}_0 + p_1 \mathbf{b}_1$ yields:
$$
P_{\sfrac{1}{2}} \mathbf{b} = p_0\frac{1}{2}(\mathbf{b}_0 + \mathbf{b}_1) + p_1 \frac{1}{2}(\mathbf{b}_0 + \mathbf{b}_1)
= (p_0 + p_1) \frac{1}{2}(\mathbf{b}_0 + \mathbf{b}_1) = \frac{1}{2}(\mathbf{b}_0 + \mathbf{b}_1)
$$
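The idempotence $P_{\sfrac{1}{2}} P_{\sfrac{1}{2}} = P_{\sfrac{1}{2}}$ claimed above can also be verified entrywise:
$$
P_{\sfrac{1}{2}} P_{\sfrac{1}{2}}
= \frac{1}{4}
\begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix}
\begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix}
= \frac{1}{4}
\begin{pmatrix} 2 & 2 \\ 2 & 2 \end{pmatrix}
= P_{\sfrac{1}{2}}
$$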
Given any input state, $P_{\sfrac{1}{2}}$ thus reaches its fixed point after a single application (note that $p_0 + p_1 = 1$). Since $P_{\sfrac{1}{2}}$ is not orthogonal, a different operator is needed for the orthogonal model: the \emph{Hadamard} operator.
\begin{definition}[Hadamard Operator]
\label{def:hadamard_gate}
\begin{equation}
H = \frac{1}{\sqrt{2}}\begin{pmatrix}
1 & 1 \\
1 & -1 \\
\end{pmatrix}
\end{equation}
\end{definition}
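Since $H$ is real and symmetric, $H^\dag = H$, and a short calculation confirms that $H$ is orthogonal and its own inverse:
$$
H^\dag H = HH
= \frac{1}{2}
\begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}
\begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}
= \frac{1}{2}
\begin{pmatrix} 2 & 0 \\ 0 & 2 \end{pmatrix}
= \idmat
$$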
The Hadamard operator is essentially $P_{\sfrac{1}{2}}$ with the global factor adjusted for probability amplitudes and with the matrix columns made orthogonal by introducing a negative phase in the second column. Precisely this negative phase will turn out to have surprisingly useful effects. It is easy to see that
$$
P\parens{H\ket{b} = \ket{0}} = \bra{b}H^\dag M_{\ket{0}} H\ket{b} = \bra{b}H^\dag M_{\ket{1}} H\ket{b} = P\parens{H\ket{b} = \ket{1}} = \frac{1}{2}
$$
for $\ket{b} \in \parensc{\ket{0}, \ket{1}}$, $M_{\ket{0}} = \ketbra{0}$ and $M_{\ket{1}} = \ketbra{1}$: in every case the relevant amplitude is $\pm\frac{1}{\sqrt{2}}$, which squares to $\frac{1}{2}$. So, $H$ does indeed implement a fair coin flip. But contrary to $P_{\sfrac{1}{2}}$, the Hadamard operator does not destroy the information stored in a superposition.
\begin{equation}
\label{eq:hadamard_on_superposition}
\begin{aligned}
H\parens*{a_0 \ket{0} + a_1 \ket{1}} &= \frac{a_0}{\sqrt{2}}\parens{\ket{0} + \ket{1}} + \frac{a_1}{\sqrt{2}}\parens{\ket{0} - \ket{1}} \\
&= \frac{1}{\sqrt{2}}\parens{(a_0 + a_1)\ket{0} + (a_0 - a_1)\ket{1}}
\end{aligned}
\end{equation}
Applying $H$ a second time completely reverses the computation. This is a consequence of $HH = \idmat$ (verified above) and might seem strange at first, but it is in fact a trivial consequence of orthogonal operators representing rotations and rotoreflections, both of which are obviously reversible. Thus, the first $H$ cannot have destroyed any information. It is nevertheless instructive to see how this happens. After the second application of $H$ the state is:
\begin{multline}
\label{eq:2_hadamards_on_superposition}
\frac{1}{2}\parens{(a_0 + a_1)(\ket{0} + \ket{1}) + (a_0 - a_1)(\ket{0} - \ket{1})} \\
= \frac{1}{2}\parens{(a_0 + a_0 + a_1 - a_1)\ket{0} + (a_0 - a_0 + a_1 + a_1)\ket{1}} =
a_0 \ket{0} + a_1 \ket{1}
\end{multline}
The key observation can already be made in \cref{eq:hadamard_on_superposition}: probability amplitudes can destructively interfere with each other. In \cref{eq:hadamard_on_superposition} this can be seen in the term $(a_0 - a_1)$, and in \cref{eq:2_hadamards_on_superposition} the amplitudes cancel each other out exactly to restore the original input state. It cannot be stressed enough: probability amplitudes are not probabilities. Destructive interference is not possible with the stochastic matrices from \cref{sec:stocastic_matrix_model}, as all their entries are non-negative.
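A concrete instance makes the cancellation visible: applying $H$ to the uniform superposition with $a_0 = a_1 = \frac{1}{\sqrt{2}}$ erases the $\ket{1}$ amplitude entirely,
$$
H\parens*{\frac{1}{\sqrt{2}}\ket{0} + \frac{1}{\sqrt{2}}\ket{1}}
= \frac{1}{2}\parens{(1 + 1)\ket{0} + (1 - 1)\ket{1}}
= \ket{0}
$$
so a subsequent measurement yields $0$ with certainty, even though both computational paths contribute to $\ket{1}$. The next section shows how interference effects can be utilized to outperform any probabilistic computation.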
\subsubsection{Deutsch's Algorithm}

BIN
figures/coin_flip_probabilistic.pdf


+ 12
- 0
figures/coin_flip_probabilistic.tex

@@ -0,0 +1,12 @@
\documentclass{standalone}
\usepackage{tikz}
\usetikzlibrary{graphs, graphdrawing, graphdrawing.trees}
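% note: the graphdrawing library requires compiling with LuaLaTeX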
\begin{document}
\begin{tikzpicture}
\graph[tree layout]{
A -> {B , C}
};
\end{tikzpicture}
\end{document}

BIN
main.pdf


+ 1
- 0
main.tex

@@ -5,6 +5,7 @@
\usepackage{lmodern}
\usepackage{amsmath, amsthm, amssymb, amsfonts}
\usepackage{xfrac}
\usepackage{dsfont}
\usepackage[mathscr]{euscript}
\usepackage{mathtools}

