It follows directly from \cref{eq:exp_state_single_bit} that $\ptrans : \spanspa
\end{remark}
\subsection{Measurements}
\label{sec:measurements_probabilistic}
The final component that still needs to be expressed in the framework of linear algebra is measurement. Let us revisit what measuring actually means in our case. The computational model described in \cref{sec:probabilistic_model} provides a macroscopic view of randomized computations. The result of such a randomized computation is a random state. Usually, the question of interest is whether a computation outputs a desired state given a specific input, which entails the correctness of said computation. For randomized computations, such an analysis requires the distribution of the final random state. Superposition states encode exactly this distribution. Asking ``How likely is it to end up in state $\mathbf{k}$?'' corresponds to reading off the coefficient of $\mathbf{k}$ in the final superposition $\mathbf{b}$. In the framework of linear algebra, this means calculating the scalar product $\parens{\mathbf{k}\:.\:\mathbf{b}}$.
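As a minimal numerical sketch (illustrative only, with the state vectors assumed to be expressed in the standard basis), the scalar-product measurement can be carried out as follows:

```python
import numpy as np

# Standard basis states for a single bit: b0 = (1,0), b1 = (0,1).
b0 = np.array([1.0, 0.0])
b1 = np.array([0.0, 1.0])

# A superposition state b = 0.25*b0 + 0.75*b1, i.e. a probability
# distribution over the two basis states as in the probabilistic model.
b = 0.25 * b0 + 0.75 * b1

# "How likely is it to end up in state b1?" is the scalar product (b1 . b).
p1 = np.dot(b1, b)
print(p1)  # 0.75
```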
\begin{definition}
\label{def:measurment_operator_probabilistic}
Let $\mathbf{b}\coloneqq\sum_{i=1}^N p_i \mathbf{b}_i \in\mathbf{B}^n$ with basis states $\parensc*{\mathbf{b}_i}_{i=1}^N$, then there exist $N$ operators $\hat{M}_k : \mathbf{B}^n \to[0,1]$
\begin{itemize}
\item in state space: $\hat{M}_k\parens{\mathbf{b}}=\parens*{\mathbf{b}_k \:.\:\mathbf{b}}= p_k$
The set of operations mapping $\mathscr{B}_{\R}^n$ to $\mathscr{B}_{\R}^n$ is ex
\begin{definition}{Orthogonal Computations}
A computation on the state space $\mathcal{B}_{\R}^n$ is defined by an orthogonal matrix $A \in\R^{(N,N)}$ with $N =2^n$.
\end{definition}
What does it mean for a matrix to be orthogonal? Let $A =(a_{ij})=(\mathbf{a}_1,\dots,\mathbf{a}_n)\in\R^{(n,n)}$ be orthogonal. Then it follows directly from $A^t A =(b_{ij})=\idmat$ that $b_{ij}=\mathbf{a}_i^t \mathbf{a}_j =\delta_{ij}$. Hence, the columns (and, by $AA^t =\idmat$, also the rows) of $A$ form an orthonormal basis of $\R^n$. It is also easy to check that $A$ preserves the dot product, making it angle and length preserving. Another direct consequence of $AA^t =\idmat$ is the reversibility of orthogonal computations: applying $A^t$ undoes $A$.
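These properties are easy to verify numerically. The following sketch (illustrative only; a rotation matrix serves as the example of an orthogonal computation) checks orthogonality, length preservation, and reversibility:

```python
import numpy as np

# A rotation matrix is a simple example of an orthogonal matrix.
theta = 0.3
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

x = np.array([0.6, 0.8])  # a state vector of length 1

# Orthogonality: A A^t = I, hence A^t inverts A (reversibility).
assert np.allclose(A @ A.T, np.eye(2))
assert np.allclose(A.T @ (A @ x), x)

# Length preservation: |Ax| = |x|.
print(np.linalg.norm(A @ x))  # 1.0
```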
\subsection{Measurements}
The measurement operators of \cref{sec:measurements_probabilistic} obviously need to be adjusted to the new state space as well, and they also suffered from some shortcomings. The dot-product operators of \cref{def:measurment_operator_probabilistic} leave the state space entirely when applied to a state vector. The most important reason to redesign measurements, however, are the newly introduced probability amplitudes. Simply extracting an amplitude and squaring it would be a possible solution, but an alien one to the linear framework developed so far, because squaring is not linear. The goal is to design a linear operator that fits nicely into the framework at hand.
\begin{definition}[(Real) Measurement Operators]
Let $\Omega$ be the set of possible outcomes of an experiment. Then the corresponding measurement is described by a set of linear operators $\parensc{M_m}_{m \in\Omega}\subset\R^{(N,N)}$ with:
\begin{itemize}
\item outcome probabilities: $p\parens{m}=\parens{M_m \mathbf{b}\:.\:M_m \mathbf{b}}$ for a state $\mathbf{b}\in\mathcal{B}_{\R}^n$
\item post-measurement state: $M_m \mathbf{b}/\sqrt{p\parens{m}}$
\item completeness: $\sum_{m\in\Omega} M_m^t M_m =\idmat$
\end{itemize}
\end{definition}
One of the most important special cases of measurement operators are projective measurements. As the name already suggests, projective measurements are linear projections onto subspaces of $\mathcal{B}_{\R}^n$.
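A projective measurement can be sketched numerically as follows (a minimal illustration under the assumption that the measurement operators are the projectors onto the coordinate axes of $\R^2$):

```python
import numpy as np

# Projectors onto the two coordinate axes of R^2.
M0 = np.array([[1.0, 0.0], [0.0, 0.0]])
M1 = np.array([[0.0, 0.0], [0.0, 1.0]])

# A state with probability *amplitudes* 0.6 and 0.8 (note 0.6^2 + 0.8^2 = 1).
b = np.array([0.6, 0.8])

# Projectors are idempotent (M^2 = M) and the set is complete.
assert np.allclose(M0 @ M0, M0) and np.allclose(M1 @ M1, M1)
assert np.allclose(M0.T @ M0 + M1.T @ M1, np.eye(2))

# The outcome probability is the squared length of the projected state.
p0 = np.dot(M0 @ b, M0 @ b)
print(p0)  # 0.36
```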
\subsection{Interference - Computational Power}
\subsubsection{Flipping a Coin Twice Equals Doing Nothing}
\subsubsection{Deutsch's Algorithm}
\contentsketch{amplitudes not probabilities}
\contentsketch[caption={length preserving transition matrix}]{Transition matrix is not length preserving. Length preserving matrix: orthogonal matrix -> negative coefficients -> interference -> Deutsch's algorithm (new computational power)}
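The ``flipping a coin twice equals doing nothing'' phenomenon sketched above can be illustrated numerically. The following is a minimal sketch, with the assumption that the intended coin flip is the standard balanced orthogonal transform $H=\frac{1}{\sqrt{2}}\left(\begin{smallmatrix}1&1\\1&-1\end{smallmatrix}\right)$; its negative coefficient is what enables destructive interference:

```python
import numpy as np

# An orthogonal "coin flip" acting on amplitudes rather than probabilities.
# The negative entry is the key: stochastic matrices cannot have one.
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)

assert np.allclose(H @ H.T, np.eye(2))  # H is orthogonal

# Flipping twice equals doing nothing: the two amplitude paths into the
# "wrong" outcome cancel each other (destructive interference).
assert np.allclose(H @ H, np.eye(2))

b0 = np.array([1.0, 0.0])
print(H @ H @ b0)  # [1. 0.]
```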