Expectation values with the density matrix

Hello! Welcome back to our post series on the density matrix. In the previous post, we introduced the density matrix and saw how it serves as a tool for describing ensembles of spins where the ket formalism may break down, and we constructed the density matrix for some standard spin states. However, it may still be unclear how to extract observables from the density matrix, or how the density matrix evolves in time – both key concepts for actually using the density matrix to understand magnetic resonance experiments. In this post, we will see how the density matrix can be used to calculate expectation values (and thus extract observables), while the next post will explore time evolution in the density matrix formalism.

Matrix elements of the density matrix

Previously, we showed how to calculate the density matrix for a state \ket{\psi} by expressing \ket{\psi} as a column vector and \bra{\psi} as a row vector, then taking their outer product to express \hat{\rho} = \overline{\ket{\psi}\bra{\psi}} as a matrix. We can also calculate the matrix elements of \hat{\rho} directly by remembering that \hat{\rho} is an operator. Recall that the matrix elements of an operator \hat{A} can be expressed as

(1)   \begin{equation*}     A_{ij} = \bra{i}\hat{A}\ket{j}\end{equation*}

where \ket{i} and \ket{j} are the basis states we are using to express the operator. Extending this definition to the density matrix, we get

(2)   \begin{equation*}     \rho_{ij} = \left\langle i \vert \psi \right\rangle \left\langle \psi \vert j \right\rangle\end{equation*}

From this form, we see that the matrix elements of the density matrix for a state \ket{\psi} are products of the overlaps between the basis states and the state \ket{\psi}.
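
To make Eq. 2 concrete, here is a short Python/NumPy sketch that builds \hat{\rho} element by element from the overlaps \left\langle i \vert \psi \right\rangle and \left\langle \psi \vert j \right\rangle for the example state \ket{+x} = \left(\ket{\alpha} + \ket{\beta}\right)/\sqrt{2}, and compares the result to the outer product \ket{\psi}\bra{\psi} (the choice of state and the variable names are just illustrative):

# Illustrative sketch: matrix elements of rho from Eq. 2 (Python/NumPy)
import numpy as np

# Basis kets of the I_z basis, |alpha> and |beta>, as vectors
alpha = np.array([1, 0])
beta  = np.array([0, 1])
basis = [alpha, beta]

# Example state |psi> = |+x> = (|alpha> + |beta>)/sqrt(2)
psi = (alpha + beta)/np.sqrt(2)

# rho_ij = <i|psi><psi|j>, built element by element
rho_elements = np.array([[np.vdot(i, psi)*np.vdot(psi, j) for j in basis]
                         for i in basis])

# Same matrix from the outer product |psi><psi|
rho_outer = np.outer(psi, psi.conj())

print(rho_elements)   # both give [[1/2, 1/2], [1/2, 1/2]]
print(rho_outer)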

Expectation values from the density matrix

With the form of the density matrix introduced in Eq. 2, we can now calculate the expectation value of an observable associated with operator \hat{A} using our density matrix formalism. First, we recall two things. One, the expectation value of an operator \hat{A} for a state \ket{\psi} is

(3)   \begin{equation*}     \langle\hat{A}\rangle = \bra{\psi}\hat{A}\ket{\psi}\end{equation*}

and two, we can express the identity as a sum of the outer products of our basis states with themselves

(4)   \begin{equation*}\hat{ \mathds{1}} = \sum_i \ket{i}\bra{i}\end{equation*}

We will then use the fact that we can insert a copy (or two) of the identity into our equations without changing the equality (i.e. multiplying by 1 has no effect!).

(5)   \begin{align*} \langle\hat{A}\rangle = \bra{\psi}\hat{A}\ket{\psi} & = \sum_{i,j} \left\langle \psi \vert i \right\rangle \bra{i}\hat{A}\ket{j} \left\langle j \vert \psi \right\rangle \\ & = \sum_{i,j} A_{ij} \rho_{ji} \\ & = \textrm{Tr}(\hat{A} \hat{\rho})\end{align*}

In the above, we have used that inner products are complex numbers, so their multiplication is commutative. We have also introduced the trace of a matrix, which is the sum of its diagonal elements. In our case, we are tracing over the matrix product of our observable operator \hat{A} and our density matrix \hat{\rho}. Since the trace is an easy value to calculate (simply a sum along the diagonal), Eq. 5 provides a straightforward expression for using the density matrix to find expectation values.
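
If you would like to verify Eq. 5 numerically, the short Python/NumPy sketch below compares the bra-ket expression \bra{\psi}\hat{A}\ket{\psi} with the trace expression \textrm{Tr}(\hat{A}\hat{\rho}) for a randomly chosen normalized spin-1/2 state and the operator \hat{I}_x (the random state is just an illustrative choice):

# Illustrative numerical check of Eq. 5 (Python/NumPy)
import numpy as np

rng = np.random.default_rng(0)

# A randomly chosen normalized spin-1/2 state
psi = rng.standard_normal(2) + 1j*rng.standard_normal(2)
psi = psi/np.linalg.norm(psi)

# Density matrix and the I_x operator (using hbar = 1)
rho = np.outer(psi, psi.conj())
I_x = np.array([[0, 1/2], [1/2, 0]])

# Expectation value two ways: bra-ket form (Eq. 3) and trace form (Eq. 5)
expval_braket = np.vdot(psi, I_x @ psi).real
expval_trace  = np.trace(I_x @ rho).real

print(expval_braket, expval_trace)   # the two values agree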

There are multiple ways to show that \hat{\mathds{1}} = \sum_i \ket{i}\bra{i} is a valid equation. One approach is to calculate the outer product for each basis state expressed in vector notation. The n^\textrm{th} basis state, \ket{i_n}, can be expressed in vector notation as

(6)   \begin{equation*} \ket{i_n} = \begin{pmatrix} 0 \\ \vdots \\ 0 \\ 1 \\ 0 \\ \vdots \\ 0 \end{pmatrix}\end{equation*}

where we have a 1 in the n^\textrm{th} position and a 0 everywhere else. Then, it is clear that the outer product \ket{i_n}\bra{i_n} will be a matrix of all 0's, except for the n^\textrm{th} element of the diagonal, which will be a 1. Summing over all of the basis states then gives a matrix with 1's along the diagonal and 0's everywhere else. This is the identity matrix for our space, so we must have

(7)   \begin{equation*} \hat{\mathds{1}} = \sum_i \ket{i}\bra{i}\end{equation*}
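
As a quick numerical illustration of Eq. 7, the Python/NumPy sketch below sums the outer products \ket{i_n}\bra{i_n} of the standard basis vectors and confirms that the result is the identity matrix (the dimension of 4 is an arbitrary choice):

# Illustrative check of Eq. 7 (Python/NumPy)
import numpy as np

dim = 4   # arbitrary dimension chosen for illustration

# Standard basis vectors |i_n>: a 1 in the n-th slot, 0 everywhere else
basis = np.eye(dim)

# Sum the outer products |i_n><i_n| over all basis states
identity_sum = sum(np.outer(e, e) for e in basis)

print(np.allclose(identity_sum, np.eye(dim)))   # True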

Example calculations

We will now use the expression in Eq. 5 to calculate expectation values for some example states and operators. We will consider the expectation value of the operator \hat{I}_x for the states \ket{\psi_1}=\ket{+z} = \ket{\alpha} and \ket{\psi_2}=\ket{-x}. First, we need to find the density matrix for each of these states, working in the \hat{I}_z basis as usual. For \ket{\psi_1}=\ket{+z}, we saw in the previous post that we have

(8)   \begin{equation*}    \hat{\rho}_1 = \ket{+z}\bra{+z} = \begin{pmatrix} 1 \\ 0 \end{pmatrix} \begin{pmatrix} 1 & 0 \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}\end{equation*}

while for \ket{\psi_2}=\ket{-x} we have

(9)   \begin{equation*} \hat{\rho}_2 = \ket{-x}\bra{-x} = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 \\ -1 \end{pmatrix} \frac{1}{\sqrt{2}} \begin{pmatrix} 1 & -1 \end{pmatrix} = \begin{pmatrix} 1/2 & -1/2 \\ -1/2 & 1/2 \end{pmatrix}\end{equation*}

As we have previously seen, we can express the operator \hat{I}_x in matrix form as

    \begin{equation*}\hat{I}_x = \begin{pmatrix} 0 & 1/2 \\ 1/2 & 0 \end{pmatrix}\end{equation*}

where we are using the convention \hbar = 1. Now, using Eq. 5, we can calculate the expectation value of \hat{I}_x for each state. For state \ket{\psi_1}, we get

(10)   \begin{equation*} \langle\hat{I}_x\rangle_1 = \textrm{Tr}(\hat{\rho}_1 \hat{I}_x) = \textrm{Tr}\left( \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} \begin{pmatrix} 0 & 1/2 \\ 1/2 & 0 \end{pmatrix}\right) = \textrm{Tr}\left( \begin{pmatrix} 0 & 1/2 \\ 0 & 0 \end{pmatrix}\right) = 0\end{equation*}

which is indeed the expectation value of \hat{I}_x for the state \ket{\alpha}. For \ket{\psi_2}, we get

(11)   \begin{equation*}\langle\hat{I}_x\rangle_2 = \textrm{Tr}\left(\hat{\rho}_2 \hat{I}_x\right) = \textrm{Tr}\left( \begin{pmatrix} 1/2 & -1/2 \\ -1/2 & 1/2 \end{pmatrix} \begin{pmatrix} 0 & 1/2 \\ 1/2 & 0 \end{pmatrix}\right) = \textrm{Tr}\left(\begin{pmatrix} -1/4 & 1/4 \\ 1/4 & -1/4 \end{pmatrix}\right) = -1/2\end{equation*}

which again is the proper expectation value, this time for the state \ket{-x} (recall that we are using \hbar=1, and that the expectation value for this eigenstate is actually -\frac{\hbar}{2}).

The same calculations are shown in the code snippets below using MATLAB and Python/NumPy.

MATLAB:
% Definition of kets
ket_1     = [1 ;0];
ket_2     = [1 ; -1]/sqrt(2);

% Corresponding density matrices
rho_1     = ket_1*ket_1';
rho_2     = ket_2*ket_2';

% Angular momentum operator
I_x       = [0 1; 1 0]/2;

% Expectation values
O_1       = trace(I_x*rho_1);
O_2       = trace(I_x*rho_2);

Python:

# Required package for numerical calculations
import numpy as np

# Definition of kets 
ket_1     = np.array([1, 0])
ket_2     = np.array([1, -1])/np.sqrt(2)

# Corresponding density matrices
rho_1     = np.outer(ket_1, ket_1)
rho_2     = np.outer(ket_2, ket_2)

# Angular momentum operator
I_x       = np.array([[0,  +1/2],[ +1/2, 0]])

# Expectation values
O_1       = np.trace(I_x@rho_1)
O_2       = np.trace(I_x@rho_2)

Conclusion

In this post, we have seen how to use the density matrix to calculate the expectation value of an operator by taking the trace of the product of the density matrix and the operator in question, and went through some simple example calculations. Now, the density matrix is a bit more useful, as we can use it to extract measurable values! In the next post, we will explore time evolution with the density matrix. Combined with what we learned here, we will then be able to use the density matrix formalism to simulate magnetic resonance experiments.
