Solutions

Quantum Mechanics - The Theoretical Minimum

Exercise list

Lecture 4

Exercise 4.1

Exercise 4.2

Exercise 4.3

Exercise 4.4

Exercise 4.5

Exercise 4.6

Lecture 5

Exercise 5.1

Exercise 5.2

Lecture 4

Exercise 4.1

Prove that if \mathbf U is unitary, and if | A \rangle and | B \rangle are any two state-vectors, then the inner product of \mathbf U| A \rangle and \mathbf U| B \rangle is the same as the inner product of | A \rangle and | B \rangle. One could call this the conservation of overlaps. It expresses the fact that the logical relation between states is preserved with time.

If \mathbf U is a unitary operator, \mathbf U^\dag \mathbf U = \mathbf I, where \mathbf U^\dag = \overline{\mathbf U}^T is its Hermitian conjugate. The bra dual to the ket \mathbf U | A \rangle is:

\overline{\mathbf U | A \rangle} = \langle A | \mathbf U^\dag

Therefore:

\langle A | \mathbf U^\dag \mathbf U | B \rangle = \langle A | \mathbf I | B \rangle = \langle A | B \rangle
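It is possible to verify this numerically with a short Python sketch (illustrative, not part of the book's argument); the QR-based construction of a random unitary is an assumption of this example:

```python
import numpy as np

# Sanity check: a unitary U preserves inner products between arbitrary vectors.
rng = np.random.default_rng(0)

# Build a random 2x2 unitary via QR decomposition of a random complex matrix.
M = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
U, _ = np.linalg.qr(M)

A = rng.normal(size=2) + 1j * rng.normal(size=2)   # arbitrary state-vectors
B = rng.normal(size=2) + 1j * rng.normal(size=2)

lhs = np.vdot(U @ A, U @ B)   # <UA|UB>  (vdot conjugates the first argument)
rhs = np.vdot(A, B)           # <A|B>
assert np.isclose(lhs, rhs)   # overlaps are conserved
```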

Exercise 4.2

Prove that if \mathbf M and \mathbf L are both Hermitian, i[\mathbf M, \mathbf L] is also Hermitian. Note that the i is important. The commutator is, by itself, not Hermitian.

Taking the Hermitian conjugate and using the fact that the operators are Hermitian:

\begin{aligned} \left(i[\mathbf M, \mathbf L]\right)^\dag & = -i\left([\mathbf M, \mathbf L]\right)^\dag = -i\left(\mathbf M \, \mathbf L - \mathbf L \, \mathbf M\right)^\dag = -i\left(\mathbf L^\dag \, \mathbf M^\dag - \mathbf M^\dag \, \mathbf L^\dag \right) \\ & = -i\left(\mathbf L \, \mathbf M - \mathbf M \, \mathbf L \right) = -i[\mathbf L, \mathbf M] = i[\mathbf M, \mathbf L] \end{aligned}

Note that [\mathbf M, \mathbf L] by itself is not Hermitian, as:

[\mathbf M, \mathbf L]^\dag = \left(\mathbf M \, \mathbf L - \mathbf L \, \mathbf M\right)^\dag = \mathbf L^\dag \, \mathbf M^\dag - \mathbf M^\dag \, \mathbf L^\dag = \mathbf L \, \mathbf M - \mathbf M \, \mathbf L = [\mathbf L, \mathbf M] = -[\mathbf M, \mathbf L]
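A short numerical check (an illustrative sketch, not part of the original solution) confirms both statements for random Hermitian matrices:

```python
import numpy as np

# For random Hermitian M and L: i[M, L] is Hermitian, [M, L] is anti-Hermitian.
rng = np.random.default_rng(1)

def random_hermitian(n):
    X = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (X + X.conj().T) / 2   # (X + X^dag)/2 is Hermitian by construction

M, L = random_hermitian(2), random_hermitian(2)
C = M @ L - L @ M                 # the commutator [M, L]

assert np.allclose((1j * C).conj().T, 1j * C)   # i[M, L] is Hermitian
assert np.allclose(C.conj().T, -C)              # [M, L] alone is anti-Hermitian
```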

Exercise 4.3

Go back to the definition of Poisson brackets in Volume I and check that the identification in Eq. 4.21 is dimensionally consistent. Show that without the factor \hbar, it would not be.

Given the equation [\mathbf{F},\mathbf{G}] \Longleftrightarrow i\hbar\{F,G\}, where [\mathbf{F},\mathbf{G}] is the commutator of two operators and \{F,G\} is the Poisson brackets of their corresponding classical analogs, we want to demonstrate the dimensional correctness of this equation and show the necessity of i\hbar for dimensional consistency.

  • The commutator [\mathbf{F},\mathbf{G}] has the same dimensions as the product of the dimensions of \mathbf{F} and \mathbf{G}. If we assume \mathbf{F} and \mathbf{G} are position (x) and momentum (p), respectively, then:

    • Dimension of x is [L] (length).
    • Dimension of p is [MLT^{-1}] (mass × velocity).
  • The Poisson bracket \{F,G\} = \dfrac{\partial F}{\partial x}\dfrac{\partial G}{\partial p} - \dfrac{\partial F}{\partial p}\dfrac{\partial G}{\partial x} has the dimensions of [F][G] divided by the dimensions of x\,p (an action). For F = x and G = p it reduces to \{x,p\} = 1, which is dimensionless.

  • The constant i\hbar has dimensions of action ([ML^2T^{-1}]), where \hbar (reduced Planck’s constant) provides the bridge between the classical and quantum descriptions. The imaginary unit i does not affect dimensions.

For the equation to be dimensionally consistent, both sides must have the same dimensions. Without i\hbar, the right side would be dimensionless, while the left side would retain the dimensions of the operators’ product, x \cdot p = [ML^2T^{-1}], which is incorrect.

By introducing i\hbar with dimensions [ML^2T^{-1}] to the right side, we ensure both sides of the equation have the same dimensions:

  • Left Side: [\mathbf{F},\mathbf{G}] = [ML^2T^{-1}]
  • Right Side: i\hbar\{F,G\} = [ML^2T^{-1}]\cdot[\text{dimensionless}] = [ML^2T^{-1}]

The factor i\hbar is therefore essential for dimensional consistency: it is what makes the comparison between the quantum-mechanical commutator and the classical Poisson bracket meaningful for position and momentum.
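The bookkeeping can be made concrete with a minimal sketch that tracks dimensions as exponent tuples over (M, L, T); this encoding is a convenience assumed here, not standard notation:

```python
# Minimal dimensional bookkeeping: a dimension is an exponent tuple (M, L, T).
def dim_mul(a, b):
    """Dimensions multiply by adding exponents."""
    return tuple(x + y for x, y in zip(a, b))

X = (0, 1, 0)                 # position x: L
P = (1, 1, -1)                # momentum p: M L T^-1
HBAR = (1, 2, -1)             # hbar, an action: M L^2 T^-1
DIMENSIONLESS = (0, 0, 0)     # {x, p} = 1

lhs = dim_mul(X, P)                   # dimensions of the commutator [x, p]
rhs = dim_mul(HBAR, DIMENSIONLESS)    # dimensions of hbar * {x, p}
assert lhs == rhs == (1, 2, -1)       # both sides carry the dimensions of action
assert lhs != DIMENSIONLESS           # without hbar the two sides cannot match
```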

Exercise 4.4

Verify the commutation relations of Eqs. 4.26.

It is possible to compute the commutation relations for the spin operators using the Pauli matrices and the definition of the commutator:

[\mathbf F,\mathbf G] = \mathbf F\,\mathbf G - \mathbf G\,\mathbf F

\sigma_x

\begin{aligned} [\sigma_x, \sigma_x] & = 0 \\ [\sigma_x, \sigma_y] & = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} \begin{bmatrix} 0 & -i \\ i & 0 \end{bmatrix} - \begin{bmatrix} 0 & -i \\ i & 0 \end{bmatrix} \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}\\ & = \begin{bmatrix} i & 0 \\ 0 & -i \end{bmatrix} - \begin{bmatrix} -i & 0 \\ 0 & i \end{bmatrix} = \begin{bmatrix} 2i & 0 \\ 0 & -2i \end{bmatrix} = 2i \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix} = 2\,i\,\sigma_z \\ [\sigma_x, \sigma_z] & = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix} - \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix} \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} \\ & = \begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix} -\begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix} = \begin{bmatrix} 0 & -2 \\ 2 & 0 \end{bmatrix} = 2i \begin{bmatrix} 0 & i \\ -i & 0 \end{bmatrix} = -2\,i\,\sigma_y \end{aligned}

\sigma_y

\begin{aligned} [\sigma_y, \sigma_x] & = - [\sigma_x, \sigma_y] = -2\,i\,\sigma_z \\ [\sigma_y, \sigma_y] & = 0 \\ [\sigma_y, \sigma_z] & = \begin{bmatrix} 0 & -i \\ i & 0 \end{bmatrix} \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix} - \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix} \begin{bmatrix} 0 & -i \\ i & 0 \end{bmatrix} \\ & = \begin{bmatrix} 0 & i \\ i & 0 \end{bmatrix} - \begin{bmatrix} 0 & -i \\ -i & 0 \end{bmatrix} = \begin{bmatrix} 0 & 2i \\ 2i & 0 \end{bmatrix} = 2\,i \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} = 2\,i\,\sigma_x \end{aligned}

\sigma_z

\begin{aligned} [\sigma_z, \sigma_x] & = - [\sigma_x, \sigma_z] = 2\,i\,\sigma_y \\ [\sigma_z, \sigma_y] & = - [\sigma_y, \sigma_z] = -2\,i\,\sigma_x \\ [\sigma_z, \sigma_z] & = 0 \end{aligned}
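All nine commutators can be verified numerically at once, using the compact form [\sigma_a, \sigma_b] = 2i\,\epsilon_{abc}\,\sigma_c (a minimal numpy sketch, not part of the original solution):

```python
import numpy as np

# Check all nine commutators [sigma_a, sigma_b] = 2i eps_abc sigma_c.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
sigma = [sx, sy, sz]

eps = np.zeros((3, 3, 3))          # Levi-Civita symbol
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1

for a in range(3):
    for b in range(3):
        comm = sigma[a] @ sigma[b] - sigma[b] @ sigma[a]
        expected = 2j * sum(eps[a, b, c] * sigma[c] for c in range(3))
        assert np.allclose(comm, expected)
```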

Exercise 4.5

Take any unit 3-vector \vec{n} and form the operator:

\mathbf H = \frac{\hbar\omega}{2}\vec{\sigma}\cdot\vec{n}

Find the energy eigenvalues and eigenvectors by solving the time-independent Schrödinger equation. Recall that Eq. 3.23 gives \sigma\cdot\vec{n} in component form.

The operator:

\sigma_n = \begin{bmatrix} n_z & \left(n_x -i\,n_y \right) \\ \left(n_x + i\,n_y \right) & - n_z \end{bmatrix}

was already analyzed in Exercise 3.4, and the corresponding eigenvalues and eigenvectors were already identified:

\begin{array}{lll} \lambda_1 = 1 &\quad & | v_1 \rangle = \begin{bmatrix} \cos\left(\tfrac{\theta}{2}\right) \\[12pt] e^{i\,\phi} \sin\left(\tfrac{\theta}{2}\right) \end{bmatrix} \\[20pt] \lambda_2 = -1 &\quad & | v_2 \rangle = \begin{bmatrix} -\sin\left(\tfrac{\theta}{2}\right) \\[12pt] e^{i\,\phi}\cos\left(\tfrac{\theta}{2}\right) \end{bmatrix} \end{array}

For a generic matrix \mathbf A, multiplying both sides of the eigenvalue equation by a generic constant K and defining \mathbf B = K\mathbf A:

\begin{aligned} & K \mathbf A \mathbf v = K\lambda\mathbf v \\ & \mathbf B \mathbf v = K\lambda\mathbf v \end{aligned}

For the matrix \mathbf B the eigenvalues are the original eigenvalues multiplied by K, and the eigenvectors are unchanged. Therefore, using K = \frac{\hbar\omega}{2}:

\begin{array}{lll} \lambda_1 = \dfrac{\hbar\omega}{2} &\quad & | v_1 \rangle = \begin{bmatrix} \cos\left(\tfrac{\theta}{2}\right) \\[12pt] e^{i\,\phi} \sin\left(\tfrac{\theta}{2}\right) \end{bmatrix} \\[20pt] \lambda_2 = -\dfrac{\hbar\omega}{2} &\quad & | v_2 \rangle = \begin{bmatrix} -\sin\left(\tfrac{\theta}{2}\right) \\[12pt] e^{i\,\phi}\cos\left(\tfrac{\theta}{2}\right) \end{bmatrix} \end{array}
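A numerical spot-check (illustrative, with \hbar = \omega = 1 and sample angles chosen arbitrarily) confirms both the eigenvalues and the closed-form eigenvector, up to an overall phase:

```python
import numpy as np

# Check the spectrum of H = (hbar*omega/2) sigma.n for a sample unit vector n.
hbar = omega = 1.0
theta, phi = 0.7, 1.3                       # sample spherical angles
n = np.array([np.sin(theta) * np.cos(phi),
              np.sin(theta) * np.sin(phi),
              np.cos(theta)])

sigma_n = np.array([[n[2], n[0] - 1j * n[1]],
                    [n[0] + 1j * n[1], -n[2]]])
H = (hbar * omega / 2) * sigma_n

evals, evecs = np.linalg.eigh(H)            # eigh sorts eigenvalues ascending
assert np.allclose(evals, [-hbar * omega / 2, hbar * omega / 2])

# Compare with the closed form (cos(theta/2), e^{i phi} sin(theta/2)),
# up to a physically irrelevant overall phase.
v1 = np.array([np.cos(theta / 2), np.exp(1j * phi) * np.sin(theta / 2)])
assert np.isclose(abs(np.vdot(v1, evecs[:, 1])), 1.0)   # unit overlap
```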

Exercise 4.6

Carry out the Schrödinger Ket recipe for a single spin. The Hamiltonian is H = \frac{\omega\hbar}{2}\sigma_z and the final observable is \sigma_x. The initial state is given as | u \rangle (the state in which \sigma_z = +1).

After time t, an experiment is done to measure \sigma_y. What are the possible outcomes and what are the probabilities for those outcomes?

Congratulations! You have now solved a real quantum mechanics problem for an experiment that can actually be carried out in the laboratory. Feel free to pat yourself on the back.

  1. Get a Hamiltonian operator \mathbf H. The Hamiltonian for this system is:

\mathbf H = \frac{\hbar \, \omega}{2}\sigma_z = \frac{\hbar \, \omega}{2} \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix}

  2. Prepare an initial state |\Psi(0)\rangle. The initial state is the state | u \rangle (the state in which \sigma_z = +1):

|\Psi(0) \rangle = | u \rangle = \begin{bmatrix} 1 \\ 0 \end{bmatrix}

  3. Find the eigenvalues and eigenvectors of \mathbf H by solving the time-independent Schrödinger equation:

\begin{aligned} & \mathbf H | e_i \rangle = E_i | e_i \rangle \\ & \frac{\hbar \, \omega}{2} \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix}| e_i \rangle = E_i | e_i \rangle \end{aligned}

Starting from the eigenvalues:

\begin{aligned} & | \mathbf H - \lambda \mathbf I | = \begin{vmatrix} \tfrac{\hbar \, \omega}{2} -\lambda & 0 \\ 0 & -\tfrac{\hbar \, \omega}{2} -\lambda \end{vmatrix} = \left(\frac{\hbar \, \omega}{2} -\lambda\right)\left(-\frac{\hbar \, \omega}{2} -\lambda\right) = \lambda^2 - \frac{\hbar^2 \omega^2}{4} = 0 \\ & \lambda_{1,2} = \pm \frac{\hbar \, \omega}{2} \end{aligned}

Since \mathbf H is diagonal, computing the eigenvectors is simple. For \lambda_1 = \frac{\hbar\,\omega}{2}:

\begin{aligned} & (\mathbf H - \lambda_1 \mathbf I) | e_1 \rangle = \begin{bmatrix} 0 & 0 \\ 0 & -\hbar \, \omega \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix} \\ & x_2 = 0 \\ & | e_1 \rangle = \begin{bmatrix} 1 \\ 0 \end{bmatrix} \end{aligned}

For \lambda_2 = -\frac{\hbar\,\omega}{2}:

\begin{aligned} & (\mathbf H - \lambda_2 \mathbf I) | e_2 \rangle = \begin{bmatrix} \hbar \, \omega & 0 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix} \\ & x_1 = 0 \\ & | e_2 \rangle = \begin{bmatrix} 0 \\ 1 \end{bmatrix} \end{aligned}

The two eigenvectors are mutually orthogonal and of unit length.

  4. Using the initial state vector |\Psi(0)\rangle along with the eigenvectors |e_i\rangle, calculate the initial coefficients \psi_{1,2}(0):

\begin{aligned} & \psi_1(0) = \langle e_1 | \Psi(0) \rangle = \langle e_1 | u \rangle = \begin{bmatrix} 1 & 0 \end{bmatrix} \begin{bmatrix} 1 \\ 0 \end{bmatrix} = 1 \\ & \psi_2(0) = \langle e_2 | \Psi(0) \rangle = \langle e_2 | u \rangle = \begin{bmatrix} 0 & 1 \end{bmatrix} \begin{bmatrix} 1 \\ 0 \end{bmatrix} = 0 \end{aligned}

  5. Rewrite |\Psi(0) \rangle in terms of the eigenvectors |e_{1,2}\rangle and the initial coefficients \psi_{1,2}(0):

|\Psi(0) \rangle = \sum_{i=1}^2 \psi_i(0) | e_i \rangle = 1 \,| e_1 \rangle + 0 \, | e_2 \rangle = | e_1 \rangle = \begin{bmatrix} 1 \\ 0 \end{bmatrix}

  6. Rewrite the equation replacing t=0 with a generic t to capture the time-dependence:

|\Psi(t) \rangle = \sum_{i=1}^2 \psi_i(t) | e_i \rangle = \psi_1(t) | e_1 \rangle + \psi_2(t) | e_2 \rangle = \psi_1(t) \begin{bmatrix} 1 \\ 0 \end{bmatrix} + \psi_2(t) \begin{bmatrix} 0 \\ 1 \end{bmatrix}

  7. Replace \psi_i(t) with \psi_{i}(0) e^{- \frac{i}{\hbar} E_i t}:

\begin{aligned} |\Psi(t) \rangle & = \sum_{i=1}^2 \psi_{i} (0) e^{- \frac{i}{\hbar} E_i t} | e_i \rangle = \psi_{1}(0) e^{- \frac{i}{\hbar} E_1 t} | e_1 \rangle + \psi_{2}(0) e^{- \frac{i}{\hbar} E_2 t} | e_2 \rangle \\ & = 1 e^{- \frac{i}{\hbar} \frac{\hbar \, \omega}{2} t} \begin{bmatrix} 1 \\ 0 \end{bmatrix} + 0 e^{\frac{i}{\hbar} \frac{\hbar \, \omega}{2} t} \begin{bmatrix} 0 \\ 1 \end{bmatrix} = e^{- \frac{i \, \omega}{2} t} \begin{bmatrix} 1 \\ 0 \end{bmatrix} \end{aligned}

After a time t it is possible to measure another observable, for example \sigma_y, and compute the probabilities of its outcomes. The observable is:

\sigma_y = \begin{bmatrix} 0 & -i \\ i & 0 \end{bmatrix}

  1. Compute the eigenvalues of the observable:

\begin{aligned} & | \sigma_y - \lambda \mathbf I | = \begin{vmatrix} -\lambda & -i \\ i & -\lambda \end{vmatrix} = - \lambda^2 +1 = 0 \\ & \lambda_{1,2} = \pm 1 \end{aligned}

  2. Compute the eigenvectors of the observable, ensuring they have unit length. For \lambda_1 = +1:

\begin{aligned} & (\sigma_y - \lambda_1 \mathbf I) | v_1 \rangle = \begin{bmatrix} -1 & -i \\ i & -1 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix} \\ & i\,x_1 - x_2 = 0 \\ & | v_1' \rangle = \begin{bmatrix} 1 \\ i \end{bmatrix} \\ & | v_1'| = \sqrt{\langle v_1' | v_1' \rangle} = \sqrt{\big \langle \begin{bmatrix} 1 \\ -i \end{bmatrix} \big | \begin{bmatrix} 1 \\ i \end{bmatrix} \big \rangle} = \sqrt 2 \\ & | v_1 \rangle = \frac{v_1'}{| v_1'|} = \begin{bmatrix} \frac{1}{\sqrt 2} \\ \frac{i}{\sqrt 2} \end{bmatrix} = | i \rangle \end{aligned}

For \lambda_2 = -1:

\begin{aligned} & (\sigma_y - \lambda_2 \mathbf I) | v_2 \rangle = \begin{bmatrix} 1 & -i \\ i & 1 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix} \\ & i\,x_1 + x_2 = 0 \\ & | v_2' \rangle = \begin{bmatrix} 1 \\ -i \end{bmatrix} \\ & | v_2'| = \sqrt{\langle v_2' | v_2' \rangle} = \sqrt{\big \langle \begin{bmatrix} 1 \\ i \end{bmatrix} \big | \begin{bmatrix} 1 \\ -i \end{bmatrix}\big \rangle} = \sqrt 2 \\ & | v_2 \rangle = \frac{v_2'}{| v_2'|} = \begin{bmatrix} \frac{1}{\sqrt 2} \\ \frac{-i}{\sqrt 2} \end{bmatrix} = | o \rangle \end{aligned}

As expected, the eigenvectors are | i \rangle and | o \rangle.

  3. Compute the probability for each outcome \lambda_i:

\begin{array}{ll} P_{\lambda_1}(t) = P_{+1}(t) & = | \langle v_1 | \Psi(t) \rangle |^2 = \langle \Psi(t) | v_1 \rangle \langle v_1 | \Psi(t) \rangle \\ & = \left( e^{\frac{i \, \omega}{2} t} \begin{bmatrix} 1 & 0 \end{bmatrix} \begin{bmatrix} \frac{1}{\sqrt 2} \\ \frac{i}{\sqrt 2} \end{bmatrix} \right) \left( e^{- \frac{i \, \omega}{2} t} \begin{bmatrix} \frac{1}{\sqrt 2} & \frac{-i}{\sqrt 2} \end{bmatrix} \begin{bmatrix} 1 \\ 0 \end{bmatrix} \right) = \frac{e^{\frac{i \, \omega}{2} t} \, e^{- \frac{i \, \omega}{2} t}}{2} = \dfrac{1}{2} \\ P_{\lambda_2}(t) = P_{-1}(t) & = | \langle v_2 | \Psi(t) \rangle |^2 = \langle \Psi(t) | v_2 \rangle \langle v_2 | \Psi(t) \rangle \\ & = \left( e^{\frac{i \, \omega}{2} t} \begin{bmatrix} 1 & 0 \end{bmatrix} \begin{bmatrix} \frac{1}{\sqrt 2} \\ \frac{-i}{\sqrt 2} \end{bmatrix} \right) \left( e^{- \frac{i \, \omega}{2} t} \begin{bmatrix} \frac{1}{\sqrt 2} & \frac{i}{\sqrt 2} \end{bmatrix} \begin{bmatrix} 1 \\ 0 \end{bmatrix} \right) = \dfrac{1}{2} \end{array}

The two possible outcomes \lambda = \pm 1 therefore occur with probability \tfrac{1}{2} each, independent of t.
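This can be confirmed numerically (a minimal sketch with \hbar = \omega = 1, not part of the recipe itself):

```python
import numpy as np

# Evolve |u> under H = (hbar*omega/2) sigma_z and check that a sigma_y
# measurement gives +/-1 with probability 1/2 at every time.
omega = 1.0
i_state = np.array([1, 1j]) / np.sqrt(2)     # |i>, eigenvector for lambda = +1
o_state = np.array([1, -1j]) / np.sqrt(2)    # |o>, eigenvector for lambda = -1

for t in np.linspace(0, 10, 50):
    psi_t = np.array([np.exp(-1j * omega * t / 2), 0])   # |Psi(t)>
    p_plus = abs(np.vdot(i_state, psi_t)) ** 2
    p_minus = abs(np.vdot(o_state, psi_t)) ** 2
    assert np.isclose(p_plus, 0.5) and np.isclose(p_minus, 0.5)
```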

As an additional example, it is possible to carry out the same computation for \sigma_x:

\sigma_x = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}

  1. Compute the eigenvalues of the observable:

\begin{aligned} & | \sigma_x - \mu \mathbf I | = \begin{vmatrix} -\mu & 1 \\ 1 & -\mu \end{vmatrix} = \mu^2 - 1 = 0 \\ & \mu_{1,2} = \pm 1 \end{aligned}

  2. Compute the eigenvectors of the observable, ensuring they have unit length. For \mu_1 = +1:

\begin{aligned} & (\sigma_x - \mu_1 \mathbf I) | u_1 \rangle = \begin{bmatrix} -1 & 1 \\ 1 & -1 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix} \\ & x_1 - x_2 = 0 \\ & | u_1' \rangle = \begin{bmatrix} 1 \\ 1 \end{bmatrix} \\ & | u_1'| = \sqrt{\langle u_1' | u_1' \rangle} = \sqrt{\big \langle \begin{bmatrix} 1 \\ 1 \end{bmatrix} \big | \begin{bmatrix} 1 \\ 1 \end{bmatrix} \big \rangle} = \sqrt 2 \\ & | u_1 \rangle = \frac{u_1'}{| u_1'|} = \begin{bmatrix} \frac{1}{\sqrt 2} \\ \frac{1}{\sqrt 2} \end{bmatrix} = | r \rangle \end{aligned}

For \mu_2 = -1:

\begin{aligned} & (\sigma_x - \mu_2 \mathbf I) | u_2 \rangle = \begin{bmatrix} 1 & 1 \\ 1 & 1 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix} \\ & x_1 + x_2 = 0 \\ & | u_2' \rangle = \begin{bmatrix} 1 \\ -1 \end{bmatrix} \\ & | u_2'| = \sqrt{\langle u_2' | u_2' \rangle} = \sqrt{\big \langle \begin{bmatrix} 1 \\ -1 \end{bmatrix} \big | \begin{bmatrix} 1 \\ -1 \end{bmatrix}\big \rangle} = \sqrt 2 \\ & | u_2 \rangle = \frac{u_2'}{| u_2'|} = \begin{bmatrix} \frac{1}{\sqrt 2} \\ \frac{-1}{\sqrt 2} \end{bmatrix} = | l \rangle \end{aligned}

As expected, the eigenvectors are | r \rangle and | l \rangle.

  3. Compute the probability for each outcome \mu_i:

\begin{array}{ll} P_{\mu_1}(t) = P_{+1}(t) & = | \langle u_1 | \Psi(t) \rangle |^2 = \langle \Psi(t) | u_1 \rangle \langle u_1 | \Psi(t) \rangle \\ & = \left( e^{\frac{i \, \omega}{2} t} \begin{bmatrix} 1 & 0 \end{bmatrix} \begin{bmatrix} \frac{1}{\sqrt 2} \\ \frac{1}{\sqrt 2} \end{bmatrix} \right) \left( e^{- \frac{i \, \omega}{2} t} \begin{bmatrix} \frac{1}{\sqrt 2} & \frac{1}{\sqrt 2} \end{bmatrix} \begin{bmatrix} 1 \\ 0 \end{bmatrix} \right) = \dfrac{1}{2} \\ P_{\mu_2}(t) = P_{-1}(t) & = | \langle u_2 | \Psi(t) \rangle |^2 = \langle \Psi(t) | u_2 \rangle \langle u_2 | \Psi(t) \rangle \\ & = \left( e^{\frac{i \, \omega}{2} t} \begin{bmatrix} 1 & 0 \end{bmatrix} \begin{bmatrix} \frac{1}{\sqrt 2} \\ \frac{-1}{\sqrt 2} \end{bmatrix} \right) \left( e^{- \frac{i \, \omega}{2} t} \begin{bmatrix} \frac{1}{\sqrt 2} & \frac{-1}{\sqrt 2} \end{bmatrix} \begin{bmatrix} 1 \\ 0 \end{bmatrix} \right) = \dfrac{1}{2} \end{array}

Again, the outcomes \mu = \pm 1 each occur with probability \tfrac{1}{2}, independent of t.
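The same numerical check works for \sigma_x (again an illustrative sketch with \hbar = \omega = 1):

```python
import numpy as np

# The outcomes +/-1 again occur with probability 1/2, independent of t,
# because |Psi(t)> never leaves the first basis direction.
omega = 1.0
r_state = np.array([1, 1]) / np.sqrt(2)      # |r>, eigenvector for mu = +1
l_state = np.array([1, -1]) / np.sqrt(2)     # |l>, eigenvector for mu = -1

for t in np.linspace(0, 10, 50):
    psi_t = np.array([np.exp(-1j * omega * t / 2), 0])
    assert np.isclose(abs(np.vdot(r_state, psi_t)) ** 2, 0.5)
    assert np.isclose(abs(np.vdot(l_state, psi_t)) ** 2, 0.5)
```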

Lecture 5

Exercise 5.1

Verify this claim.

The solution is split into two parts. First, given the space of 2 \times 2 complex matrices M_{2\times2}(\mathbb{C}), the objective is to show that \{I, \sigma_1, \sigma_2, \sigma_3\}, where I is the identity matrix and the \sigma_i are the Pauli matrices, forms a basis. Second, for the specific case of Hermitian matrices, we show that the coefficients are real.

The identity matrix is:

\mathbf I = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}

The Pauli matrices are:

\sigma_1 = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}, \sigma_2 = \begin{bmatrix} 0 & -i \\ i & 0 \end{bmatrix}, \sigma_3 = \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix}

There are different ways to prove it; an elegant one is to show that \{I, \sigma_1, \sigma_2, \sigma_3, iI, i\sigma_1, i\sigma_2, i\sigma_3\} forms a basis of M_{2\times2}(\mathbb{C}) viewed as a real vector space, using an isomorphism with \mathbb{R}^8. This approach directly handles linear independence and the dimensionality of the basis. A complex 2 \times 2 matrix can be represented as:

\mathbf A = \begin{bmatrix} x + yi & u + vi \\ w + zi & s + ti \end{bmatrix}

where x, y, u, v, w, z, s, t \in \mathbb{R} and i = \sqrt{-1}. We establish an isomorphism \phi: M_{2\times2}(\mathbb{C}) \to \mathbb{R}^8 by mapping \mathbf A to a vector in \mathbb{R}^8 based on the real and imaginary parts of each entry in \mathbf A:

\phi(\mathbf A) = (x, y, u, v, w, z, s, t)

Writing the eight matrices as vectors under this map, covering both the real and the imaginary parts:

\begin{aligned} & I \to (1, 0, 0, 0, 0, 0, 1, 0)^T \\ & \sigma_1 \to (0, 0, 1, 0, 1, 0, 0, 0)^T \\ & \sigma_2 \to (0, 0, 0, -1, 0, 1, 0, 0)^T \\ & \sigma_3 \to (1, 0, 0, 0, 0, 0, -1, 0)^T \\ & iI \to (0, 1, 0, 0, 0, 0, 0, 1)^T \\ & i\sigma_1 \to (0, 0, 0, 1, 0, 1, 0, 0)^T \\ & i\sigma_2 \to (0, 0, 1, 0, -1, 0, 0, 0)^T \\ & i\sigma_3 \to (0, 1, 0, 0, 0, 0, 0, -1)^T \end{aligned}

These vectors represent the matrices in \mathbb{R}^8, each component corresponding to the real or imaginary part of a matrix entry. Once the isomorphism is established, it is straightforward to show that these vectors form a basis: they are pairwise orthogonal, hence linearly independent, and eight linearly independent vectors span \mathbb{R}^8 as a real vector space.

Let’s assume each vector \mathbf{v}_i can be represented as:

\mathbf{v}_i = (a_{i1}, a_{i2}, a_{i3}, a_{i4}, a_{i5}, a_{i6}, a_{i7}, a_{i8})^T

where a_{ij} represents the j^{th} component of the i^{th} vector, and i, j \in \{1, 2, ..., 8\}.

Now, let’s assume there exists a 9^{th} vector, \mathbf{v}_9, and we aim to investigate whether it can be independent of the set \mathbf V:

\mathbf{v}_9 = (a_{91}, a_{92}, a_{93}, a_{94}, a_{95}, a_{96}, a_{97}, a_{98})^T

To determine if \mathbf{v}_9 is linearly independent from the vectors in V, we see if there exists a non-trivial solution to the equation:

c_1\mathbf{v}_1 + c_2\mathbf{v}_2 + c_3\mathbf{v}_3 + c_4\mathbf{v}_4 + c_5\mathbf{v}_5 + c_6\mathbf{v}_6 + c_7\mathbf{v}_7 + c_8\mathbf{v}_8 - c_9\mathbf{v}_9 = \mathbf{0}

where c_1, c_2, ..., c_9 are coefficients, and \mathbf{0} is the zero vector in \mathbb{R}^8.

For \mathbf{v}_9 to be linearly independent of \mathbf V, the only solution to this equation should be c_1 = c_2 = ... = c_9 = 0. However, because we are working in \mathbb{R}^8, any set of more than 8 vectors must be linearly dependent, so there must exist coefficients c_1, c_2, ..., c_9, not all zero, such that the above equation holds.

Since \mathbf v_1, \dots, \mathbf v_8 are linearly independent, the coefficient c_9 cannot be zero (otherwise the relation would make the first eight vectors dependent), and \mathbf{v}_9 can be expressed as a linear combination of the vectors in \mathbf V:

\mathbf{v}_9 = \frac{c_1}{c_9}\mathbf{v}_1 + \frac{c_2}{c_9}\mathbf{v}_2 + \frac{c_3}{c_9}\mathbf{v}_3 + \frac{c_4}{c_9}\mathbf{v}_4 + \frac{c_5}{c_9}\mathbf{v}_5 + \frac{c_6}{c_9}\mathbf{v}_6 + \frac{c_7}{c_9}\mathbf{v}_7 + \frac{c_8}{c_9}\mathbf{v}_8

This demonstrates that \mathbf{v}_9 is not linearly independent of the vectors in \mathbf V, showing that \mathbb{R}^8 contains at most 8 linearly independent vectors and that any additional vector must be linearly dependent on them.
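The independence of the eight mapped vectors can also be confirmed numerically with a rank computation (an illustrative sketch; the flattening helper phi mirrors the isomorphism above):

```python
import numpy as np

# Map each matrix in {I, s1, s2, s3, iI, is1, is2, is3} to R^8 and confirm
# the eight images are linearly independent.
I = np.eye(2, dtype=complex)
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

def phi(A):
    # Flatten entry-wise into (Re, Im) pairs: (x, y, u, v, w, z, s, t).
    return np.array([A[0, 0].real, A[0, 0].imag,
                     A[0, 1].real, A[0, 1].imag,
                     A[1, 0].real, A[1, 0].imag,
                     A[1, 1].real, A[1, 1].imag])

basis = [phi(B) for B in (I, s1, s2, s3, 1j * I, 1j * s1, 1j * s2, 1j * s3)]
assert np.linalg.matrix_rank(np.stack(basis)) == 8   # a basis of R^8
```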

The second part of the exercise is to show that, if \mathbf L is a 2 \times 2 Hermitian matrix, in the decomposition

\mathbf L = a\sigma_x + b\sigma_y + c\sigma_z + d \mathbf I

the coefficients a, b, c, d are real.

To demonstrate that any Hermitian 2 \times 2 matrix \mathbf{L} can be represented as

\mathbf{L} = a \sigma_x + b \sigma_y + c \sigma_z + d \mathbf{I},

consider the Hermitian matrix \mathbf{L} having the form:

\mathbf{L} = \begin{bmatrix} x & y + iw \\ y - iw & z \end{bmatrix},

where x,z \in \mathbb R, and (y + iw)\in \mathbb C. Matching with:

\begin{bmatrix} x & y + iw \\ y - iw & z \end{bmatrix} = \begin{bmatrix} 0 & a \\ a & 0 \end{bmatrix} + \begin{bmatrix} 0 & -ib \\ ib & 0 \end{bmatrix} + \begin{bmatrix} c & 0 \\ 0 & -c \end{bmatrix} + \begin{bmatrix} d & 0 \\ 0 & d \end{bmatrix} = \begin{bmatrix} c+d & a-ib \\ a+ib & d-c \end{bmatrix},

solving for a, b, c, and d gives:

a = \Re(y + iw) = y, \quad b = -\Im(y + iw) = -w, \quad c = \frac{x - z}{2}, \quad d = \frac{x + z}{2},

thus achieving the desired expression for \mathbf{L} with all real coefficients.
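A numerical check (illustrative; the random Hermitian matrix is generated just for this sketch) confirms that the coefficients come out real and reconstruct \mathbf L:

```python
import numpy as np

# Decompose a random Hermitian matrix with the closed-form coefficients.
rng = np.random.default_rng(2)
X = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
L = (X + X.conj().T) / 2                     # random Hermitian matrix

a = L[0, 1].real                             # a = Re(y + iw) = y
b = -L[0, 1].imag                            # b = -Im(y + iw) = -w
c = (L[0, 0].real - L[1, 1].real) / 2        # c = (x - z)/2
d = (L[0, 0].real + L[1, 1].real) / 2        # d = (x + z)/2

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
assert np.allclose(L, a * s1 + b * s2 + c * s3 + d * np.eye(2))
```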

Exercise 5.2

1 - Show that \Delta \mathbf A^2 = \langle \bar{\mathbf A}^2 \rangle and \Delta \mathbf B^2 = \langle \bar{\mathbf B}^2 \rangle.

2 - Show that [\bar{\mathbf A}, \bar{\mathbf B}] = [\mathbf A, \mathbf B].

3 - Using these relations, show that

\Delta \mathbf A\, \Delta \mathbf B \geq \frac{1}{2} \left| \langle \Psi | [\mathbf A, \mathbf B] |\Psi\rangle \right|

For the first part, subtract from \mathbf A its expectation value (understood as the multiple \langle \mathbf A \rangle \mathbf I of the identity operator), defining \mathbf{ \bar A} as:

\mathbf {\bar A} = \mathbf A - \langle \mathbf A \rangle \mathbf I

The probability distribution of \mathbf{ \bar A} is the same as that of \mathbf A except that it is shifted so the average is zero; the eigenvectors of \mathbf{ \bar A} are the same as those of \mathbf A, and the eigenvalues are shifted so that their average is zero as well:

\bar a = a - \langle \mathbf A \rangle

(\Delta \mathbf A)^2, the square of the uncertainty (or standard deviation) of \mathbf A, is defined by:

(\Delta \mathbf A)^2 = \sum_a \bar a^2 P(a) = \sum_a (a - \langle \mathbf A \rangle)^2 P(a)

Another equivalent way to write it is:

(\Delta \mathbf A)^2 = \langle \Psi | \mathbf {\bar A}^2 | \Psi \rangle

If the expectation value of \mathbf A is zero, then it takes the simpler form:

(\Delta \mathbf A)^2 = \langle \Psi | \mathbf A^2 | \Psi \rangle

In other words, the square of the uncertainty is the average value of the operator \mathbf A^2, \langle \mathbf A^2 \rangle.

For \mathbf B, the procedure is exactly the same.
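A quick numerical check of part 1 (an illustrative sketch with a random observable and state, not part of the original derivation):

```python
import numpy as np

# Check (Delta A)^2 = <Psi|Abar^2|Psi> = <A^2> - <A>^2.
rng = np.random.default_rng(3)
X = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
A = (X + X.conj().T) / 2                       # Hermitian observable

psi = rng.normal(size=2) + 1j * rng.normal(size=2)
psi /= np.linalg.norm(psi)                     # normalized state |Psi>

expA = np.vdot(psi, A @ psi).real              # <A>
Abar = A - expA * np.eye(2)                    # shifted operator
var1 = np.vdot(psi, Abar @ Abar @ psi).real    # <Psi|Abar^2|Psi>
var2 = np.vdot(psi, A @ A @ psi).real - expA ** 2
assert np.isclose(var1, var2)
```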

For the second part:

\begin{aligned} [\bar{\textbf A },\bar{\textbf B}] &= \bar{\textbf A}\bar{\textbf B} - \bar{\textbf B}\bar{\textbf A} \\ & = \left(\textbf A-\langle \textbf {A} \rangle\right) \left(\textbf B-\langle \textbf {B} \rangle\right) - \left(\textbf B-\langle \textbf {B} \rangle\right) \left(\textbf A-\langle \textbf {A} \rangle\right) \\ &= \left( \textbf A\textbf B - \textbf A\langle \textbf {B} \rangle- \langle \textbf {A} \rangle\textbf B+ \langle \textbf {A} \rangle\langle \textbf {B} \rangle \right)- \left( \textbf B\textbf A - \textbf B\langle \textbf {A} \rangle- \langle \textbf {B} \rangle\textbf A+ \langle \textbf {B} \rangle\langle \textbf {A} \rangle \right)\\ &= \textbf A\textbf B - \textbf B\textbf A \\&= [\textbf A,\textbf B]\end{aligned}
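This too is easy to confirm numerically (same caveats as the previous sketch):

```python
import numpy as np

# Check [Abar, Bbar] = [A, B]: the shifts are multiples of the identity
# and drop out of the commutator.
rng = np.random.default_rng(4)

def random_hermitian(n):
    X = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (X + X.conj().T) / 2

A, B = random_hermitian(2), random_hermitian(2)
psi = rng.normal(size=2) + 1j * rng.normal(size=2)
psi /= np.linalg.norm(psi)

Abar = A - np.vdot(psi, A @ psi).real * np.eye(2)
Bbar = B - np.vdot(psi, B @ psi).real * np.eye(2)

comm = lambda X, Y: X @ Y - Y @ X
assert np.allclose(comm(Abar, Bbar), comm(A, B))
```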

For the third part, defining:

\begin{aligned} &| \mathbf X \rangle = \bar{\mathbf A} | \Psi \rangle \\ &| \mathbf Y \rangle = i\bar{\mathbf B} | \Psi \rangle \\ \end{aligned}

and using the Cauchy-Schwarz inequality:

2| \mathbf X | | \mathbf Y | \ge | \langle \mathbf X | \mathbf Y \rangle + \langle \mathbf Y | \mathbf X \rangle |

Using the defined vectors:

2\sqrt {\langle \bar{\mathbf A}^2 \rangle \langle \bar{\mathbf B}^2 \rangle} \ge |i \langle \Psi | \bar {\mathbf A} \bar {\mathbf B} | \Psi \rangle - i \langle \Psi | \bar {\mathbf B} \bar {\mathbf A} | \Psi \rangle |

The minus sign comes from the complex conjugation of i in \langle \mathbf Y | \mathbf X \rangle; the overall factor of i is dropped when taking the modulus. Finally:

\begin{aligned} & 2\sqrt {\langle \bar{\mathbf A}^2 \rangle \langle \bar{\mathbf B}^2 \rangle} \ge | \langle \Psi | \bar {\mathbf A} \bar {\mathbf B} - \bar {\mathbf B} \bar {\mathbf A} | \Psi \rangle |\\ & 2\, \Delta \mathbf A \, \Delta \mathbf B \ge | \langle \Psi | [\bar {\mathbf A}, \bar {\mathbf B}] | \Psi \rangle | \\ & \Delta \mathbf A \, \Delta \mathbf B \ge \frac{1}{2}| \langle \Psi | [\bar {\mathbf A}, \bar {\mathbf B}] | \Psi \rangle |\\ & \Delta \mathbf A \, \Delta \mathbf B \ge \frac{1}{2}| \langle \Psi | [\mathbf A, \mathbf B] | \Psi \rangle | \end{aligned}

where the second line uses part 1 (\langle \bar{\mathbf A}^2 \rangle = (\Delta \mathbf A)^2) and the last line uses part 2 ([\bar{\mathbf A}, \bar{\mathbf B}] = [\mathbf A, \mathbf B]).
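As a final check, the inequality can be tested numerically on random Hermitian observables and a random state (illustrative only):

```python
import numpy as np

# Verify Delta A * Delta B >= (1/2)|<Psi|[A, B]|Psi>|.
rng = np.random.default_rng(5)

def random_hermitian(n):
    X = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (X + X.conj().T) / 2

A, B = random_hermitian(2), random_hermitian(2)
psi = rng.normal(size=2) + 1j * rng.normal(size=2)
psi /= np.linalg.norm(psi)

def delta(Op):
    exp = np.vdot(psi, Op @ psi).real
    return np.sqrt(np.vdot(psi, Op @ Op @ psi).real - exp ** 2)

lhs = delta(A) * delta(B)
rhs = 0.5 * abs(np.vdot(psi, (A @ B - B @ A) @ psi))
assert lhs >= rhs - 1e-12   # the uncertainty relation holds
```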
