Harmonic Oscillator#
Author:
Date:
Time spent on this assignment:
Our goal in this assignment is to look at and think about the simple Harmonic oscillator.
# Imports used throughout the assignment
import numpy as np
import scipy
import scipy.optimize
import matplotlib.pyplot as plt
import matplotlib.animation as animation
from matplotlib.animation import FuncAnimation
from IPython.display import HTML
import numpy.polynomial.hermite as Herm
import math

def resetMe(keepList=[]):
    # Clear all user-defined variables except the ones listed in keepList
    ll=%who_ls
    keepList=keepList+['FuncAnimation','Herm','HTML','resetMe','scipy','np','plt','math','jax','jnp','jit','grad','animation','animateMe_singlePendula']
    for iiii in keepList:
        if iiii in ll:
            ll.remove(iiii)
    for iiii in ll:
        jjjj="^"+iiii+"$"
        %reset_selective -f {jjjj}
    ll=%who_ls
    plt.rcParams.update({"font.size": 14})
    return

resetMe()
import datetime;datetime.datetime.now()
In this assignment, we will consistently use grids and observables set up in the following way. This is very similar to how you set things up for the particle-in-the-box assignment.
def SetupGrid(L,delta_x):
    # Grid of x-points from -L/2 to L/2 with spacing delta_x
    ns=np.array(range(0,int((L+2*delta_x)//delta_x)))
    xs=ns*delta_x-L/2
    return xs

def SetupObservables(xs,delta_x):
    # Position operator: diagonal matrix of the grid points
    X=np.diag(xs)
    # Momentum operator built from a central finite difference
    P=(np.diag([1.j/(2*delta_x) for i in range(len(xs)-1)],k=1)+np.diag([-1.j/(2*delta_x) for i in range(len(xs)-1)],k=-1))
    # P^2 operator: -d^2/dx^2 via a second finite difference (hbar=1)
    P2=np.zeros_like(P)
    for i in range(len(xs)):
        P2[i,i]=2.0/delta_x**2
        if i+1<len(xs):
            P2[i,i+1]=-1.0/delta_x**2
            P2[i+1,i]=-1.0/delta_x**2
    return X,P,P2
def update(frame, max_value,skip=1):
    plt.cla()  # Clear the current plot
    plt.plot(xs, np.abs(arrays[::skip][frame])**2)  # Plot the probability for the current snapshot
    plt.axvline(np.real(arrays[::skip][frame]@X@arrays[::skip][frame].T.conjugate()))  # Mark <x>
    plt.ylim(0, max_value)  # Set the y-axis limit
    plt.xlabel('x')
    plt.ylabel('Value')
    plt.title(f'Frame {skip*frame+1}/{len(arrays)}')  # Display the frame number
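For reference, here is a minimal sketch of how this update function might be driven with FuncAnimation. The placeholder snapshots (a sliding Gaussian) and the names arrays, max_value, and skip are just assumptions for illustration; they will be replaced by your own time-evolved wave-functions later in the assignment.

delta_x=0.1
xs=SetupGrid(20,delta_x)
X,P,P2=SetupObservables(xs,delta_x)
# placeholder snapshots (a Gaussian sliding to the right), just to exercise the animation machinery
arrays=np.array([np.exp(-(xs-0.05*t)**2) for t in range(100)])
max_value=np.max(np.abs(arrays)**2)   # largest probability, used to fix the y-axis
skip=5                                # only animate every 5th snapshot
fig=plt.figure()
anim=FuncAnimation(fig, update, frames=len(arrays[::skip]), fargs=(max_value,skip))
HTML(anim.to_jshtml())                # display the animation inline in the notebook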
Exercise 1. The Uncertainty Principle#
In this exercise, we’d like to think about the uncertainty principle.
a. The Fourier Basis#
The uncertainty principle tells us that you can’t simultaneously know a particle’s position and momentum. It’s typically quantified as
\(\Delta X \Delta P \geq \frac{\hbar}{2}\)
Let’s think for a moment about what it means to be certain or uncertain about a particle’s position. If you are uncertain about the position of a particle, what this means is that if you measure that particle many times, you get very different answers for the \(x\) position of the particle. If you are certain about the position of a particle, this means that most measurements come out concentrated in a particular region, so even before measuring we could feel reasonably certain where the particle is in \(x\).
Here is an example of a wave-function whose position you are reasonably certain about:
where \(N\) is the normalization and \(\alpha=100\).
After setting up your grid and observables:
delta_x=0.1
xs=SetupGrid(20,delta_x)
X,P,P2=SetupObservables(xs,delta_x)
go ahead and plot the probability (absolute value squared of the wave-function) of getting different \(x\) values for this wave-function, remembering to label the x-axis as plt.xlabel("x") and the y-axis as plt.ylabel("probability"). From this plot, you should be able to see that if you measure \(x\) over and over again, you will almost always get an answer somewhere around 4.
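A minimal sketch of the normalize-and-plot step; the line defining psi is a placeholder and should be replaced with the wave-function formula given above evaluated on the grid xs:

psi=np.exp(-100*(xs-4)**2)   # placeholder: substitute the wave-function formula above (alpha=100, centered near x=4)
psi=psi/np.linalg.norm(psi)  # normalize so the probabilities sum to one
plt.plot(xs,np.abs(psi)**2)
plt.xlabel("x")
plt.ylabel("probability")
plt.show()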
# answer here
We are now hoping that there’s also a pretty narrow range of where the particle can be in momentum \(P\) - i.e. whether, if we measured the momentum over and over again, we would get essentially the same value. It’s hard to look at our plot and figure this out at the moment. That’s because the wave-function is currently plotted in the x-basis. We could also plot it in the momentum basis. If we did that, then we could look at the wave-function squared and see how spread out it is in momentum (and so whether we would expect to get the same momentum each time we measured it).
So we’d like to replot the wave-function as a function of \(p\) and not \(x\). Mind you, this is the same wave-function. It’s just a different way of representing it. If I show you a curve and I don’t label the axis so you know whether you are in the \(x\) basis or \(p\) basis, you don’t really know what you’re looking at.
To change a basis you multiply by a unitary matrix that corresponds to that basis change. We need the unitary matrix that multiplies a wave-function in the x-basis and converts it to the p-basis. Then we can plot it in the p-basis. You can get this unitary (along with the basis of p-points) by doing
def MomentumBasis(L,delta_x):
    # Same real-space grid as SetupGrid
    ns=np.array(range(0,int((L+2*delta_x)//delta_x)))
    xs=ns*delta_x-L/2
    # k-points of the discrete Fourier transform, reordered so negative momenta come second
    ks=np.array(ns*2*np.pi/(ns[-1]))
    l=len(ks)
    ks[l//2:]-=2*np.pi
    # Matrix that rotates a wave-function from the x-basis to the p-basis
    U=np.exp(1.j*np.outer(ns,ks))
    ks=ks/delta_x
    return ks,U
Now, by getting out this unitary (ps,U=MomentumBasis(20,0.1)) you can then apply U@psi to rotate into the Fourier basis. Take the wave-function in real space and rotate it into the Fourier basis. Plot the wave-function squared in the momentum basis and see if (after measurement) you are likely to get consistent answers for the momentum.
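A minimal sketch of this rotation, assuming psi is the (normalized) wave-function from part (a) evaluated on xs:

ps,U=MomentumBasis(20,0.1)
phi=U@psi                              # the same state, now expressed in the p-basis
phi=phi/np.linalg.norm(phi)            # renormalize after the basis change
order=np.argsort(ps)                   # sort the k-points so the plot reads left to right
plt.plot(ps[order],np.abs(phi[order])**2)
plt.xlabel("p")
plt.ylabel("probability")
plt.show()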
# answer here
While our position resolution is good, our momentum resolution is not that great - it seems we can only locate the momentum to within about 5.
b. Measuring Uncertainty#
To get a better handle on this, we want to be able to quantify the uncertainty. The measure of uncertainty essentially needs to be a measure of the spread of the wave-function, which can be quantified by the standard deviation of the wave-function in the position or momentum basis - i.e.
\(\Delta X = \sqrt{\langle X^2 \rangle - \langle X \rangle^2}\)
\(\Delta P = \sqrt{\langle P^2 \rangle - \langle P \rangle^2}\)
Recall that you can get the expectation value of an observable \(O\) by doing psi @ O @ psi.T.conjugate()
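For example, a minimal sketch of computing \(\langle X \rangle\) and \(\Delta X\) this way (assuming psi is a normalized wave-function on the grid and X comes from SetupObservables):

expX=psi@X@psi.T.conjugate()              # <X>
expX2=psi@(X@X)@psi.T.conjugate()         # <X^2>
deltaX=np.sqrt(np.real(expX2-expX**2))    # spread in position
print(deltaX)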
Write a function def uncertainty(psi) which returns \(\Delta X\), \(\Delta P\), and \(\Delta X \Delta P\). It will be convenient to normalize the wave-function as the first step of this function, so that you can send the function an un-normalized wave-function.
Now compute the uncertainty for the wave-function above. You should be able to quantify what we said above: the spread in position is smallish, but the spread in momentum is much larger, so if we measured the momentum we would get widely different values.
# answer here
Our wave-function does have a tuning parameter \(\alpha\) that will make it more or less concentrated in \(x\) (for example, try \(\alpha=0.0001\) in the x-basis). Write some code to tune \(\alpha\) from \(10\) down to \(0.0001\). On the same plot, show as a function of \(\alpha\)
the value of \(\Delta X\)
the value of \(\Delta P\)
the value of \(\Delta X \Delta P\)
What is the minimum value of \(\Delta X \Delta P\) you can find? The uncertainty principle tells us that \(\Delta X \Delta P \geq 0.5\). What’s the closest you get?
(You may find it useful here to work on a logarithmic grid of alpha - i.e. rather than alphas=np.arange(0.0001,10.01,0.01), use alphas=np.exp(np.arange(-10,10,1.0)).)
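A minimal sketch of the sweep; the line building psi is a placeholder for the wave-function formula from part (a), and uncertainty is the function you wrote above:

def psi_of_alpha(alpha):
    psi=np.exp(-alpha*(xs-4)**2)     # placeholder: substitute the actual wave-function formula from part (a)
    return psi/np.linalg.norm(psi)

alphas=np.exp(np.arange(-10,10,1.0))
results=np.array([uncertainty(psi_of_alpha(alpha)) for alpha in alphas])
plt.loglog(alphas,np.real(results[:,0]),label=r'$\Delta X$')
plt.loglog(alphas,np.real(results[:,1]),label=r'$\Delta P$')
plt.loglog(alphas,np.real(results[:,2]),label=r'$\Delta X \Delta P$')
plt.xlabel(r'$\alpha$')
plt.legend()
plt.show()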
# answer here
c. Minimizing Uncertainty#
So far we’ve been having trouble finding something that is compact both in real space and in momentum space. In particular, we haven’t found a wave-function that matches the Heisenberg limit. In this section, we are going to try to optimize for the minimum uncertainty. (You want to do this section with a grid spacing of delta_x=0.1.)
We will perform this optimization using python’s scipy.optimize
. To do this, we need to write an objective function def f(psi)
that takes the wave-function psi
and returns the thing we want to minimize.
You could do this by calling your uncertainty function and just returning \(\Delta X \Delta P\). It will be useful to add to our objective function a small penalty for cases where either \(\Delta X\) or \(\Delta P\) is very large (we want to avoid this because we want to know \(x\) and \(p\) both pretty well, and it helps the optimization because there is a subtle issue with our discretization as \(\Delta X \rightarrow 0\) and \(\Delta P\) gets very large). Therefore, also add \(\text{Real}(0.01 \max(\Delta X,\Delta P))\) to the return of our objective function.
Now we can call the optimization function. The optimization function needs to be sent our objective function and a guess - i.e. ans=scipy.optimize.minimize(f,guess). You can use as your guess \(\sin(0.3141592653589793 x)\), which you can check is not very localized.
Once the optimization is done, the new wave-function is ans.x. Check the uncertainty of your new wave-function and plot it both in the x-basis and the p-basis. You may notice that your wave-function looks Gaussian. You can double check this by trying to fit/plot a Gaussian on top of it.
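A minimal sketch of the optimization call; the objective simply wraps your uncertainty function with the penalty described above, and the names f, guess, and psi_opt are just illustrative choices:

def f(psi):
    dX,dP,dXdP=uncertainty(psi)
    # penalize states where either spread gets very large (see discussion above)
    return np.real(dXdP)+0.01*max(np.real(dX),np.real(dP))

guess=np.sin(0.3141592653589793*xs)
ans=scipy.optimize.minimize(f,guess)          # this can take a little while on a ~200-point grid
psi_opt=ans.x/np.linalg.norm(ans.x)
print(uncertainty(psi_opt))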
# answer here
# answer here
d. Coherent States#
There is a special wave-function that we will consider later called a coherent state.
The wave-function for a coherent state is
\(\Psi_\alpha(x) = N e^{-(x-\sqrt{2}\alpha)^2/2}\)
We will set \(\alpha=0.2\) for this part of the problem.
What are the uncertainties \(\Delta X\) and \(\Delta P\) for this (in fact, any) coherent state? Also plot the state.
# answer here
e. Not coherent states#
So far we’ve seen examples of states which saturate the uncertainty bound while simultaneously having the same uncertainty in \(x\) and \(p\). Verify that this state saturates the Heisenberg uncertainty bound while having different values of \(\Delta X\) and \(\Delta P\):
where \(N\) is the normalization.
# answer here
Exercise 2. Harmonic Oscillator in Real Space#
The Hamiltonian for the simple harmonic oscillator is
\(\hat{H} = \frac{\hat{P}^2}{2m} + \frac{1}{2} m \omega^2 \hat{X}^2\)
We know that \(\hat{P}=-i\hbar \frac{\partial}{\partial x}\), leaving us with
\(\hat{H} = -\frac{\hbar^2}{2m} \frac{\partial^2}{\partial x^2} + \frac{1}{2} m \omega^2 x^2\)
Working in units where \(\hbar=1\) and selecting \(m=\omega=1\), we are then left with
\(\hat{H} = -\frac{1}{2} \frac{\partial^2}{\partial x^2} + \frac{1}{2} x^2\)
a. Hamiltonian and Eigenvalues#
Let’s start by generating the matrix for this Hamiltonian. We are again going to use \(-10 \leq x \leq 10\) via
delta_x=0.01
xs=SetupGrid(20,delta_x)
X,P,P2=SetupObservables(xs,delta_x)
You can generate the kinetic piece of the Hamiltonian in the same way that you did for the particle in a box. Like the finite-well potential, the potential is still on the diagonal and is equal to \(x^2\). Notice that with the units we are using, both the kinetic and potential pieces must be multiplied by 1/2. Let us start by plotting the first 10 eigenenergies as dots (marker='o',linestyle='None'). What important qualitative feature do you notice about the energy levels?
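A minimal sketch of one way to build and diagonalize this Hamiltonian (np.linalg.eigh returns the eigenvalues in ascending order; the names H, e, and v are illustrative choices):

H=0.5*P2+0.5*np.diag(xs**2)      # kinetic piece (P^2/2) plus potential piece (x^2/2) on the diagonal
e,v=np.linalg.eigh(H)            # eigenvalues e (sorted) and eigenvectors v (columns)
plt.plot(range(10),e[:10],marker='o',linestyle='None')
plt.xlabel("n")
plt.ylabel("Energy")
plt.show()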
# answer here
b. Eigenstates#
Now we want to plot the lowest three eigenstates. Put a label on each of the lowest three eigenstates by adding a keyword argument to your plot command (label="Eigenstate 0") and include a legend with plt.legend() before your plt.show().
Notice how many nodes (zeros) the n’th eigenstate has.
In class, you learned that the expected solution to the Harmonic Oscillator involves Hermite polynomials. In particular, the n’th eigenstate should be
\(\Psi_n(x) = \left(\frac{m\omega}{\pi\hbar}\right)^{1/4} \frac{1}{\sqrt{2^n n!}}\, H_n\!\left(\sqrt{\frac{m\omega}{\hbar}}\, x\right) e^{-m\omega x^2/(2\hbar)}\)
where \(H_n\) are the Hermite polynomials.
In python, to get the Hermite polynomials, use Herm.hermval(xi, herm_coeffs), where \(xi=\sqrt{m\omega/\hbar}\,x\) and herm_coeffs needs to be a numpy array of size n+1 with a 1 in spot \(n\) and 0 otherwise.
Plot the expected answers on top of your generated answers.
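A minimal sketch of evaluating the analytic eigenstate for \(m=\omega=\hbar=1\) (so xi is just x). The factor of \(\sqrt{\Delta x}\) is there so the discrete vector has norm one, matching the numerically normalized eigenvectors; v is assumed to be your matrix of eigenvectors ordered by energy:

def analytic_eigenstate(n,xs):
    herm_coeffs=np.zeros(n+1)
    herm_coeffs[n]=1                              # pick out the n'th Hermite polynomial
    psi=Herm.hermval(xs,herm_coeffs)*np.exp(-xs**2/2)
    psi*=(1/np.pi)**0.25/np.sqrt(2**n*math.factorial(n))
    return psi*np.sqrt(delta_x)                   # normalize on the discrete grid

n=2
plt.plot(xs,v[:,n],label="numerical")
plt.plot(xs,analytic_eigenstate(n,xs),linestyle='--',label="analytic")
plt.legend()
plt.show()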
You may occasionally find that the expected and numerical answers are upside down from each other. This is because the global phase of a wave-function doesn’t matter: you can always multiply a wave-function by -1 and it’s still physically the same state. If you find this is happening to you, go ahead and multiply appropriately by -1 to fix it.
In addition, print out the uncertainty of these wave-functions. Are they anywhere near the Heisenberg limit?
# answer here
c. Dynamics and Ehrenfest theorem#
Let’s start with the wave-function psi=(np.sqrt(0.1)*v[:,0]+np.sqrt(0.25)*v[:,1]+1.j*np.sqrt(0.65)*v[:,2]).astype(np.complex128)
We want to do time-evolution with this wave-function. You can use your two functions TimeEvolution and TimeEvolutionOperator from your previous assignment.
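If you need a reminder, here is a minimal sketch of what such functions might look like. This is just one possible implementation using the matrix exponential; your versions from the previous assignment may differ in signature and approach:

import scipy.linalg

def TimeEvolutionOperator(H,delta_t):
    # U(delta_t) = exp(-i H delta_t), with hbar = 1
    return scipy.linalg.expm(-1.j*H*delta_t)

def TimeEvolution(psi,U,steps):
    # apply U repeatedly, storing a snapshot after every step
    snapshots=[psi]
    for _ in range(steps):
        psi=U@psi
        snapshots.append(psi)
    return np.array(snapshots)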
We do want to update the update function so that the animation is slightly more informative. In particular, we are going to want to plot some information about the potential and energy of our state. Because our y-axis is currently a probability (which is at most 1), we will need to rescale the energy (which can get large) so that everything fits sanely on the same axis. To do this, we are going to rescale all our energies by scale=max_value/energy, where max_value is the largest probability your wave-function reaches (which we’ve already been calculating for the animation) and energy is the energy of our initial wave-function, which you can compute (recall the energy doesn’t change as a function of time).
Modifications of the Update Function (see the sketch below for one way to fold these into your update function)
Plot the Harmonic Oscillator potential on your animation as plt.plot(xs,potential*scale)
Draw a red dashed line at the energy of our state - i.e. plt.axhline(energy*scale)
Increase the ylim to 1.5 times max_value
Make the xlim between -5 and 5 (this you may have to change for other animations - it’s not generic, but it makes everything look better)
Perform time-evolution with the initial wave-function with a \(\delta t=0.1\) and for a total time of \(t=40\). Store the snapshot of the wave-function at each time and animate it.
As in the particle in the box, modify the animation code above to plot a vertical line (plt.axvline(...)) at the average position of \(x\) at each moment in time.
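A sketch of what the modified update function might look like. Here H is your Hamiltonian matrix, psi0 your initial state, arrays your stored snapshots, and max_value the largest probability; these names are assumptions taken from the sketches above and from your own code:

potential=0.5*xs**2
energy=np.real(psi0@H@psi0.T.conjugate())        # energy of the initial state (constant in time)
scale=max_value/energy

def update(frame, max_value, skip=1):
    plt.cla()
    snapshot=arrays[::skip][frame]
    plt.plot(xs, np.abs(snapshot)**2)                              # probability
    plt.plot(xs, potential*scale)                                  # rescaled potential
    plt.axhline(energy*scale, color='red', linestyle='--')         # rescaled energy of the state
    plt.axvline(np.real(snapshot@X@snapshot.T.conjugate()))        # <x>(t)
    plt.ylim(0, 1.5*max_value)
    plt.xlim(-5, 5)
    plt.xlabel('x')
    plt.ylabel('Value')
    plt.title(f'Frame {skip*frame+1}/{len(arrays)}')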
# answer here
# answer here
# answer here
# answer here
As in the particle in the box, take your list of wave-functions as a function of time and generate from each wave-function a value of \(\langle x \rangle(t)\) and \(\langle p \rangle(t)\). On the same plot (using subplots as usual) show
\(\langle x \rangle(t)\)
\(\langle p \rangle(t)\)
\(\partial \langle x \rangle(t)/\partial t\)
What do you notice about the last two curves?
We are also going to add one more subplot this time: ax=plt.subplots(3,1)[1]
For our third subplot, what we’d like to plot is
\(\frac{\partial \langle p(t) \rangle }{\partial t}\), generated by finite differences or np.gradient using your momentum curve above
\(\langle V'(x) \rangle(t)\), the average of the derivative of the potential over the wave-function. Take the derivative of the potential by hand and then think about what observable you need to get this.
You should find these latter two quantities are on top of each other.
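A minimal sketch of the finite-difference piece. Here x_expect and p_expect are assumed to be the arrays of \(\langle x \rangle(t)\) and \(\langle p \rangle(t)\) you generated above, and the observable for \(\langle V'(x) \rangle\) is left for you to work out:

delta_t=0.1
dpdt=np.gradient(p_expect, delta_t)          # d<p>/dt from the <p>(t) curve
ts=np.arange(len(p_expect))*delta_t
ax=plt.subplots(3,1)[1]
ax[2].plot(ts, dpdt, label=r'$d\langle p\rangle/dt$')
ax[2].legend()
plt.show()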
This (and the previous plots) are a validation of Ehrenfest’s theorem, which tells us
\(\frac{\partial \langle x \rangle}{\partial t} = \frac{\langle p \rangle}{m}\)
and
\(\frac{\partial \langle p \rangle}{\partial t} = -\left\langle \frac{\partial V}{\partial x} \right\rangle\)
Finally go ahead and make a phase plot of momentum vs position.
# answer here
d. Coherent States#
We now wish to run time dynamics on a coherent state. Recall that the coherent state (as above) is
\(\Psi_\alpha(x) = N e^{-(x-\sqrt{2}\alpha)^2/2}\)
where \(N\) normalizes the wave-function. We will use \(\alpha =3\).
Do time-dynamics on this system generating both the animation as well as the other two plots above for this new dynamics.
Add one more line to your plots: the position and velocity of a classical oscillator in the same Harmonic potential starting at \(x=3\sqrt{2}\)
What do you notice that’s interesting about the time-evolution of the coherent state?
Does something not quite look right in your animation?
# answer here
# answer here
# answer here
# answer here
Exercise 3. Raising and Lowering Operators#
a. Raising and Lowering Operators#
With the Harmonic oscillator, we’ve learned that it is useful to look at raising and lowering operators.
Build both \(a^\dagger\) and \(a\) in the real space basis.
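A minimal sketch of one way to build them, using the standard definitions \(a = \sqrt{\tfrac{m\omega}{2\hbar}}\left(\hat{X} + \tfrac{i}{m\omega}\hat{P}\right)\) and its Hermitian conjugate, with \(m=\omega=\hbar=1\) and the X and P matrices from SetupObservables:

a=(X+1.j*P)/np.sqrt(2)          # lowering operator
adag=(X-1.j*P)/np.sqrt(2)       # raising operator (Hermitian conjugate of a)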
Verify that the operators are Hermitian conjugates of each other (by daggering \(a\), subtracting \(a^\dagger\), and making sure the result is zero - i.e. check that the maximum of the absolute value of the difference is zero).
Look at the matshow of the upper left of the matrix: plt.matshow(np.real(a[0:10,0:10])). Can you understand why it looks the way it does?
We expect that the raising operator has the property that when it acts on an eigenstate \(|n-1\rangle\) it produces \(\sqrt{n}\) times the eigenstate \(|n\rangle\). Let’s check that by plotting. Plot \(\sqrt{4}\, |\Psi_4 \rangle\) and \(a^\dagger |\Psi_3\rangle\) and verify that they give the same result. Again, be cognizant that you might need to flip one of the states.
Similarly, let’s check that \(a\) lowers a state. Verify that eigenstate 4 is the same as eigenstate 5 lowered, with a coefficient of \(\sqrt{5}\).
Also verify that the lowering operator destroys the ground state - i.e. check that \(a \Psi_0\) is actually zero. (Remember to check the scale of your graph, as it might have some features but essentially be zero.)
# answer here
b. Verifying the norms and number operators#
Now we would like to further verify that the raising and lowering operators always act with the additional \(\sqrt{N}\) factor. Since our eigenstates start normalized, if we compute the norm (np.linalg.norm) of the eigenstates after they’ve been hit by the raising or lowering operator, you should find that their norm is \(\sqrt{N}\). Verify this by
plotting the norm of the first ten eigenstates after being hit by the raising operator and, separately, by the lowering operator
plotting \(\sqrt{N}\)
and checking that they are the same
We can also check that \(a^\dagger a\) is a number operator. Plot, again for the first ten eigenstates, the expectation value of \(a^\dagger a\) in those eigenstates.
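A minimal sketch of the norm check for the raising operator (v is assumed to be your matrix of eigenvectors ordered by energy; from the relation above, the norm of \(a^\dagger \Psi_n\) should come out \(\sqrt{n+1}\)):

ns=np.arange(10)
raise_norms=[np.linalg.norm(adag@v[:,n]) for n in ns]
plt.plot(ns,raise_norms,marker='o',linestyle='None',label=r'$\|a^\dagger \Psi_n\|$')
plt.plot(ns,np.sqrt(ns+1),label=r'$\sqrt{n+1}$')
plt.xlabel("n")
plt.legend()
plt.show()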
# answer here
c. Write it in the basis of number operators#
So far we’ve been looking at these operators in the position basis. We would like to go ahead and write \(a^\dagger\) and \(a\) in the Harmonic Oscillator basis. In this basis, what we expect to see is that both the raising and lowering operators are non-zero only just off the diagonal (one just above, the other just below). To rotate an operator from the position basis into the Harmonic Oscillator basis, you want to do v.T.conjugate() @ O @ v for an operator O. Go ahead and rotate \(a\), \(a^\dagger\), and \(aa^\dagger\) into the Harmonic Oscillator basis and look at the top \(10 \times 10\) chunk. Does it make sense?
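A minimal sketch of the rotation and inspection (again using the eigenvector matrix v and the a and adag built earlier):

a_ho=v.T.conjugate()@a@v           # lowering operator in the Harmonic Oscillator basis
adag_ho=v.T.conjugate()@adag@v     # raising operator in the Harmonic Oscillator basis
plt.matshow(np.abs(a_ho[0:10,0:10]))
plt.colorbar()
plt.show()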
# answer here
d. Warts#
So far everything has worked out well when we discretized things. In this section, we just mention a couple of warts that you need to look out for.
The simplest thing is just that the eigenstates and eigenvalues start diverging from the true answer due to the finite \(L\) and the discretization. This is very standard, but you can see it here by plotting the eigenstates out to \(n=100\). What do you notice? To fix this, you can go ahead and increase \(L\) to 40 and see that the eigenstates out to \(n=100\) look fine.
# answer here
We learned in quantum mechanics that \(x\) and \(p\) obey canonical commutation relations - i.e. \([X,P]=i\hbar\) (which means explicitly it’s the identity matrix times \(i\hbar\)).
Let’s test this. Take the \(X\) and \(P\) operators and compute their commutator. Do a matshow of the top \(10 \times 10\) piece of the matrix.
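A minimal sketch of the check (the commutator comes out purely imaginary, so the imaginary part is what is shown):

comm=X@P-P@X                            # the commutator [X,P]
plt.matshow(np.imag(comm[0:10,0:10]))   # top 10x10 block
plt.colorbar()
plt.show()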
# answer here
You may notice that you get something that is not diagonal at all. Instead, there is \(i\hbar\) right off the diagonal. What’s going on!?
Remember that the rows and columns are labelled by \(x\). This means that what we expected was
\([X,P]_{x,x'} = i\hbar\, \delta_{x,x'}\)
But what we got was the piece just off the diagonal carrying the \(i\hbar\) - i.e. you found something of the form
\([X,P]_{x,x'} \approx i\hbar\, \frac{\delta_{x',x+\Delta x} + \delta_{x',x-\Delta x}}{2}\)
This is very close, especially since you can take \(\Delta x\) to be very, very tiny. So we almost got the right thing.
What went wrong?
When you actually have finite matrices, the trace of a commutator always has to be zero:
\(\text{Tr}([A,B]) = \text{Tr}(AB) - \text{Tr}(BA) = \text{Tr}(AB) - \text{Tr}(AB) = 0\)
where the two traces cancel because the trace is invariant under cyclic permutation of the matrices. So it’s just impossible to have \(i\hbar\) down the entire diagonal. That said, given the closeness, it doesn’t cause us much trouble.
Finally, we have learned that
\(\hat{H} = \hbar\omega\left(a^\dagger a + \tfrac{1}{2}\right)\)
is the Harmonic Oscillator Hamiltonian.
Go ahead and diagonalize this in the x-basis and plot the first 100 eigenvalues.
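A minimal sketch of one possible construction; it uses the a and adag built above, so that the kinetic term involves P@P rather than P2:

H_aa=adag@a+0.5*np.identity(len(xs))   # hbar*omega*(a^dagger a + 1/2) with hbar=omega=1
e_aa=np.linalg.eigvalsh(H_aa)          # sorted, real eigenvalues
plt.plot(range(100),e_aa[:100],marker='o',linestyle='None')
plt.xlabel("n")
plt.ylabel("Energy")
plt.show()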
You’ll notice a set of eigenvalues that you are used to (which correspond to the eigenvectors you are used to) and an erroneous set of eigenvalues. This is essentially coming from the fact that P@P and P2 are different matrices. You might think that this is some technical issue, but it’s actually deep and fundamental and causes all sorts of trouble in lattice gauge theory. It goes by the name of fermion doubling, and you can see a description of the problem (including the piece that’s very relevant here) if you look at the Wikipedia article on fermion doubling, in particular the section on derivative discretization.
# answer here
# answer here