Particle with a Finite Potential#
Author:
Date:
Time spent on this assignment:
In this assignment, we will consistently use grids and observables that we will set up in this way. This is very similar to how you set things up for the particle-in-a-box assignment. We've also updated your animation code to include a skip parameter, which will allow you to show only every skip'th frame. This will make your animation generate much faster (if you choose a big skip). It can be called like
fig, ax = plt.subplots()
skip=10
max_value = np.max([np.max(np.abs(array)**2) for array in arrays[1:]])
animation = FuncAnimation(fig, update, frames=len(arrays)//skip, interval=10*skip, fargs=(skip,xs),repeat=False)
display(HTML(animation.to_jshtml()))
plt.close()
Note: You will modify and improve your animation function later in the assignment
Exercise 1. Particle in a Finite Well#
In the previous assignment we worked on a particle in a box with infinite walls. In this assignment, we are going to work instead with a finite potential well, again looking at properties such as eigenstates and time dynamics.
a. Setting up the Hamiltonian#
Consider a particle in a box of length 40 spanning \(-20 \leq x \leq 20\), with
\(V=0\) for \(-10 \leq x \leq 10\)
\(V=0.5\) for \(-20 \leq x \leq -10\)
\(V=0.5\) for \(10 \leq x \leq 20\)
Generate a discretized grid xs
with the same \(\delta x=0.01\) that you used previously - i.e. xs=SetupGrid(L,delta_x)
Also generate a numpy array potential
which has the correct potential at each value of \(x\). Verify this all looks correct by plotting \(V\) vs. \(x\).
It will also be convenient here to set up your observables using
X,P,P2=SetupObservables(xs,delta_x)
although we won't use them until later
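As a concrete sketch (assuming SetupGrid from the previous assignment returns points spanning \(-L \leq x \leq L\) with spacing delta_x - the helper below is a stand-in mimicking that convention), the grid and potential might be built like this:

```python
import numpy as np

# Stand-in for the SetupGrid helper from the previous assignment;
# assumption: it returns grid points spanning [-L, L] with spacing delta_x.
def SetupGrid(L, delta_x):
    return np.arange(-L, L + delta_x / 2, delta_x)

L, delta_x = 20, 0.01
xs = SetupGrid(L, delta_x)

# V = 0 inside the well (|x| <= 10), V = 0.5 in the finite walls (10 < |x| <= 20)
potential = np.where(np.abs(xs) <= 10, 0.0, 0.5)
```

Plotting potential against xs should then show a flat-bottomed well with walls of height 0.5.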
###ANSWER HERE
We now need to again set up the Hamiltonian \(H = \frac{\hat{p}^2}{2m} + V(\hat{x})\).
You already know how to turn the first (kinetic) term into a matrix. Because we are working in the \(x\) basis, the \(V\) term is also very simple: it is just a matrix with \(V(x)\) on the diagonal (np.diag(V)). To get the matrix for the total Hamiltonian, we just add these two matrices.
Set up this Hamiltonian, and then get the eigenvalues and eigenvectors.
Plot the first 20 eigenvalues. You should notice two qualitatively different behaviors, which switch over when the eigenvalue energy gets greater than 0.5 (the height of our potential wall).
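A minimal sketch of this diagonalization (assuming the standard three-point finite-difference kinetic term with \(\hbar=m=1\); a coarser \(\delta x = 0.1\) is used here only so the example runs quickly):

```python
import numpy as np

# Grid and finite-well potential on a coarse grid (hbar = m = 1)
delta_x = 0.1
xs = np.arange(-20, 20 + delta_x / 2, delta_x)
V = np.where(np.abs(xs) <= 10, 0.0, 0.5)
N = len(xs)

# Kinetic term -1/2 d^2/dx^2 via the three-point finite-difference stencil
KE = (2 * np.eye(N)
      - np.diag(np.ones(N - 1), 1)
      - np.diag(np.ones(N - 1), -1)) / (2 * delta_x**2)
H = KE + np.diag(V)

# eigh returns eigenvalues in ascending order, eigenvectors as columns of v
e, v = np.linalg.eigh(H)
```

Plotting e[:20] should show the change in behavior as the energies cross 0.5.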
###ANSWER HERE
###ANSWER HERE
Now we would like to plot the energies in a slightly different way and plot the eigenvectors on top of them. Let’s start by again plotting the potential as a function of distance.
On top of this potential we are going to plot the lowest 10 energy levels. Plot them with plt.axhline(the_ith_energy,linestyle='--',alpha=0.5)
You should notice that some of the energies are less than the potential and some of them are more than the potential.
Now on top of this, we want to plot the lowest 10 eigenstates. To make them easier to understand, let's shift each eigenstate vertically by adding e[i]
to the i'th eigenstate - i.e. plt.plot(v[:,i]+e[i])
. Notice that if you do this, each eigenstate's origin is centered on its energy.
###ANSWER HERE
b. Eigenstates in the forbidden region#
We now want to look at the eigenstates in the forbidden region. To begin with, let's just qualitatively notice the difference between the eigenstates which are lower in energy than the finite wall and the ones that are higher in energy than the finite wall.
Q: As the wave-function goes to \(x>10\), do the eigenstates approach zero
for the eigenstates whose energy is greater than 0.5?
for the eigenstates whose energy is less than 0.5?
When the energy is larger than the potential, notice that the larger the value of \(E-V\) (at a given \(x\)), the shorter the wavelength of the wave.
When the energy is smaller than the potential (i.e. in the forbidden region), we have learned that the eigenstate should decay exponentially.
That means that if we look at any of the eigenstates with \(e_i<0.5\), we should find that they approach zero exponentially. It might look that way by eye, but we want to check this explicitly.
First, we need to learn how to extract the part of the wave-function in the forbidden region (where the energy is less than the potential). To do this, we should define a mask - i.e. mask=xs>10
to focus on the forbidden region.
Then we can plot the absolute value of the eigenstate (because the eigenstate can approach zero from below) against xs only in this region - i.e. plt.plot(xs[mask],np.abs(v[:,n][mask]))
for eigenstates \(n \in \{0,2,4\}\). Label each of your plots by including as a keyword label=n
and then add a plt.legend()
at the end.
Then go ahead and plot it on a log-scale - plt.yscale('log')
. If it looks linear on a log scale, this means that the wave-function is decaying exponentially. In practice, it may look like it diverges from that linear curve at large \(x\). This is essentially because you are running into machine precision. Notice that the lower-energy eigenstates have steeper slopes (i.e. decay faster); this is because they are deeper in the forbidden region.
We can actually quantify this (this part is optional but interesting if you want to show that you get exactly the correct exponential decay). Our expectation is that the wave-function goes as \(e^{-kx}\) where \(k = \sqrt{2m(V_0 - e_i)}\); with \(m=1\), \(\hbar=1\), and \(V_0=0.5\), this gives \(k^2 = 2(0.5 - e_i)\).
This means that if we plot the slopes squared of these three lines versus \(e_i-0.5\), where \(e_i\) is the eigenstate energy, we should get a straight line with a slope of \(-2\).
In order to get the slopes of the three lines, we can use Python to fit them. Again let's define a mask that is both in the forbidden region and not at large \(x\), where machine precision starts to cause trouble: mask=np.logical_and(xs>10,xs<16)
Now use np.polyfit(x,y,1)[0]
to get the slope. Remember that y
needs to be the log of the wave-function (appropriately masked).
Then plot the slope-squared versus the energy difference from 0.5. You should find a straight line.
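A sketch of the whole check (rebuilding a small finite-well diagonalization so the block is self-contained; \(\hbar=m=1\) assumed). The slope of \(\log|\psi|\) in the forbidden region should be \(-k\), so slope\(^2 \approx 2(0.5-e_i)\):

```python
import numpy as np

# Self-contained finite-well diagonalization (coarse grid, hbar = m = 1)
delta_x = 0.1
xs = np.arange(-20, 20 + delta_x / 2, delta_x)
V = np.where(np.abs(xs) <= 10, 0.0, 0.5)
N = len(xs)
KE = (2 * np.eye(N) - np.diag(np.ones(N - 1), 1)
      - np.diag(np.ones(N - 1), -1)) / (2 * delta_x**2)
e, v = np.linalg.eigh(KE + np.diag(V))

# Fit log|psi| in the forbidden region, away from machine-precision trouble
mask = np.logical_and(xs > 10, xs < 16)
slopes = [np.polyfit(xs[mask], np.log(np.abs(v[:, n][mask])), 1)[0]
          for n in (0, 2, 4)]

# slope^2 should match 2*(0.5 - e_i), i.e. slope -2 when plotted vs (e_i - 0.5)
for n, s in zip((0, 2, 4), slopes):
    print(n, s**2, 2 * (0.5 - e[n]))
```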
###ANSWER HERE
c. Dynamics#
Like we did in the particle in the box, we are going to do some dynamics.
Because we are going to do some non-trivial time evolution in this assignment, it’s worth encapsulating your time-evolution into two functions:
TimeEvolutionOperator(H,delta_t)
should take the Hamiltonian and your time step and return \(e^{-iH\Delta t}\). It's important to note that for each Hamiltonian, you only need to compute this time-evolution operator once, even if you are time-evolving many different wave-functions. If you are careful about this, you will save yourself significant time.
TimeEvolution(psi,M,steps)
takes the initial wave-function psi, the time-evolution operator M, and the number of steps, and returns a list of arrays containing a snapshot of the wave-function at every time step.
We also want to update the update function so that the animation is slightly more informative. In particular, we want to plot some information about the potential and energy of our state. Because our y-axis is currently probability (which is at most 1), we will need to rescale the energy (which can get large) so that everything fits sanely on the same axis. To do this, we are going to rescale all our energies by scale=max_value/energy, where max_value is the largest probability your wave-function reaches (which we've already been calculating for the animation) and energy is the energy of our initial wave-function, which you can compute (recall that the energy doesn't change as a function of time).
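The two functions might be sketched like this (scipy.linalg.expm computes the matrix exponential; the tiny 2×2 Hamiltonian is only a stand-in so the example runs instantly):

```python
import numpy as np
import scipy.linalg

def TimeEvolutionOperator(H, delta_t):
    # Compute M = exp(-i H delta_t) once; reuse it for every step and state
    return scipy.linalg.expm(-1.j * delta_t * H)

def TimeEvolution(psi, M, steps):
    # Repeatedly apply M, saving a snapshot of psi at every time step
    snapshots = [psi]
    for _ in range(steps):
        psi = M @ psi
        snapshots.append(psi)
    return snapshots

# Tiny stand-in Hamiltonian just to demonstrate the calling pattern
H = np.array([[0.0, 1.0], [1.0, 0.0]])
M = TimeEvolutionOperator(H, 0.5)
snaps = TimeEvolution(np.array([1.0 + 0j, 0.0]), M, 10)
```

Because M is unitary, the norm of psi is preserved at every snapshot.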
You will also need to make sure you've set up your observables
X,P,P2=SetupObservables(xs,delta_x)
Modifications of the Update Function
Add a line which takes the energy and max_value sent to it and computes a scale.
Plot a red dashed line at the energy of our state - i.e. plt.axhline(energy*scale,color='red',linestyle='--')
Plot the potential (scaled by the scale).
Give def update(frame, skip, xs, positions, potential, max_value, energy) the relevant additional parameters. To do this you will need to compute the positions using your X from SetupObservables (outside the function).
We will start with the wave-function (the same superposition of eigenstates as for our particle in the box, but the eigenstates themselves differ somewhat, so the wave-function differs somewhat too)
Go ahead and animate this wave-function as a function of time again for a time-step \(\delta t=0.5\) and out to time \(T=200\).
Separately, and in addition, plot the snapshots at \(T=0,100,200\) in one plot.
###ANSWER HERE
###ANSWER HERE
###ANSWER HERE
If you go back and look at your animation from the particle-in-a-box, it should actually look very similar. The main difference here is that the particle has leaked slightly out of the hard box (-10 to 10), so you see some weight out there.
Let’s also go ahead and plot some observables. Like you did in the particle-in-a-box go ahead and plot
the momentum and position as a function of time (on the same subplot)
the phase plot of momentum vs position
the energy as a function of time (making sure your scale is sane)
###ANSWER HERE
###ANSWER HERE
Again notice the marked similarity to a particle in a box with infinite walls.
d. Dynamics above the potential barrier#
In this problem, let us consider a superposition of eigenstates that is above the potential barrier. For example, let’s take the superposition of eigenstates 9 and 10 (both above the potential) as
Produce the same set of animations and observable plots for this wave-function.
Notice in this case that significant chunks of the wave-function are in the forbidden region.
###ANSWER HERE
###ANSWER HERE
###ANSWER HERE
Exercise 2. Eigenstates of Finite Barriers#
a. Diagonalize#
Now we will consider a potential with a finite barrier. Set the potential to be zero except in the range \(-1<x<1\), where the potential is \(V_0=5\). Use \(L=20\).
Start by plotting the potential.
Then plot the lowest 5 eigenvalues and the lowest 2 eigenvectors. What you will find is that the two lowest eigenvectors are nearly degenerate - they have almost the same energy. Why is this? Can you tell from the actual eigenvectors that this would have been the case?
For this problem, it will run much faster if you use a more reasonable \(\delta x = 0.1\)
delta_x=0.1
xs=SetupGrid(L,delta_x)
###ANSWER HERE
###ANSWER HERE
b. Effects of barrier height and width on eigenstates (probably remove)#
We would like to understand how the barrier height and width affect the 0'th eigenstate. Given that we know there is an exponential decay of the eigenstate in the forbidden region, one interesting thing to look at is the minimum value of the eigenstate around zero. One way to look at this is to do np.min(np.abs(v[:,0][mask]))
where mask=np.logical_and(xs>-5,xs<5)
is centered away from 10 and -10, where the eigenstate gets tiny simply because of the hard walls there.
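A self-contained sketch of this probe (assuming hard walls at \(\pm 10\), matching the mask description above, and a barrier of height 5 on \(-1<x<1\)); the near-degeneracy of the two lowest states can be checked at the same time:

```python
import numpy as np

# Finite barrier: V = 5 for |x| < 1, zero elsewhere; hard walls at +/-10
delta_x = 0.1
xs = np.arange(-10, 10 + delta_x / 2, delta_x)
V = np.where(np.abs(xs) < 1, 5.0, 0.0)
N = len(xs)
KE = (2 * np.eye(N) - np.diag(np.ones(N - 1), 1)
      - np.diag(np.ones(N - 1), -1)) / (2 * delta_x**2)
e, v = np.linalg.eigh(KE + np.diag(V))

# Minimum of |psi_0| away from the hard walls: probes the decay under the barrier
mask = np.logical_and(xs > -5, xs < 5)
min_val = np.min(np.abs(v[:, 0][mask]))
```

The splitting e[1]-e[0] is tiny compared to the gap e[2]-e[1], which is the near-degeneracy discussed above.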
Produce a series of potentials with width 2 centered around zero but with heights ranging from 0 to 4 (use steps of 0.1 between 0 and 2 and steps of 0.4 between 2 and 4 to avoid things being too slow).
Then plot this minimum value as a function of the height.
Do a similar calculation where you let the height be fixed to \(V_0=1\) and let the width change from 0 to 4 (with similar jumps).
###ANSWER HERE
###ANSWER HERE
###ANSWER HERE
###ANSWER HERE
Exercise 3. Faster Dynamics and Finite Barriers#
So far we've done dynamics by evaluating \(\exp[-i \Delta t H]\) for a finite \(\Delta t\). This has the advantage of being the exact dynamics, but when \(L\) is large or \(\Delta x\) is small this gets very expensive to calculate, limiting our ability to do interesting calculations.
Here we are going to introduce an alternative approach. Consider the time-evolution operator
\(e^{-i H \Delta t} = e^{-i (\hat{T} + \hat{V}) \Delta t}\)
When \(\Delta t \rightarrow 0\), we can write this as
\(e^{-i H \Delta t} \approx e^{-i \hat{V} \Delta t}\, e^{-i \hat{T} \Delta t}\)
Instead of thinking of time evolution as applying the exponential of the Hamiltonian, we can think of it as alternately applying the exponential of the potential and kinetic terms.
If our wave-function is a vector in the position basis (which it mainly has been), then applying the potential term (treating V
as an array - earlier we called it potential
) is very easy, because multiplying by a diagonal matrix is the same as pointwise multiplying by a vector - i.e. all three of the lines below are equivalent:
scipy.linalg.expm(-1.j*dt*np.diag(V)) @ psi
np.diag(np.exp(-1.j*dt*V)) @ psi
np.exp(-1.j*dt*V) * psi
The first line is the sort of thing we've been doing all along (but with just the potential piece).
The last line doesn't involve a matrix at all, just a vector. Let us define M_half_x=np.exp(-1.j*dt/2*V) (note that V here is the plain array, not np.diag(V)).
Unfortunately, the kinetic piece is complicated if our wave-function is in the position basis. But if our wave-function is in the momentum basis, then it is easy, because in the momentum basis that matrix is diagonal, so we can just pointwise multiply by the vector M_p=np.exp(-1.j*dt*(ks**2)/(2*m)).
But it’s easy to move the wave-function back and forth between the position and momentum basis via an FFT so we can always be in the “good” basis when we have to apply the operator.
This leaves us with the following algorithm to apply a single time step \(\Delta t\), starting with a wave-function psi
in the position basis:
(1) apply M_half_x to psi
(2) apply M_half_x to psi again (two half-steps make one full potential step)
(3) switch to the momentum basis: psi_k=np.fft.fft(psi)
(4) apply M_p to psi_k
(5) switch back to the real-space basis: psi=np.fft.ifft(psi_k)
We then need to do this over and over again for many time steps (for technical reasons, it's actually more accurate to put step (1) after step (5), so that each kinetic step is sandwiched between two potential half-steps).
You are making a tiny error at each time step, but this algorithm is much better than our previous approaches. You never need to build a big matrix, diagonalize a Hamiltonian, etc. To implement this you will need the k-vectors ks=np.fft.fftfreq(len(xs), delta_x)*(2*np.pi)
There is one additional wart: this will only work on wave-functions that go to zero at the edge of your box, but since you can use such a large box, this shouldn't be a problem.
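A sketch of one split-operator step (with step (1) moved after step (5) as suggested, giving the symmetric half-potential / kinetic / half-potential ordering; \(\hbar=m=1\)):

```python
import numpy as np

# Grid, finite-well potential, and FFT k-vectors (hbar = m = 1)
delta_x, dt = 0.1, 0.01
xs = np.arange(-40, 40, delta_x)
V = np.where(np.abs(xs) <= 10, 0.0, 0.5)
ks = np.fft.fftfreq(len(xs), delta_x) * (2 * np.pi)

M_half_x = np.exp(-1.j * dt / 2 * V)   # pointwise half potential step
M_p = np.exp(-1.j * dt * ks**2 / 2)    # full kinetic step in the momentum basis

def SplitStep(psi):
    psi = M_half_x * psi               # half potential step
    psi_k = np.fft.fft(psi)            # to the momentum basis
    psi_k = M_p * psi_k                # kinetic step
    psi = np.fft.ifft(psi_k)           # back to the position basis
    return M_half_x * psi              # second half potential step

# Normalized Gaussian test state; the step is unitary, so the norm is preserved
psi = np.exp(-0.5 * xs**2).astype(complex)
psi /= np.sqrt(np.sum(np.abs(psi)**2) * delta_x)
psi = SplitStep(psi)
```

Only pointwise multiplications and FFTs appear: no large matrix is ever built or diagonalized.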
a. Fast Time evolution for the Finite Well#
In this part, we will do time evolution again on the finite well but this time much more efficiently. The finite well parameters we will use are
dt=0.01
delta_x=0.1
L=40 (for a width of 80)
T=1 (this will be the time at which your Time Evolution runs - you will run it many times but each time saving a snapshot at each \(T\))
an initial wave-function \(|\Psi\rangle = \sqrt{0.1}|v_0\rangle + \sqrt{0.25}|v_1\rangle + i \sqrt{0.65}|v_2\rangle \) generated from the finite-well potential with \(V(x)=0.5\) when \(|x|>10\) and \(V(x)=0\) otherwise. (To get \(\Psi\), you will need to work with your Hamiltonian again, which is slow - this is just for a test.)
Write a function
def SetupWell():
# do stuff here
return psi_init,V,dt,T,xs,freqs
which returns the relevant initial wave-function, potential, time-step, time-per-snapshot, grid, and frequencies. Recall that the frequencies corresponding to our FFT are
freqs = np.fft.fftfreq(len(xs), delta_x)*(2*np.pi)
Now write a function def TimeEvolution(T,dt,V,psi,freqs)
which does the time-evolution returning the new psi
.
Run your time-evolution out to time \(400T\) like
psi,V,dt,T,xs,freqs=SetupWell()
arrays=[psi]
for step in range(0,400):
    psi=TimeEvolution(T, dt, V, psi, freqs)
    arrays.append(psi)
If you plot the probability density of the wave-function at time \(40T\), you should see the wave-function in the potential well with its weight to the left.
plt.plot(xs,V/200)
plt.plot(xs,np.abs(arrays[40])**2)
###ANSWER HERE
b. Plotting our time-evolution#
Now we would like to plot some data.
We want to plot
the probability density of the wave-function in real space at time \(50T\)
the probability density of the wave-function in k-space at time \(50T\)
the average position as a function of time (for this and the next two you shouldn't need anything from SetupObservables, such as X or P)
the momentum as a function of time
the total, kinetic, and potential energy as a function of time (on one plot)
the potential
Write GetEnergy(psis,freqs)
and GetAverages(psis,xs,freqs)
which return these observables and plot them.
You can plot these however you want, but I got them into subplots like this:
fig, ax = plt.subplots(2, 3)
fig.set_figwidth(22)
fig.set_figheight(12)
ax[0,0].plot(np.array(range(len(Es)))*dt,Es,label='Total Energy')
ax[0,0].plot(np.array(range(len(PEs)))*dt,PEs,label='Potential Energy')
ax[0,0].plot(np.array(range(len(KEs)))*dt,KEs,label='Kinetic Energy')
ax[0,0].legend()
ax[0,0].set_xlabel('Time')
ax[0,0].set_ylabel('Energy')
ax[0,1].plot(np.array(range(len(positions)))*dt,positions,label='Position')
ax[0,1].set_xlabel('Time')
ax[0,1].set_ylabel('Position')
ax[0,2].plot(np.array(range(len(momentums)))*dt,momentums,label='Momentum')
ax[0,2].set_xlabel('Time')
ax[0,2].set_ylabel('Momentum')
ax[1,0].plot(xs,np.abs(arrays[50])**2)
ax[1,0].set_xlabel('x')
ax[1,0].set_ylabel('Probability Density')
psi_k=np.fft.fft(arrays[50])
ax[1,1].plot(freqs,np.abs(psi_k)**2,'.')
ax[1,1].set_xlabel('k')
ax[1,2].plot(xs,V)
ax[1,2].set_xlabel('x')
ax[1,2].set_ylabel('Potential Energy')
plt.tight_layout()
plt.show()
###ANSWER HERE
c. Animate#
We now want to animate our system. Because we're going to do this a couple of times with different potentials, let's go ahead and write an animation function def animateMe(arrays,V)
that finds the max_value
and calls the animation.
Then go ahead and animate your wave-function.
###ANSWER HERE
d. Time evolution of a “stationary” gaussian#
We now would like to go ahead and do a time evolution of a stationary Gaussian with the following parameters.
\(\psi_\textrm{init}=\frac{1}{N} \exp[-\frac{1}{2} x^2]\)
\(L=200\)
\(\Delta x=0.1\)
\(\Delta t=0.01\)
\(T=0.1\)
The potential is zero
Go ahead and write a def SetupGauss()
function and produce the same plots and animations as above.
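A sketch of SetupGauss with the listed parameters (assumption: the grid spans \(-L/2 \leq x < L/2\); adjust to match your SetupGrid convention):

```python
import numpy as np

def SetupGauss():
    # Parameters from the list above
    delta_x, dt, T, L = 0.1, 0.01, 0.1, 200
    xs = np.arange(-L / 2, L / 2, delta_x)  # assumption about the grid span
    V = np.zeros_like(xs)                   # the potential is zero
    # Stationary Gaussian, normalized so sum |psi|^2 dx = 1 (this is the 1/N)
    psi = np.exp(-0.5 * xs**2).astype(complex)
    psi /= np.sqrt(np.sum(np.abs(psi)**2) * delta_x)
    freqs = np.fft.fftfreq(len(xs), delta_x) * (2 * np.pi)
    return psi, V, dt, T, xs, freqs

psi, V, dt, T, xs, freqs = SetupGauss()
```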
###ANSWER HERE
e. Hitting a Potential Barrier#
Let us now consider a potential barrier with
\(V(x) = V_0 \textrm{ for } -a<x<a \) and 0 otherwise where \(V_0=10\) and \(a=0.5\).
\(L=60\)
\(\Psi_{init} =\frac{1}{N} \exp[-0.5 (x+20)^2 + 5i x]\)
\(T=0.05\)
\(dt=0.1\)
In addition to the standard plots, please also make a plot of the probability to the left and to the right of \(x=0\).
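The left/right probability split can be sketched like this (the initial Gaussian from the parameters above stands in for a single snapshot; in practice you would loop over your list of snapshots):

```python
import numpy as np

# Initial state from the parameters above: Gaussian at x = -20 with momentum 5
delta_x = 0.1
xs = np.arange(-30, 30, delta_x)
psi = np.exp(-0.5 * (xs + 20)**2 + 5j * xs)
psi /= np.sqrt(np.sum(np.abs(psi)**2) * delta_x)

# Probability on each side of x = 0 (these should always sum to 1)
prob = np.abs(psi)**2 * delta_x
left = np.sum(prob[xs < 0])
right = np.sum(prob[xs >= 0])
```

Repeating this for every snapshot and plotting left and right versus time shows how much of the packet tunnels through (or reflects off) the barrier.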
###ANSWER HERE
Acknowledgements:
Bryan Clark (original)
© Copyright 2025