# Gases
## Exercise 1. A Classical Atomic Gas
In this exercise we are going to calculate some properties of an atomic gas in a box. In particular, we would like to show that the average energy \(U(T) = 3/2 NkT\).
In this exercise, the gas will be

- **ideal**: there are no interactions between the atoms, so we can simulate a single atom and multiply the energy by \(N\).
- **distinguishable**: we will not have to worry about Boson or Fermion statistics.
- **quantum mechanical**: we will assume that at a given moment in time each particle occupies a single-particle eigenstate of a particle in a box.

The energy of a particle in a box in eigenstate \((n_x,n_y,n_z)\) (recall that \(n_i \geq 1\)) is

$$E(n_x,n_y,n_z) = E_\textrm{box}\left(n_x^2 + n_y^2 + n_z^2\right), \qquad E_\textrm{box} \equiv \frac{\hbar^2 \pi^2}{2mL^2}$$
We will work in natural units where \(k_B=1\). In these units, we can compute \(E_\textrm{box}\) as

```python
import numpy as np

def Get_Ebox(L: float, m: float) -> float:
    amu = 1.66e-27            # kg per atomic mass unit
    m = m * amu               # convert mass to kg
    hbar = 1.0545718e-34      # Joule seconds
    kB = 1.380649e-23         # Joule/Kelvin
    return hbar**2 * np.pi**2 / (2 * m * L**2) / kB
```

where `L` is in meters and `m` is in atomic mass units (e.g. Nitrogen is 28).
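As a quick sanity check (added here for illustration), the energy scale for the 3.33 nm box used below works out to a few millikelvin:

```python
import numpy as np

# Sanity check of the unit conversion: E_box for L = 3.33 nm, m = 28 amu.
amu, hbar, kB = 1.66e-27, 1.0545718e-34, 1.380649e-23
L, m = 3.33e-9, 28 * amu
E_box = hbar**2 * np.pi**2 / (2 * m * L**2) / kB
print(E_box)  # roughly 7.7e-3 K
```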
### a. One Atom: Direct Sum
Let's consider a single atom of atomic mass 28 in a box of side \(L = 3.33\) nm. The probability of seeing the atom in the eigenstate \((n_x,n_y,n_z)\) is

$$P(n_x,n_y,n_z) = \frac{e^{-E(n_x,n_y,n_z)/T}}{Z}$$

where the partition function

$$Z = \sum_{n_x,n_y,n_z} e^{-E(n_x,n_y,n_z)/T}$$

We can then find the expected energy as

$$\langle E \rangle = \sum_{n_x,n_y,n_z} E(n_x,n_y,n_z)\, P(n_x,n_y,n_z)$$
One way to do this is to perform the sum directly. While the sum formally runs over infinitely many \(n\), in practice we can only sum up to some fixed value. To figure out what a reasonable cutoff is, notice that \(P\) decays exponentially in \(E/T\), which becomes small when \(n_i^2 \gg T/E_{box}\). Therefore, let's sum each of \(n_x, n_y, n_z\) up to \(\approx 10\sqrt{T/E_{box}}\).
Go ahead and write a function which computes the energy

```python
def GetEnergy(T: float, E_box: float, max_level: int = 100) -> float:
```

for the temperatures `Ts=np.arange(0.01,2,0.1)`. How does the energy change with temperature?
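A minimal sketch of such a direct sum (the cutoff heuristic follows the text; factorizing the 3D sum into a 1D sum is an implementation choice, not a requirement):

```python
import numpy as np

def GetEnergy(T: float, E_box: float, max_level: int = None) -> float:
    """Direct Boltzmann sum for <E>, using E(n) = E_box*(nx^2+ny^2+nz^2)."""
    if max_level is None:
        # P decays exponentially in n_i^2 * E_box / T, so this cutoff is generous
        max_level = max(20, int(10 * np.sqrt(T / E_box)))
    n = np.arange(1, max_level + 1)
    e1 = E_box * n**2                  # one-dimensional level energies
    w = np.exp(-e1 / T)                # Boltzmann weights
    u1 = np.sum(e1 * w) / np.sum(w)    # mean energy of one dimension
    return 3 * u1                      # the 3D sum factorizes: <E> = 3 <E_1D>
```

For \(T \gg E_\textrm{box}\) this should give \(U \approx \frac{3}{2}T\) plus a small zero-point correction.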
Fit a line to the \(U(T)\) curve (ignoring the smallest \(T\)) and determine the slope. Is this what you expected?
We'd also like to know how long this takes at somewhat higher temperatures. Report both the energy and the time it takes to compute it at \(T=10\) K:

```python
T = 10
%time energy = GetEnergy(T, E_box)
```
### answer here
### answer here
### answer here
### b. Monte Carlo
In the previous problem, we computed the average energy by summing over all the energy levels. This sum starts to get expensive as the temperature goes up or \(E_\textrm{box}\) goes down, since the number of levels we have to sum over significantly increases. We would like to learn how to do this computation instead using Monte Carlo.
In Monte Carlo, you need to store a state for the system. In this case, we need to store which integers \(n_x, n_y, n_z\) the atom occupies, which we can treat as a three-element array

```python
n = np.zeros(3, dtype=int)
```

where a reasonable initial value for our state is

```python
initValue = max(1, int(np.sqrt(T / E_box)))
n[:] = initValue
```
We now wish to implement a Monte Carlo which returns samples of the state with probability proportional to \(\exp[-E(\vec{n})/T]\). Here's an algorithm to accomplish this:

For many Monte Carlo steps:

1. Compute the old energy \(E_{old}\).
2. Choose a new state \(\vec{n}_{new} = \vec{n}_{old} + \textrm{change}\), where the change is an array of 3 integers sampled uniformly from \([-\textrm{step\_size}, \textrm{step\_size}]\). A reasonable step size is `max(1, int(0.4*initValue))`.
3. Compute the new energy \(E_{new}\).
4. Keep the new state if both
   - the new state is legal (i.e. all \(n_i \geq 1\)), and
   - \(\exp[-(E_{new}-E_{old})/T] > r\), where \(r\) is a random number uniformly chosen from 0 to 1.
5. If you keep the new state, update \(E_{old}\) and the state; otherwise revert to the old state.
6. At each Monte Carlo step, store the current energy in an array (`energy_samples.append(E_current)`).
Write a function

```python
def MonteCarlo(T: float, E_box: float, mc_steps=100_000):
```

which implements this Monte Carlo, returning the array of energy samples.
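A minimal sketch of this function, following the algorithm above (the fixed RNG seed is just for reproducibility):

```python
import numpy as np

def MonteCarlo(T: float, E_box: float, mc_steps: int = 100_000):
    """Metropolis sampling of exp(-E(n)/T) with E(n) = E_box*(nx^2+ny^2+nz^2)."""
    rng = np.random.default_rng(0)              # fixed seed for reproducibility
    initValue = max(1, int(np.sqrt(T / E_box)))
    step_size = max(1, int(0.4 * initValue))
    n = np.full(3, initValue, dtype=int)
    E_old = E_box * np.sum(n**2)
    energy_samples = []
    for _ in range(mc_steps):
        change = rng.integers(-step_size, step_size + 1, size=3)
        n_new = n + change
        E_new = E_box * np.sum(n_new**2)
        # accept if legal and the Metropolis test passes
        if np.all(n_new >= 1) and np.exp(-(E_new - E_old) / T) > rng.random():
            n, E_old = n_new, E_new
        energy_samples.append(E_old)            # record current energy either way
    return np.array(energy_samples)
```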
Let's start by running our Monte Carlo with

- \(T=6\) K
- \(L=3.33\) nm
- \(m = 28\) amu
- 100,000 MC steps
When you have your energy samples, plot them with

```python
fig, ax = GetPlot()
ax.plot(energy_samples)
plt.show()
```

and compute the average energy (and respective error) using

```python
mean, _, error, _, _ = GetError(energy_samples[2000:])
```

ignoring the first 2000 steps to allow the system to equilibrate.
Check that you get (within error bars) the same answer as earlier, and measure the time with

```python
%time energy_samples = MonteCarlo(T, E_box=E_box, mc_steps=100_000)
```
### answer here
### answer here
### c. Specific Heat
We now would also like to be able to compute the specific heat

$$c_v = \frac{\partial U}{\partial T}$$
There are two approaches to computing the specific heat. One is to take the derivative of the energy directly. In this case, where our energy is linear with temperature (as it seems to be so far), this is very easy (simply take the slope). More generally, we can do `cv_from_energy = np.gradient(energy, myTs)` (in practice this is better even when the curve appears linear with temperature, because there might be some deviation from that).
Another way to compute the specific heat is using the fluctuation-dissipation theorem, which tells us that the specific heat is related to the variance of the energy as

$$c_v = \frac{\langle E^2 \rangle - \langle E \rangle^2}{T^2}$$

which we can compute directly from our energy samples.
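With \(k_B=1\), this estimator is just the variance of the energy samples divided by \(T^2\); a one-line sketch:

```python
import numpy as np

def cv_from_samples(energy_samples, T):
    """Fluctuation-dissipation estimate: c_v = var(E) / T^2 (k_B = 1)."""
    samples = np.asarray(energy_samples)
    return samples.var() / T**2
```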
Write code

```python
def AddTemperature(energy_samples: np.ndarray, T, results: defaultdict):
    results["T"].append(T)
    ...
    results["energy"].append(...)
    results["energy_error"].append(...)
    results["variance"].append(...)
    ll = len(energy_samples) // 10
    results["variance_error"].append(...)
    return results
```

which takes the energy samples and adds to your `results = defaultdict(list)` dictionary the temperature, energy, and variance (with respective errors). You can use `GetError` to help you compute the errors.
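One way to fill in this skeleton, shown here as a self-contained sketch: since `GetError` is the course's helper and isn't reproduced here, the error bars below come instead from simple block averages over 10 blocks (the `ll` in the skeleton suggests blocks of a tenth of the data).

```python
import numpy as np
from collections import defaultdict

def AddTemperature(energy_samples: np.ndarray, T, results: defaultdict):
    energy_samples = np.asarray(energy_samples)
    results["T"].append(T)
    results["energy"].append(energy_samples.mean())
    results["variance"].append(energy_samples.var())
    # crude error bars from 10 block averages (stand-in for GetError)
    ll = len(energy_samples) // 10
    blocks = energy_samples[: 10 * ll].reshape(10, ll)
    results["energy_error"].append(blocks.mean(axis=1).std(ddof=1) / np.sqrt(10))
    results["variance_error"].append(blocks.var(axis=1).std(ddof=1) / np.sqrt(10))
    return results
```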
Run your Monte Carlo to collect data for `Ts=np.arange(0.01,2,0.1)`:

```python
results = defaultdict(list)
for T in Ts:
    energy_samples = MonteCarlo(...)
    AddTemperature(energy_samples, T, results)
```
Make the following plots:

**Energy**
- Energy vs. \(T\) from Monte Carlo
- Energy vs. \(T\) from the analytic approach in part (a)
- \(E = \frac{3}{2}T + \textrm{zpe}\) (i.e. the expected classical result), where zpe is the zero-point energy (i.e. `E_box*(1**2+1**2+1**2)`)

**Specific Heat**
- Specific heat vs. \(T\) from the Monte Carlo estimator
- Specific heat vs. \(T\) from the gradient of the Monte Carlo energy (using `np.gradient`)
- Specific heat vs. \(T\) from the gradient of the analytic energy
- A horizontal line at 3/2 (i.e. the expected classical result)
### answer here
### answer here
### answer here
### d. Seeing the energy levels
So far we’ve managed to avoid seeing the fact that the energy levels are discrete. Go ahead and run your code again, this time with the following parameters:
- \(L = 3.33 \times 10^{-9} \times 10000^{1/3}\) m
- 100,000 MC steps (do it first with just 10,000 for debugging)
- `Ts=np.logspace(-6, 1, 45)`
Make the following plots:

1. \(U\) vs. \(T\).
2. \(U\) vs. \(T\), but only showing temperatures less than \(10^{-4}\). I do this with

```python
mask = Ts < 1e-4
ax[1].plot(Ts[mask], energy[mask])
```

3. The specific heat, but with a logarithmic x-scale (e.g. `ax[2].set_xscale('log')`). Plot both the specific heat from the Monte Carlo estimator and from the gradient of the energy. Also on this plot, draw a vertical line at `E_box`, which corresponds to the energy quantization.
You should see that the energy significantly deviates from \(3/2\, kT\) as you approach the zero-point energy, and that the specific heat goes to zero exponentially quickly once the temperature gets below the energy spacing.
### answer here
### answer here
## Exercise 2. Carbon Monoxide
### a. Vibrational and Rotational Degrees of Freedom
In this exercise, we are now going to work with a molecule with non-trivial internal degrees of freedom. In particular, we will look at carbon monoxide which has two atoms attached by a “spring” and so has both rotational and vibrational degrees of freedom.
The energy of carbon monoxide is

$$E(\vec{n}, n_v, j, m_j) = E_\textrm{box}\left(n_x^2+n_y^2+n_z^2\right) + E_\textrm{vib}\, n_v + E_\textrm{rot}\, j(j+1)$$

with a quantum state consisting of

- \(\vec{n} \equiv (n_x, n_y, n_z)\), which has \(n_i \geq 1\)
- \(j\), which has \(j \geq 0\)
- \(n_v\), which has \(n_v \geq 0\)
- \(m_j\), which has \(-j \leq m_j \leq j\)
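This energy is easy to encode; a small sketch (the function name is illustrative, and the vibrational term is taken as \(E_\textrm{vib}\, n_v\) consistent with the theory function later in this exercise):

```python
import numpy as np

def co_energy(n, n_v, j, E_box, E_rot, E_vib):
    """E = E_box*(nx^2+ny^2+nz^2) + E_vib*n_v + E_rot*j*(j+1); m_j does not enter."""
    return E_box * np.sum(np.asarray(n)**2) + E_vib * n_v + E_rot * j * (j + 1)
```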
We are going to modify our Monte Carlo code in the following way:

- Introduce states for
  - `j` (initial value of \(\sqrt{T/E_{rot}}\), with step 40% of that; remember to make sure everything is at least 1 and doesn't end up being 0)
  - `n_v` (initial value of \(T/E_{vib}\), with step 40% of that)
  - `m_j` (initial value of 0, and step that is 50% of the `j` step)
- Choose randomly whether you are going to do (1) an \((n_x,n_y,n_z)\) move, (2) an \(n_v\) move, or (3) a \(j, m_j\) move.
- Then accept a Monte Carlo move if the move is legal and \(\exp[-\beta \Delta E]\) is larger than a random number.
Using the following parameters, plot \(U(T)\) and the heat capacity per site.

```python
E_rot = 2.77
E_vib = 3100.0
numAtoms = 1_000
L = 3.33e-9 * numAtoms**(1/3)
E_box = Get_Ebox(L, 28)
mc_steps = 100_000  # you might want to run first with 10_000
```
Plot the energy and specific heat (both from the gradient of the energy and the Monte Carlo estimator). On the specific heat graph, place horizontal lines at \(c_v = 3/2\), \(c_v = 5/2\), and \(c_v = 7/2\).
Also plot the theoretical specific heat curve on the specific heat graph, which you can get from this function (which takes an array of temperatures and uses the globals `E_box`, `E_rot`, and `E_vib` defined above):

```python
def get_theory_cv_high_res(T_array):
    cv_theory = []
    for T in T_array:
        # translational part: one dimension, times 3
        ns = np.arange(1, 200000)
        e_t = E_box * ns**2
        z_t = np.sum(np.exp(-e_t/T))
        u_t = np.sum(e_t * np.exp(-e_t/T)) / z_t
        u2_t = np.sum(e_t**2 * np.exp(-e_t/T)) / z_t
        cv_t = 3 * (u2_t - u_t**2) / T**2
        # rotational part, with (2j+1) degeneracy
        js = np.arange(0, 20000)
        e_r = E_rot * js * (js + 1)
        deg = 2 * js + 1
        z_r = np.sum(deg * np.exp(-e_r/T))
        u_r = np.sum(deg * e_r * np.exp(-e_r/T)) / z_r
        u2_r = np.sum(deg * e_r**2 * np.exp(-e_r/T)) / z_r
        cv_r = (u2_r - u_r**2) / T**2
        # vibrational part (closed form); frozen out and skipped at low T
        x = E_vib / T
        cv_v = (x**2 * np.exp(x)) / (np.exp(x) - 1)**2 if T > 5 else 0
        cv_theory.append(cv_t + cv_r + cv_v)
    return np.array(cv_theory)
```
### answer here
### answer here
### answer here
### answer here
### b. A better move
We can do something slightly better with our moves. Since the energy doesn't depend on \(m_j\), it seems that moving it is not necessary at all. This isn't correct: try turning off the \(m_j\) moves and you will find that you get the wrong answer.
Instead, we can use a better move for \(m_j\). Rather than simply moving it in some range around its current value, we can just pick a new \(j\) and then choose a legal value of \(m_j\). Then \(m_j\) is always legal, and so it sounds like we are back to doing nothing. But we need one more modification to our code if we make this change: we have to multiply our acceptance probability by an additional factor of \((2j_\textrm{new}+1)/(2j_\textrm{old}+1)\). We will understand why this is in the Bosonic section. Go ahead and make this modification and check that you get the same result but with smaller error bars.
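A sketch of the modified acceptance test for the \(j\) move (names illustrative; `dE` is \(E_\textrm{new}-E_\textrm{old}\) and `r` is uniform on \([0,1)\)):

```python
import numpy as np

def accept_j_move(dE, j_old, j_new, T, r):
    """Metropolis test with the extra (2j_new+1)/(2j_old+1) degeneracy factor."""
    return (2 * j_new + 1) / (2 * j_old + 1) * np.exp(-dE / T) > r
```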
### answer here
### answer here
## Exercise 3. Bosons
Our goal in this exercise will be to compute properties of Bosonic particles. In particular, we are interested in seeing

1. that the fraction of the particles in the ground state becomes non-zero around the transition temperature and goes to 1 at zero temperature;
2. that at low temperatures, the specific heat goes as \(T^{3/2}\);
3. that around the transition temperature, the maximum specific heat per particle is greater than the classical value of \(3/2\).
### a. Multiple Particles
Our first step will be to modify our distinguishable-particle code from Exercise 1 to work with multiple particles. Our state now consists not just of \([n_x,n_y,n_z]\), but a value of \([n_x,n_y,n_z]\) for each atom, e.g. `n = np.zeros((numAtoms, 3), dtype=int)`. Then at each Monte Carlo step, you can select an atom at random and do essentially the same things as you did previously with that atom's state. At no point should you be copying arrays of size `numAtoms`.
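A sketch of the state layout and a single-atom update (illustrative; note that only a 3-element row is ever copied):

```python
import numpy as np

rng = np.random.default_rng(0)
numAtoms = 100
n = np.ones((numAtoms, 3), dtype=int)            # one (nx,ny,nz) row per atom

# one Monte Carlo step touches a single randomly chosen atom:
atom = rng.integers(numAtoms)
n_trial = n[atom] + rng.integers(-1, 2, size=3)  # 3-element work array only
if np.all(n_trial >= 1):                         # legality check before accepting
    n[atom] = n_trial                            # in-place update of one row
```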
Go ahead and make these modifications and run your code with 100 atoms. Keep your density constant, so now we will use

```python
E_box = Get_Ebox(3.33e-9 * numAtoms**(1/3), 28)
```

Interestingly, this will change the results somewhat: although the density stays constant, the ground-state energy goes down, and so you are less likely to notice the effect of quantization.

Also use

```python
numAtoms = 100
mc_steps = 100_000
```
Plot the energy and specific heat as a function of temperature for `Ts=np.linspace(0.01, 2, 20)`. You should now get \(U(T) = 3/2\, NT\) and a specific heat that goes as \(3/2\, N\). In addition, add an observable to your code that measures what fraction of the particles are in the \([1,1,1]\) state, and plot that.
### answer here
### answer here
### answer here
### answer here
### b. Indistinguishable Particles
A useful fact about Markov Chain Monte Carlo: so far we've been making moves and then accepting/rejecting with a probability proportional to \(\exp[-\Delta E/T]\). This is correct because the move from \(s_\textrm{old}\) to \(s_\textrm{new}\) has the property that the probability that we consider moving to the new state, \(T(s_\textrm{old}\rightarrow s_\textrm{new})\), is the same as the probability that, if we were in the new state, we would consider moving back to the old state, \(T(s_\textrm{new}\rightarrow s_\textrm{old})\), i.e.

$$T(s_\textrm{old}\rightarrow s_\textrm{new}) = T(s_\textrm{new}\rightarrow s_\textrm{old})$$

If that were not the case, we would need to accept our move with probability

$$A(s_\textrm{old}\rightarrow s_\textrm{new}) = \min\left[1,\; \frac{T(s_\textrm{new}\rightarrow s_\textrm{old})}{T(s_\textrm{old}\rightarrow s_\textrm{new})}\, e^{-(E_\textrm{new}-E_\textrm{old})/T}\right]$$
Now we need to modify our code so that it works for Bosons. In a distinguishable system,

- particle 1 in state \((1,1,1)\) and particle 2 in state \((1,2,2)\)

is a different state than

- particle 1 in state \((1,2,2)\) and particle 2 in state \((1,1,1)\).

In the Bosonic system, these are both the same state.
We can now ask: where does this matter in our code? Consider the following scenario. We start from

- \(s_\textrm{old}\): particle 1: \((1,1,1)\); particle 2: \((1,2,2)\); particle 3: \((1,2,2)\)

and move particle 1 to \((1,2,2)\), leaving us in the state

- \(s_\textrm{new}\): particle 1: \((1,2,2)\); particle 2: \((1,2,2)\); particle 3: \((1,2,2)\)
In a distinguishable system for our algorithm, the probability \(T(s_\textrm{old}\rightarrow s_\textrm{new})\) and \(T(s_\textrm{new}\rightarrow s_\textrm{old})\) are the same because in both cases we have to choose particle 1 and move it by \(\pm (0,1,1)\).
But in a Bosonic system, to go from \(s_\textrm{new}\) to \(s_\textrm{old}\) we don’t need to move particle 1 back to \((1,1,1)\). Because all three particles are in the same state, we could also move particle 2 or particle 3 to \((1,1,1)\). While in the distinguishable case this leaves us in a different state (particle 3 and 1 are swapped), that has no meaning in the Bosonic system. You can’t distinguish particle 1 from particle 3.
This means that our ratio of \(T\)'s is different: the probability that we move from old to new is \(T(s_\textrm{old}\rightarrow s_\textrm{new}) \propto 1/N\), while the probability that we move from new to old is \(T(s_\textrm{new}\rightarrow s_\textrm{old}) \propto 3/N\).
Modify your code with the new acceptance ratio that compensates for this. To do so, it will be convenient to keep track of how many particles are in any given state. A simple way to do this is to use a dictionary `ns = defaultdict(int)` which stores how many particles are in each state. For example, `ns[(1,1,1)] = 100` would indicate that all 100 atoms are in the \((1,1,1)\) state.

You will need to use this in the acceptance probability. In addition, once you've accepted a move you want to make sure you update the `ns` dictionary (i.e. decrease the old count and increase the new one). It is also helpful to remove an entry from the dictionary once its count reaches zero:

```python
if ns[s_old] == 0:
    del ns[s_old]
```
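Putting the pieces together, a sketch of the Bosonic acceptance test (illustrative names; `ns` is the occupation dictionary described above, and the proposal ratio generalizes the three-particle example):

```python
import numpy as np
from collections import defaultdict

def accept_boson_move(ns, s_old, s_new, dE, T, r):
    """Metropolis test with the proposal ratio T(new->old)/T(old->new)
    = (ns[s_new] + 1) / ns[s_old] for indistinguishable particles."""
    ratio = (ns[s_new] + 1) / ns[s_old]
    return ratio * np.exp(-dE / T) > r
```

In the three-particle example above, `ns[(1,1,1)] = 1` and `ns[(1,2,2)] = 2` before the move, giving the ratio \(3/1\).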
To verify your code is working, use the following parameters

```python
numAtoms = 100
E_box = Get_Ebox(3.33e-9 * numAtoms**(1/3), 28); print(E_box)
mc_steps = 10_000_000
T = 0.0025
```

and verify that you get something within error bars of the following:

```
At T= 0.0025 K, n0 fraction = 0.856611974 +/- 0.0032886910804233514 with an energy of 0.15312402609608677 +/- 0.0009032307336939385
```
### answer here
### answer here
### c. Computing Properties
We would now like to numerically verify the following properties of an ideal Bose gas:

- The low-temperature energy deviates from the classical limit in a way that has nothing to do with the energy quantization.
- The temperature where the condensate fraction becomes non-zero and the specific heat is maximal is the Bose transition temperature. In the thermodynamic limit, we expect this to be at \(T_c = 3.3125\, \hbar^2 n^{2/3}/(m k_B)\), where \(n\) is the density. Write a function `def Get_Tc(L, N, m)` which reports this idealized temperature. The actual transition for \(N=100\) is going to be approximately 0.003 K higher in temperature.
- The value of the specific heat per particle at the maximum is 1.926. You may not have good enough statistics to resolve this, but you should see something larger than 1.5 and close to 2.
- The low-\(T\) specific heat goes as \(T^{3/2}\). You can check this by plotting the \(C_v(T)\) data on a log-log plot and fitting the low-\(T\) slope.
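A sketch of `Get_Tc` under the same unit conventions as `Get_Ebox` (`L` in meters, `m` in atomic mass units, result in Kelvin):

```python
def Get_Tc(L: float, N: int, m: float) -> float:
    """Idealized Bose condensation temperature T_c = 3.3125 hbar^2 n^(2/3) / (m k_B)."""
    amu = 1.66e-27            # kg per atomic mass unit
    hbar = 1.0545718e-34      # Joule seconds
    kB = 1.380649e-23         # Joule/Kelvin
    n = N / L**3              # number density in m^-3
    return 3.3125 * hbar**2 * n**(2 / 3) / (m * amu * kB)
```

For the \(N=100\) box used below this lands in the few-millikelvin range, consistent with the temperature grid `Ts=np.linspace(0.001, 0.015, 30)`.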
Verify these properties by computing, on `Ts=np.linspace(0.001, 0.015, 30)`,

- the \(U(T)\) curve
- the \(C_v(T)\) curve, both on a normal and a log-log plot
- the \(n_0(T)\) curve

for

```python
numAtoms = 100
E_box = Get_Ebox(3.33e-9 * numAtoms**(1/3), 28); print(E_box)
mc_steps = 1_000_000
```
On these plots, draw a vertical line at the idealized transition temperature.
### answer here
### answer here
Acknowledgements:
Bryan Clark (original)
© Copyright 2026