diff --git a/.gitignore b/.gitignore
index a4e48a8..da15d92 100644
--- a/.gitignore
+++ b/.gitignore
@@ -1,4 +1,3 @@
-problems
_build
-*solutions
+*solutions*
*ai
diff --git a/_update.sh b/_update.sh
new file mode 100644
index 0000000..18c3fc3
--- /dev/null
+++ b/_update.sh
@@ -0,0 +1,9 @@
+#!/bin/bash
+set -e
+lecture_file="lecture_files/Lecture$1.md"
+git add "$lecture_file"
+git commit -m "Adding lecture notes for Lecture $1"
+git push
+rm -rf _build
+jupyter-book build .
+ghp-import -n -p -f _build/html
diff --git a/lecture_files/Lecture5.md b/lecture_files/Lecture5.md
index dbdfd69..41a3b48 100644
--- a/lecture_files/Lecture5.md
+++ b/lecture_files/Lecture5.md
@@ -229,7 +229,7 @@ referred to as the [**Gibbs entropy**](https://en.wikipedia.org/wiki/Entropy_(st
$$S = -k_B \sum_j p_j \ln p_j$$
-You will shows this expression's equivalence to the Boltzmann definition
+You will show this expression's equivalence to the Boltzmann definition
of the entropy on [Problem Set 2](../../problems/ps_2/problem_set_2). We now have the expressions:
$$\begin{aligned}
diff --git a/lecture_files/figs/fig_17_1-01.png b/lecture_files/figs/fig_17_1-01.png
new file mode 100644
index 0000000..66cb513
Binary files /dev/null and b/lecture_files/figs/fig_17_1-01.png differ
diff --git a/lecture_files/figs/fig_17_2-01.png b/lecture_files/figs/fig_17_2-01.png
new file mode 100644
index 0000000..95da1a1
Binary files /dev/null and b/lecture_files/figs/fig_17_2-01.png differ
diff --git a/lecture_files/figs/fig_17_3-01.png b/lecture_files/figs/fig_17_3-01.png
new file mode 100644
index 0000000..380d744
Binary files /dev/null and b/lecture_files/figs/fig_17_3-01.png differ
diff --git a/lecture_files/figs/fig_17_4-01.png b/lecture_files/figs/fig_17_4-01.png
new file mode 100644
index 0000000..bd90d00
Binary files /dev/null and b/lecture_files/figs/fig_17_4-01.png differ
diff --git a/lecture_files/figs/fig_17_5-01.png b/lecture_files/figs/fig_17_5-01.png
new file mode 100644
index 0000000..84466f6
Binary files /dev/null and b/lecture_files/figs/fig_17_5-01.png differ
diff --git a/lecture_files/figs/fig_17_6-01.png b/lecture_files/figs/fig_17_6-01.png
new file mode 100644
index 0000000..b18d4c8
Binary files /dev/null and b/lecture_files/figs/fig_17_6-01.png differ
diff --git a/lecture_files/figs/fig_17_7-01.png b/lecture_files/figs/fig_17_7-01.png
new file mode 100644
index 0000000..aac22e9
Binary files /dev/null and b/lecture_files/figs/fig_17_7-01.png differ
diff --git a/lecture_files/figs/fig_17_8-01.png b/lecture_files/figs/fig_17_8-01.png
new file mode 100644
index 0000000..d221ea0
Binary files /dev/null and b/lecture_files/figs/fig_17_8-01.png differ
diff --git a/problems/ps_1/problem_set_1.md b/problems/ps_1/problem_set_1.md
new file mode 100644
index 0000000..6dfc595
--- /dev/null
+++ b/problems/ps_1/problem_set_1.md
@@ -0,0 +1,222 @@
+# Problem Set 1 (Due Wednesday, September 20, 2023)
+
+## Question 1: Polymer dimensions
+
+Ideal polymer chains are often described as undergoing *random walks* on
+a lattice. Like ideal gases, the monomers in an ideal polymer chain do
+not interact with each other; that is, the segments of the chain do not
+exclude volume and can overlap. Consider a **one-dimensional** ideal
+polymer chain composed of $N$ independent segments.
+
+One end of the chain is placed at the origin. A single chain conformation can then be
+generated by iteratively placing a single segment of the chain a
+distance $b$ in either the positive or negative $x$ dimension from the
+current chain end - *i.e.*, the chain elongates by taking "steps" along
+the one-dimensional coordinate. The end-to-end distance of the chain,
+$x$, is the distance between the origin and the end of the last segment
+placed.
+
+![image](pset_1_random_walk_fig.png)
+Figure 1 shows an example conformation for $N = 8$, indicating the end-to-end distance $x$ after all 8 steps are taken.
+
+```{admonition} **(a)**
+
+For a one-dimensional ideal chain with $N$ segments and segment
+size $b$, calculate the probability, $p(x)$, that the end-to-end
+distance of the polymer is equal to $x$. Assume that there is an **equal
+likelihood** of taking a step in either the positive or negative
+direction for each chain segment.
+
+
+ Hints
+ First compute the probability that we take n steps to the right.
+
+
+```
+
+
+```{admonition} **(b)**
+
+For the chain described in part **a**, using Stirling's
+approximation, show that in the large
+$N$ limit the probability distribution $p(x)$ can be approximated by:
+
+$$p(x) = C \exp \left ( -\frac{x^2}{2Nb^2}\right )$$
+
+where $C=\frac{1}{\sqrt{2\pi Nb^2}}$ is a normalization constant
+to enforce that $\int_{-\infty}^{\infty} p(x) dx = 1$. Make sure
+to show how you obtain $C$.
+
+
+ Hints
+ Define the quantity a = x/Nb, where a << 1 in the large N limit,
+ and write a Taylor series expansion for ln(1-a) when appropriate.
+
+
+
+```
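Though not required for the derivation, the quality of this Gaussian approximation can be checked numerically; a minimal sketch (function names are illustrative), comparing the exact binomial endpoint probability to $p(x)$ integrated over one endpoint spacing of $2b$:

```python
import math

def p_exact(N, k):
    """Probability that k of N unbiased steps are to the right,
    giving end-to-end distance x = (2k - N) b."""
    return math.comb(N, k) / 2**N

def p_gauss(N, x_over_b):
    """Gaussian approximation, integrated over one endpoint spacing (2b):
    2b * C * exp(-x^2 / (2 N b^2)) with C = 1/sqrt(2 pi N b^2)."""
    return 2 / math.sqrt(2 * math.pi * N) * math.exp(-x_over_b**2 / (2 * N))

# for N = 100 the two agree to a few parts in 10^4
```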
+
+
+```{admonition} **(c)**
+
+Show that the entropy of an ideal one-dimensional chain in the
+large $N$ limit is given by:
+
+$$S(N, x) = -\frac{k_B x^2}{2Nb^2} + S(N) + k_B \ln{C}$$
+
+where $C$ is the normalization constant from above and $S(N)$ is the $x$-independent entropy term.
+
+```
+
+## Question 2: Magnetization of a paramagnet
+
+Consider a system of $N$ distinguishable, non-interacting atoms with
+magnetic dipole moments (spins) in a magnetic field $H$ at constant
+temperature. Each spin $s_i$ has a magnetic moment $\mu$ and can be in
+one of two states: parallel to the field ($s_i = 1$) or anti-parallel to
+the field ($s_i = -1$). The energy of each microstate is due only to
+interactions between the spins and the magnetic field, and is given by:
+
+$$E = -\sum_{i=1}^N s_i \mu H$$
+
+The magnetization of the material is defined as:
+
+$$\begin{aligned}
+M = \sum_i^N s_i \mu = -\frac{E}{H}
+\end{aligned}$$
+
+This model is commonly used to describe paramagnetic materials. In this
+problem, we will derive an expression for the ensemble-average
+magnetization of a paramagnet in a magnetic field.
+
+
+```{admonition} **(a)**
+
+Assuming that the paramagnet has a fixed energy $E$, write an
+expression for the entropy as a function of the number of spins aligned
+with the field, $N_+$, and the total number of spins, $N$.
+
+
+
+ Hints
+ Use the [Boltzmann formulation](https://en.m.wikipedia.org/wiki/Boltzmann's_entropy_formula) for entropy.
+
+
+```
+
+```{admonition} **(b)**
+
+Show that the expression for $N_+$ as a function of the number of
+spins $N$, the magnetic field strength $H$, the magnetic moment $\mu$,
+and the temperature $T$ (or as a function of $\beta \equiv 1/k_B T$) is
+
+
+$$ N_+ = \frac{N}{1+\exp(-2H\beta\mu)}$$
+
+
+
+ Hints
+ Procedurally, this is almost identical to our previous lecture.
+
+
+```
+
+
+
+```{admonition} **(c)**
+Show that the magnetization of a paramagnet is given by:
+
+$$\begin{aligned}
+M = N\mu \tanh (\beta \mu H)
+\end{aligned}$$
+
+```
+
+## Question 3: Mixing entropy
+
+Consider two different ideal fluids containing $N_1$ and $N_2$
+molecules, respectively, for a total of $N$ molecules. All molecules
+exclude the same molar volume and are assumed to interact weakly with
+each other and with themselves, such that the potential energy of the
+system is negligible. The two fluids are initially completely demixed
+due to an impermeable partition; the partition is then removed and the
+fluids are allowed to mix at constant volume as illustrated in Figure 2.
+Assume also that the molecules of each fluid are **indistinguishable**
+from the molecules of the same fluid.
+
+![image](pset_1_mixing_entropy_fig.png)
+
+```{admonition} **(a)**
+
+Assume that the two fluids occupy a fictitious lattice that
+spans the available volume. Each molecule occupies a single lattice site
+and all lattice sites are occupied; thus, there are $N_1$ lattice
+sites occupied by molecule 1 and $N_2$ lattice sites occupied by
+molecule 2. Using this approximation, calculate $\Omega(N_1, N_2)$,
+which is defined as the number of microstates for the mixture of fluids.
+
+
+
+ Hints
+ Identical arrangements of the fluids are indistinguishable.
+
+
+
+```
+
+```{admonition} **(b)**
+
+Derive an expression for the entropy change associated with
+mixing the two ideal fluids in terms of the mole fractions,
+$x_1 = N_1/N$ and $x_2 = N_2/N$, of the two components. That is,
+determine an expression for:
+
+$$\Delta S_{\textrm{mix}} = S_{\textrm{mixed}} - S_{\textrm{demixed}}$$
+
+
+ Hints
+ How many distinguishable configurations are there for totally demixed systems?
+
+
+
+```
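The large-$N$ result $\Delta S_{\textrm{mix}} = -N k_B (x_1 \ln x_1 + x_2 \ln x_2)$ can be checked against the exact lattice count; a sketch in units of $k_B$ (function names are illustrative, using `math.lgamma` to avoid overflowing factorials):

```python
import math

def dS_exact(N1, N2):
    """Exact Delta S_mix / kB = ln[(N1 + N2)! / (N1! N2!)] via lgamma."""
    N = N1 + N2
    return math.lgamma(N + 1) - math.lgamma(N1 + 1) - math.lgamma(N2 + 1)

def dS_ideal(N1, N2):
    """Large-N limit: -N (x1 ln x1 + x2 ln x2)."""
    N = N1 + N2
    x1, x2 = N1 / N, N2 / N
    return -N * (x1 * math.log(x1) + x2 * math.log(x2))
```

For $N_1 = N_2 = 5000$ the two agree to better than 0.1%.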
+
+```{admonition} **(c)**
+
+Is the lattice model assumption reasonable for molecules
+occupying a continuous set of positions (rather than discrete points)?
+Why or why not?
+
+```
+
+## Question 4: Stirling's approximation
+
+
+```{admonition} Python Exercise
+
+Write a Python program that calculates the percent error of Stirling's
+approximation as a function of $N$, where Stirling's approximation is
+defined as:
+
+$$\ln N! \approx N \ln N - N$$
+
+Include with your solution a copy of your Python code (including
+comments as necessary), a plot of $\ln N!$ and Stirling's approximation
+as a function of $N$ up to $N=100$, and a plot of the error of
+Stirling's approximation as a function of $N$ up to $N=100$. Your grade
+for this problem will be based in part on the readability of your code
+and plots in addition to the accuracy of the solution.
+```
+
+**Note: this problem is intended to provide you with practice in Python
+programming prior to the assignment of the simulation project. If you
+need resources for learning to code in Python, see [here](https://sts.doit.wisc.edu/).**
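One possible sketch of the core of such a program (plotting code omitted; `math.lgamma` is used here to evaluate $\ln N!$ without computing $N!$ itself):

```python
import math

def stirling_error(N):
    """Percent error of Stirling's approximation N ln N - N to ln N!."""
    exact = math.lgamma(N + 1)        # ln N! without overflow
    approx = N * math.log(N) - N      # Stirling's approximation
    return 100 * (exact - approx) / exact

# percent error for each N up to 100; it shrinks as N grows
errors = [stirling_error(N) for N in range(2, 101)]
```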
+
+The two requested plots are provided below.
+
+![Comparison of Stirling's approximation vs. $\ln N!$ as a function of
+$N$.](pset_1_plot_stirling.png)
+
+![Relative error of Stirling's approximation vs. $N$, which approaches
+zero as $N$ exceeds
+$\approx 50$.](pset_1_plot_stirling_error.png)
diff --git a/problems/ps_1/pset_1_mixing_entropy_fig.png b/problems/ps_1/pset_1_mixing_entropy_fig.png
new file mode 100644
index 0000000..b80424b
Binary files /dev/null and b/problems/ps_1/pset_1_mixing_entropy_fig.png differ
diff --git a/problems/ps_1/pset_1_plot_stirling.png b/problems/ps_1/pset_1_plot_stirling.png
new file mode 100644
index 0000000..2acd664
Binary files /dev/null and b/problems/ps_1/pset_1_plot_stirling.png differ
diff --git a/problems/ps_1/pset_1_plot_stirling_error.png b/problems/ps_1/pset_1_plot_stirling_error.png
new file mode 100644
index 0000000..a5b1a5a
Binary files /dev/null and b/problems/ps_1/pset_1_plot_stirling_error.png differ
diff --git a/problems/ps_1/pset_1_random_walk_fig.png b/problems/ps_1/pset_1_random_walk_fig.png
new file mode 100644
index 0000000..4ae147e
Binary files /dev/null and b/problems/ps_1/pset_1_random_walk_fig.png differ
diff --git a/problems/ps_2/problem_set_2.md b/problems/ps_2/problem_set_2.md
new file mode 100644
index 0000000..6aa51af
--- /dev/null
+++ b/problems/ps_2/problem_set_2.md
@@ -0,0 +1,238 @@
+# Problem Set 2 (Due Wednesday, September 27, 2023)
+
+## Question 1: Expressions for entropy
+
+The Boltzmann entropy for the microcanonical ensemble is:
+
+$$S(N,V,E) = k_B \ln \Omega(N,V,E)$$
+
+where $\Omega(N,V,E)$ is the degeneracy (or number of unique
+microstates) in the microcanonical ensemble. An alternative, general
+definition of the entropy, referred to as the Gibbs entropy, is:
+
+$$S = - k_B\sum_i p_i \ln p_i$$
+
+where $p_i$ is the probability of microstate $i$ and the sum runs over
+all microstates.
+
+```{admonition} **(a)**
+Show that the Boltzmann and Gibbs entropy are equivalent for the microcanonical ensemble.
+
+ Hints
+Recall that according to the second postulate of statistical mechanics,
+all microstates in the microcanonical ensemble are equally probable.
+
+```
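The equivalence can be illustrated numerically: with $p_i = 1/\Omega$ for every microstate, the Gibbs sum collapses to $\ln \Omega$. A minimal sketch, working in units of $k_B$:

```python
import math

def gibbs_entropy(probs):
    """S / kB = -sum p ln p over microstates with nonzero probability."""
    return -sum(p * math.log(p) for p in probs if p > 0)

# equal a priori probabilities: S / kB reduces to ln(Omega)
Omega = 64
S = gibbs_entropy([1 / Omega] * Omega)
```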
+
+```{admonition} **(b)**
+Starting from an appropriate partial derivative of the Helmholtz
+free energy, show that entropy can be expressed in terms of the
+{term}`canonical partition function` and the temperature as
+$S = k_B \left[T\left ( \frac{\partial \ln Z}{\partial T} \right )_{N,V} + \ln Z\right] $.
+
+ Hints
+
+$dF = -SdT - PdV + \mu dN$
+
+
+
+```
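One way to sanity-check this relation is to evaluate it numerically for a simple two-level system and compare against the Gibbs entropy; a sketch with $k_B = 1$ (the two-level system and finite-difference step are illustrative choices):

```python
import math

eps = 2.0                          # two-level gap, ground state at 0, kB = 1

def lnZ(T):
    """Canonical partition function of the two-level system."""
    return math.log(1 + math.exp(-eps / T))

def entropy_from_lnZ(T, dT=1e-5):
    """S = T (d lnZ / dT) + lnZ, derivative via central difference."""
    dlnZ_dT = (lnZ(T + dT) - lnZ(T - dT)) / (2 * dT)
    return T * dlnZ_dT + lnZ(T)

def gibbs_entropy(T):
    """S = -sum p ln p for the same two-level system."""
    p0 = 1 / (1 + math.exp(-eps / T))   # ground-state probability
    p1 = 1 - p0
    return -(p0 * math.log(p0) + p1 * math.log(p1))
```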
+
+```{admonition} **(c)**
+Show that the expression derived in part **b** reduces to the
+Boltzmann entropy if the canonical ensemble is dominated by a single
+energy level (i.e., the contributions of all other energy levels to the
+partition function are negligible).
+```
+
+```{admonition} **(d)**
+Calculate the Gibbs entropy of the canonical ensemble, and show
+that this expression for the entropy leads to the correct thermodynamic
+expression for the Helmholtz free energy ($F=E-TS$).
+```
+
+## Question 2: Pulling on a protein
+
+A simplified model for the protein keratin is a linear, long chain
+molecule consisting of $N$ units. Each unit can be in either an
+unstretched state with energy $\epsilon_\alpha$ and length $a$, or a
+stretched state with energy $\epsilon_\beta$ and length $b$. The chain
+is stretched at constant tension, $\tau$, which causes it to reach a
+total length $L$; $\tau$ is a force that is conjugate to $L$. Assume
+that stretching the chain (i.e. changing $L$) does not change the total
+chain volume, the chain is held at constant temperature, and each unit
+is independent.
+
+![image](pset_2_pulling_polymer_fig.png)
+
+
+ Hint
+
+Hint: for a system of $N$ distinguishable, independent subsystems for
+which the energy of the combined system, $E$, is equal to the sum of the
+energies of the subsystems, $E = \sum_i \epsilon_i$, the partition
+function of the combined system, $Z$, is related to the partition
+function of each subsystem, $z$, by:
+
+$$Z = z^N$$
+
+This relationship is true of any generalized partition function $\Phi$,
+as well as the canonical partition function, $Z$.
+
+
+
+```{admonition} **(a)**
+Using the first law of thermodynamics, show that the thermodynamic
+potential $B$ corresponding to this ensemble has the differential
+form $dB = - SdT - PdV - L d\tau$.
+```
+
+```{admonition} **(b)**
+Derive an expression for a generalized partition function with
+fixed thermodynamic parameters that correspond to the natural variables
+of the potential derived in part **a**. Your result should be in terms
+of $\epsilon_a$, $\epsilon_b$, $a$, $b$, $\tau$, and $N$.
+
+
+ Hint
+
+Since each unit of the molecule is independent and the different units
+are distinguishable, we can use the first hint within this problem
+to factorize the partition function and write:
+
+$$\Phi = \phi^N$$
+
+Here, $\phi$ is the single-unit partition function, which we see is a
+two-state model with one state characterized by energy $\epsilon_a$ and
+length $a$ and the other state characterized by energy $\epsilon_b$ and
+length $b$.
+
+
+```
+
+```{admonition} **(c)**
+
+Calculate the average length, $\langle L \rangle$, of the
+molecule as a function of the tension, $\tau$.
+```
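A numerical sketch of the resulting average (variable names are illustrative; this is a check of the two-state result, not part of the required derivation):

```python
import math

def avg_length(N, beta, eps_a, eps_b, a, b, tau):
    """<L> for N independent two-state units held at tension tau.
    Each unit is unstretched (energy eps_a, length a) or stretched
    (energy eps_b, length b); the weights include the tau*L coupling."""
    w_a = math.exp(-beta * (eps_a - tau * a))
    w_b = math.exp(-beta * (eps_b - tau * b))
    return N * (a * w_a + b * w_b) / (w_a + w_b)

# large tension favors the stretched state, so <L> -> N * b
```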
+
+
+## Question 3: Magnetization of a paramagnet revisited
+
+In [Problem Set 1](../ps_1/problem_set_1.md), we used the microcanonical ensemble to show that the
+magnetization of a paramagnet is:
+
+$$\begin{aligned}
+M = N\mu \tanh (\beta \mu H)
+\end{aligned}$$
+
+The system is $N$ distinguishable, non-interacting spins that are fixed
+on a lattice and exposed to a magnetic field $H$ at constant
+temperature. Each spin $s_i$ has a magnetic moment of $\mu$, and can be
+in one of two states: parallel to the field ($s_i = 1$) or anti-parallel
+to the field ($s_i = -1$). The energy of a particular microstate is due
+only to interactions between the spins and the magnetic field:
+
+$$E = -\sum_{i=1}^N s_i \mu H$$
+
+The magnetization is defined as:
+
+$$\begin{aligned}
+M = \sum_i^N s_i \mu
+\end{aligned}$$
+
+```{admonition} Derive the magnetization of the system using the canonical ensemble.
+
+ Hint
+
+$\cosh(x) = \frac{e^x + e^{-x}}{2}$
+
+$\sinh(x) = \frac{e^x - e^{-x}}{2}$
+
+
+```
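The result can be verified by brute-force enumeration of all $2^N$ microstates for a small system; a sketch, with energies expressed in units of $k_B T$:

```python
import math
from itertools import product

def avg_magnetization(N, beta_mu_H):
    """Exact canonical <M>/mu by summing over all 2^N spin microstates."""
    Z = 0.0
    M_weighted = 0.0
    for spins in product((-1, 1), repeat=N):
        s = sum(spins)
        w = math.exp(beta_mu_H * s)   # Boltzmann weight for E = -mu H sum s_i
        Z += w
        M_weighted += s * w
    return M_weighted / Z

# agrees with N * tanh(beta * mu * H)
```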
+
+## Question 4: Numerical calculation of the magnetization of a paramagnet
+
+As an alternative to the analytical derivation of the magnetization of a
+paramagnet, the Metropolis Monte Carlo algorithm can be used to
+calculate the ensemble-average magnetization. We will derive this
+algorithm in a [later lecture](../../lecture_files/Lecture10.md);
+in this question, you will apply the algorithm
+to compare to the analytical result in Question 3.
+
+Consider a system of $N=100$ distinguishable, non-interacting spins in a
+magnetic field $H$ at constant temperature. Each spin is fixed to a
+lattice position such that the relative positions of spins cannot
+interchange. The system can assume a set of configurations in which each
+spin is either parallel to the field or antiparallel to the field, with
+the energy of each configuration defined in Question 3. The Metropolis
+Monte Carlo algorithm for calculating the ensemble-average magnetization
+consists of the following steps:
+
+1. Initialize the system in a random configuration.
+
+2. Randomly generate a new configuration, in which at least 1 spin has
+ switched orientation.
+
+3. Calculate the energy of the new configuration, $E_{\textrm{new}}$,
+ and old configuration, $E_{\textrm{old}}$.
+
+4. Calculate the probability of accepting the trial configuration,
+ $\alpha$, using the following equation:
+ $$\alpha = \text{min} \left ( 1, \exp \left \{ -\beta \left [ E_{\textrm{new}} - E_{\textrm{old}} \right ] \right \} \right )$$,
+ where $\beta = (k_B T)^{-1}$.
+
+5. Generate a uniform random number in the range \[0, 1\]. If the
+ random number is less than $\alpha$, then retain the new
+ configuration as the system configuration; the new configuration is
+ *accepted*. If the random number is greater than $\alpha$, then
+ retain the old configuration as the system configuration; the new
+ configuration is *rejected*. Note that all new configurations for
+ which the system energy is lowered are automatically accepted.
+
+6. Performing steps 2-5 defines a single Monte Carlo trial. Repeat
+ steps 2-5 for a large number of trials (discussed below).
+
+7. Calculate $\langle M \rangle$ by averaging values of $M$ over all
+ configurations sampled during the trials. Note that configurations
+ will be repeated when new configurations are rejected in step 5.
+
+Think carefully about how to efficiently implement this algorithm.
+**This problem is intended to provide you with practice in Python
+programming prior to the assignment of the simulation project.** Your
+grade for this problem will be based in part on the readability of your
+code and plots in addition to the accuracy of the solution.
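Steps 1-7 above can be sketched as follows (a minimal single-spin-flip implementation with illustrative parameter values; your own program should follow the problem statement):

```python
import math
import random

def metropolis_magnetization(n_spins=100, muH_over_kT=1.0,
                             n_trials=20000, seed=1):
    """Estimate <M>/(N mu) for non-interacting spins in a field."""
    rng = random.Random(seed)
    spins = [rng.choice((-1, 1)) for _ in range(n_spins)]    # step 1
    total = 0
    for _ in range(n_trials):
        i = rng.randrange(n_spins)                           # step 2: trial flip
        dE_over_kT = 2 * muH_over_kT * spins[i]              # steps 3-4
        if rng.random() < min(1.0, math.exp(-dE_over_kT)):   # step 5
            spins[i] *= -1                                   # accept the flip
        total += sum(spins)                                  # step 7: accumulate M
    return total / (n_trials * n_spins)
```

For these parameters the estimate should approach $\tanh(\beta \mu H)$ per spin, matching Question 3.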
+
+```{admonition} **(a)**
+Write a Python program that computes $\langle M \rangle$ for the
+paramagnetic spin lattice described in Question 3 using the Metropolis
+Monte Carlo algorithm described above. Attach a copy of your code with
+appropriate comments explaining your implementation.
+```
+
+```{admonition} **(b)**
+Perform independent Monte Carlo simulations for field strengths
+of $\mu H =$ -3.0 $k_BT$ to $\mu H =$ 3.0 $k_B T$ in increments of 0.5
+$k_B T$. Each simulation should include 100,000 Monte Carlo trials to
+obtain accurate approximations of $\langle M \rangle$. Plot
+$\langle M \rangle$ vs. $\mu H$ and compare to the analytical result
+from Question 3.
+```
+
+The Python code for part **a** and **b** is included on the Canvas
+website. The plots below show the number of MC iterations necessary to
+obtain converged values of the ensemble-average magnetization and the
+comparison between the ensemble-average magnetization and the analytical
+result from Question 3.
+
+![Convergence of ensemble-average
+magnetization.](pset_2_Magnetization_convergence.png){width="70%"}
+
+![Comparison of the ensemble-average magnetization obtained from the
+Monte Carlo method to the analytical solution. Points indicate the model
+calculations and the line indicates the analytical
+solution.](pset_2_Magnetization_H.png){width="70%"}
diff --git a/problems/ps_2/pset_2_Magnetization_H.PNG b/problems/ps_2/pset_2_Magnetization_H.PNG
new file mode 100644
index 0000000..dbe917a
Binary files /dev/null and b/problems/ps_2/pset_2_Magnetization_H.PNG differ
diff --git a/problems/ps_2/pset_2_Magnetization_convergence.PNG b/problems/ps_2/pset_2_Magnetization_convergence.PNG
new file mode 100644
index 0000000..d0d1029
Binary files /dev/null and b/problems/ps_2/pset_2_Magnetization_convergence.PNG differ
diff --git a/problems/ps_2/pset_2_pulling_polymer_fig.png b/problems/ps_2/pset_2_pulling_polymer_fig.png
new file mode 100644
index 0000000..62786d6
Binary files /dev/null and b/problems/ps_2/pset_2_pulling_polymer_fig.png differ
diff --git a/problems/ps_3/1D_Ising-01.png b/problems/ps_3/1D_Ising-01.png
new file mode 100644
index 0000000..d050bfc
Binary files /dev/null and b/problems/ps_3/1D_Ising-01.png differ
diff --git a/problems/ps_3/problem_set_3.md b/problems/ps_3/problem_set_3.md
new file mode 100644
index 0000000..af5677b
--- /dev/null
+++ b/problems/ps_3/problem_set_3.md
@@ -0,0 +1,385 @@
+# Problem Set 3 (Due Monday, October 9, 2023)
+
+## Question 1: Isomerization process
+
+Consider an isomerization process, $\ce{A \rightleftharpoons B}$, where
+$A$ and $B$ refer to different isomeric states of a single molecule. A
+dilute gas containing $N_A$ molecules in state $A$ and $N_B$ molecules
+in state $B$ undergoes this isomerization process at constant volume and
+constant temperature. The isomerization from state $A$ to $B$ is
+associated with a change in the energy of the molecule,
+$\Delta \epsilon = \epsilon_B - \epsilon_A$, where $\epsilon_A$ and
+$\epsilon_B$ are internal energies associated with each of the two
+states, respectively. According to [Maxwell-Boltzmann statistics](https://en.wikipedia.org/wiki/Maxwell%E2%80%93Boltzmann_statistics),
+the equilibrium ratio of the $A$ and $B$ populations is given by:
+
+$$\frac{\langle N_A \rangle}{\langle N_B \rangle} = \frac{\Omega_A}{\Omega_B} e^{\beta \Delta \epsilon}$$
+
+where $\Omega_A$ and $\Omega_B$ are the degeneracies for a single
+molecule in state $A$ and $B$, respectively. Assume that the gas can be
+treated as [ideal](../../lecture_files/Lecture6.md) (i.e., all molecules are non-interacting and
+independent) regardless of whether a molecule is in state $A$ or state
+$B$, and that the single-molecule translational partition function,
+$z_{\textrm{trans}}$, is the same for each state.
+
+```{admonition} **(a)**
+
+Show that the system partition function,
+$Z(N_A, N_B, V, T)$, can be written in terms of the single-molecule partition
+function, $z_a$, for a molecule in state $A$, the single-molecule
+partition function, $z_b$, for a molecule in state $B$, the
+single-molecule translational partition function, $z_{\textrm{trans}}$,
+and the number of molecules in each state ($N_A$ and $N_B$) as
+
+$$Z = \frac{\left (\Omega_A z_{\textrm{trans}}e^{-\epsilon_A/kT} \right )^{N_A}}{N_A!} \frac{\left (\Omega_B z_{\textrm{trans}}e^{-\epsilon_B/kT}\right )^{N_B}}{N_B!}$$
+
+Explain your rationale with each step.
+
+ Hints
+Consider what entities can be distinguished from one another, and therefore
+separated in our partition function formulation.
+
+```
+
+**(b)** Using the result from part **a** and assuming that both $N_A$
+and $N_B$ are large, show that the Boltzmann distribution law follows
+from the condition that the chemical potential of a molecule in state
+$A$ is equal to the chemical potential of a molecule in state $B$
+(*i.e.*, $\mu_A = \mu_B$), for a system at constant volume and constant
+temperature.
+
+In part a, we derived an expression for the canonical partition function
+of the entire combined system. The relationship between the canonical
+partition function and the Helmholtz free energy is:
+
+$$F = -k_B T \ln Z$$
+
+We can write the differential form of the Helmholtz free energy of a
+two-component mixture as:
+
+$$dF = -SdT - PdV + \mu_A dN_A + \mu_B dN_B$$
+
+Therefore, we can write an expression for the chemical potential of each
+species as:
+
+$$\begin{aligned}
+ \mu_A &= \left ( \frac{\partial F}{\partial N_A} \right )_{N_B, V, T} \\
+ &= -k_B T \left ( \frac{\partial \ln Z}{\partial N_A} \right )_{N_B, V, T} \\
+ \mu_B &= -k_B T \left ( \frac{\partial \ln Z}{\partial N_B} \right )_{N_A, V, T}
+
+\end{aligned}$$
+
+We can write $\ln Z$ for the partition function from part **a** as:
+
+$$\begin{aligned}
+ \ln Z &= \ln \frac{\left (\Omega_A z_{\textrm{trans}}e^{-\epsilon_A/kT} \right )^{N_A}}{N_A!} \frac{\left (\Omega_B z_{\textrm{trans}}e^{-\epsilon_B/kT}\right )^{N_B}}{N_B!} \\
+ &= N_A \ln \left ( \Omega_A z_{\textrm{trans}}e^{-\epsilon_A/kT}\right ) - N_A \ln N_A + N_A + N_B \ln \left ( \Omega_B z_{\textrm{trans}}e^{-\epsilon_B/kT}\right ) - N_B \ln N_B + N_B
+
+\end{aligned}$$
+
+Using this expression, we can write the chemical potential for each
+species as:
+
+$$\begin{aligned}
+ \mu_A &= -k_B T \left ( \frac{\partial\ln Z}{\partial N_A}\right )_{N_B, V, T} \\
+ &= -k_B T \left[ \ln \left ( \Omega_A z_{\textrm{trans}}e^{-\epsilon_A/kT}\right ) - \ln N_A \right] \\
+ &= -k_B T \ln \frac{\Omega_A z_{\textrm{trans}}e^{-\epsilon_A/kT}}{N_A} \\
+ \mu_B &=-k_B T \ln \frac{\Omega_B z_{\textrm{trans}}e^{-\epsilon_B/kT}}{N_B}
+
+\end{aligned}$$
+
+Setting $\mu_A = \mu_B$ yields:
+
+$$\begin{aligned}
+ \ln \frac{\Omega_A z_{\textrm{trans}}e^{-\epsilon_A/kT}}{N_A} &= \ln \frac{\Omega_B z_{\textrm{trans}}e^{-\epsilon_B/kT}}{N_B} \\
+ \frac{N_A}{N_B} &= \frac{\Omega_A z_{\textrm{trans}}e^{-\epsilon_A/kT}}{\Omega_B z_{\textrm{trans}}e^{-\epsilon_B/kT}} \\
+ \frac{N_A}{N_B} &= \frac{\Omega_A}{\Omega_B} e^{\beta \Delta \epsilon}
+
+\end{aligned}$$
+
+The last simplification is because the translational partition function
+for each state is identical as specified in the problem statement. We
+thus recover the Boltzmann distribution law.
+
+## Question 2: Adsorption of a gas mixture
+
+Consider a two-component mixture of two ideal gas species, denoted as
+$A$ and $B$, that can both adsorb to a surface. Each gas molecule of
+type $A$ adsorbs with an energy $\epsilon_i = -\epsilon_A$ while each
+gas molecule of type $B$ adsorbs with energy $\epsilon_i = -\epsilon_B$.
+Gas molecules of either type that are not adsorbed can be treated as an
+ideal gas, and gas molecules adsorbed on the surface can exchange with
+particles in the bulk. Gas particles exclude each other on the surface -
+*e.g.*, if an $A$ particle adsorbs to a part of the surface, a $B$
+particle cannot adsorb to the same part of the surface. The partial
+pressures of each gas can be written as $P_A = P x_A$ and $P_B = P x_B$,
+where $P$ is the pressure, $x_A$ is the mole fraction of gas $A$ in the
+mixture, $x_B$ is the mole fraction of gas $B$ in the mixture, and
+$x_A + x_B = 1$. Both the surface and bulk are at constant temperature
+and constant volume.
+
+**(a)** Derive a grand canonical partition function for the surface.
+Define any variables that you introduce.
+
+This question extends the Langmuir isotherm by allowing the
+adsorption of multiple components on a single lattice site. As in the
+Langmuir isotherm derivation, let us assume there are $N_s$
+distinguishable sites on the surface. Each site for a given
+configuration can be either unoccupied (0 energy), occupied by species
+$A$ ($-\epsilon_A$ energy), or occupied by species $B$ ($-\epsilon_B$
+energy). The partition function for a site in the grand canonical
+ensemble is then:
+
+$$\begin{aligned}
+ \xi &= \sum_j e^{-E_j/k_BT}e^{\mu_j N_j / k_B T} \\
+ &= 1 + e^{\beta(\epsilon_A + \mu_A^s)} + e^{\beta(\epsilon_B + \mu_B^s)}
+
+\end{aligned}$$
+
+Here, $\mu_A^s$ is the chemical potential of molecule type $A$ on the
+surface, $\mu_B^s$ is the chemical potential of molecule type $B$ on the
+surface. The partition function for the entire system of $N_s$ sites is
+then:
+
+$$\Xi = \xi^{N_s} = \left ( 1 + e^{\beta(\epsilon_A + \mu_A^s)} + e^{\beta(\epsilon_B + \mu_B^s)} \right )^{N_s}$$
+
+**(b)** Derive an expression for the fraction of the surface that is
+occupied by molecules of type $A$, $f_A$, and the fraction of the
+surface that is occupied by molecules of type $B$, $f_B$. Write the
+expressions for $f_A$ and $f_B$ in terms of the partial pressures of
+each component and any other constants that you choose to define.
+
+We can now express the fraction of the surface associated with each
+species. We define $N_A$ as the number of molecules of type $A$ adsorbed
+to the surface and $N_B$ as the number of molecules of type $B$ adsorbed
+to the surface. We can also write the thermodynamic potential
+corresponding to the grand canonical partition function as:
+
+$$\Sigma = -k_B T \ln \Xi$$
+
+Then:
+
+$$\begin{aligned}
+ f_A &= \frac{\langle N_A \rangle}{N_s} = -\frac{1}{N_s}\left ( \frac{\partial \Sigma}{\partial \mu_A^s} \right )_{\mu_B^s,V,T} \\
+ &= \frac{ e^{\beta(\epsilon_A + \mu_A^s)}}{ 1 + e^{\beta(\epsilon_A + \mu_A^s)} + e^{\beta(\epsilon_B + \mu_B^s)}} \\
+ f_B &= \frac{\langle N_B \rangle}{N_s} = -\frac{1}{N_s}\left ( \frac{\partial \Sigma}{\partial \mu_B^s} \right )_{\mu_A^s,V,T} \\
+ &= \frac{ e^{\beta(\epsilon_B + \mu_B^s)}}{ 1 + e^{\beta(\epsilon_A + \mu_A^s)} + e^{\beta(\epsilon_B + \mu_B^s)}}
+
+\end{aligned}$$
+
+In the multicomponent system, chemical equilibrium is satisfied by
+$\mu_A^s = \mu_A^g$ and $\mu_B^s = \mu_B^g$ where
+$\mu_A^g = \beta^{-1} \ln (\beta P_A\lambda_A^3)$ is the chemical
+potential in the gas phase, $P_A$ is the partial pressure of molecule
+$A$, and $\lambda_A$ is the thermal de Broglie wavelength of molecule
+$A$. Substituting in these expressions, we obtain the following:
+
+$$\begin{aligned}
+ f_A &= \frac{ \beta P_A\lambda_A^3 e^{\beta\epsilon_A }}{ 1 + \beta P_A\lambda_A^3 e^{\beta\epsilon_A } + \beta P_B\lambda_B^3 e^{\beta\epsilon_B }} \\
+ f_B &= \frac{ \beta P_B\lambda_B^3 e^{\beta\epsilon_B }}{ 1 + \beta P_A\lambda_A^3 e^{\beta\epsilon_A } + \beta P_B\lambda_B^3 e^{\beta\epsilon_B }}
+
+\end{aligned}$$
+
+If we define
+$P_A^0 =\left ( \frac{1}{\beta\lambda_A^3}\right )e^{-\beta \epsilon_A}$
+and
+$P_B^0 =\left ( \frac{1}{\beta\lambda_B^3}\right )e^{-\beta \epsilon_B}$
+we obtain an expression similar to the Langmuir isotherm:
+
+$$\begin{aligned}
+ f_A &= \frac{P_A / P_A^0}{1 + P_A / P_A^0 + P_B / P_B^0} \\
+ f_B &= \frac{P_B / P_B^0}{1 + P_A / P_A^0 + P_B / P_B^0}
+
+\end{aligned}$$
+
+This expression shows that the fractional surface coverage of a component
+increases if either its partial pressure increases (via $x_A$) or if the
+binding energy $\epsilon_A$ becomes larger, leading to a decrease in
+$P_A^0$.
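The final isotherm is simple enough to tabulate directly; a sketch (function and variable names are illustrative):

```python
def coverages(PA, PB, PA0, PB0):
    """Fractional coverages (f_A, f_B) for competitive Langmuir adsorption."""
    denom = 1 + PA / PA0 + PB / PB0
    return (PA / PA0) / denom, (PB / PB0) / denom

# with species B absent this reduces to the single-component
# Langmuir isotherm f_A = PA / (PA0 + PA)
```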
+
+## Question 3: 1D Ising model
+
+In class, we discussed the **Ising model** for a magnetic system in
+which a set of $N$ distinguishable magnetic dipoles are each
+characterized by a spin, $s_i$, that has one of two discrete values:
+$s_i = \pm 1$. The total magnetization, $M$, can be approximated as the
+sum of all spins, $M = \sum_{i=1}^{N} \mu s_i$. Spins are placed on a
+fixed lattice and can interact with an external magnetic field, $H$, and
+other nearest-neighbor spins. The energy with which spin $i$ interacts
+with its neighbors is:
+
+$$\epsilon_i = - J \sum_j^\prime s_i s_j$$
+
+Here, $J$ is the value of the coupling parameter (with units of energy)
+that describes the interaction energy between each *pair* of
+nearest-neighbor spins. The sum $\sum_j^\prime$ indicates a sum over all
+$n$ nearest-neighbor spins of $i$ on the lattice. The *total* energy of
+a single microstate is:
+
+$$E = -\sum_i^N \mu H s_i - \frac{J}{2} \sum_{i}^N \sum_j^\prime s_i s_j$$
+
+The factor of $1/2$ in front of the interaction energy term accounts for
+the double-counting of interactions between pairs of spins when summing
+over the entire system.
+
+In this problem, we will analytically solve the **1D Ising model**,
+which refers to a set of spins placed on a one-dimensional lattice, so
+that each spin has at most two nearest neighbors. We will assume **free
+boundary conditions**, meaning that the spins at the beginning and end
+of the chain ($s_1$ and $s_N$) have only a single neighbor. An example
+of a single microstate of a 1D Ising model is schematically drawn
+below.
+
+![image](1D_Ising-01.png){width="30%"}
+
+**(a)** Show that in the zero field limit ($H=0$), the partition
+function for the 1D Ising model is:
+
+$$\begin{aligned}
+Z &=2 \left[2 \cosh(\beta J)\right]^{N-1}
+\end{aligned}$$
+
+Hint: Consider each *pair* of spins when simplifying the partition
+function.
+
+The energy of a configuration in the absence of a field ($H=0$) is:
+
+$$\begin{aligned}
+E &= -\frac{J}{2} \sum_i^N\sum_j^\prime s_i s_j
+\end{aligned}$$
+
+The full partition function, accounting for all possible states, is
+then:
+
+$$\begin{aligned}
+Z &= \sum_{s_1=\pm 1} \sum_{s_2=\pm 1} \dots \sum_{s_N =\pm 1}\exp \left ( \frac{\beta J}{2} \sum_i^N\sum_j^\prime s_i s_j \right )
+\end{aligned}$$
+
+We can simplify the expression for the energy by recognizing that spin
+$i$ interacts only with spin $i-1$ and spin $i+1$. We can exactly count
+all of these interactions *without* overcounting by starting at spin $1$
+and counting interactions between spin $i$ and spin $i+1$ only, while
+summing over all spins. We exclude the last spin because there is no
+$i+1$ interaction. This gives:
+
+$$\begin{aligned}
+E &= -J \sum_i^{N-1} s_i s_{i+1} \\
+&= -J \sum_i^{N-1}\sigma_i^\prime
+\end{aligned}$$
+
+Here, $i$ now indicates the index of the pair of spins and
+$\sigma_i^\prime = s_i s_{i+1}$. We have eliminated the factor of $1/2$
+because there is no longer any overcounting when we sum over the $N-1$
+interactions as opposed to summing over the $N$ spins. The full
+partition function is then:
+
+$$\begin{aligned}
+Z &= \sum_{s_1=\pm 1} \sum_{s_2=\pm 1} \dots \sum_{s_N =\pm 1}\exp \left ( \beta J \sum_i^{N-1}\sigma_i^\prime \right )
+\end{aligned}$$
+
+The outer sums of this partition function iterate over all possible
+arrangements of spins in the system to enumerate all possible
+microstates. However, for a 1D system, we can write each arrangement by
+summing over each *pair* of spins using the index as noted above. In
+general, each pair of spins can only be in one of two states: the spins
+can either be aligned or unaligned. The alignment of one pair of spins
+does not affect the next pair of spins: while individual spins are
+correlated, pairs of spins are uncorrelated and independent. We can thus
+factorize the partition function into $N-1$ independent
+"single-pair-of-spins" partition functions in analogy with the
+factorization of partition functions performed for the ideal gas:
+
+$$\begin{aligned}
+z &= \sum_{\sigma_i^\prime = \pm 1} \exp \left ( \beta J \sigma_i^\prime \right )\\
+&= \exp \left (\beta J \right ) + \exp \left (-\beta J \right )
+\end{aligned}$$
+
+There are two terms in the single-pair-of-spins partition function
+accounting for the two spins either being aligned or unaligned. Pairs of
+spins are distinguishable because they are positioned on a lattice,
+allowing us to write the full partition function in terms of the
+single-pair-of-spins partition function without dividing by $(N-1)!$.
+Finally, we recognize that we have to add a prefactor of 2 to the full
+partition function because we can generate a unique microstate by
+inverting the sign of all spins, doubling the number of possible states.
+The full partition function is then:
+
+$$\begin{aligned}
+Z &= 2z^{N-1} \\
+&= 2\left[ \exp \left (\beta J \right ) + \exp \left (-\beta J \right ) \right]^{N-1} \\
+&= 2\left[2 \cosh(\beta J)\right]^{N-1}
+\end{aligned}$$
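+This closed form can be verified by brute-force enumeration of all
+$2^N$ microstates for a small chain (a sanity check only; the values of
+$N$ and $\beta J$ below are arbitrary test choices):
+
+```python
+from itertools import product
+from math import cosh, exp
+
+# Brute-force the 1D Ising partition function with free boundaries (H = 0)
+# and compare to Z = 2 [2 cosh(beta J)]^(N-1). Small N and beta*J = 1 are
+# arbitrary choices for the check.
+def Z_exact(N, bJ):
+    total = 0.0
+    for spins in product((-1, 1), repeat=N):
+        # energy in units of k_B*T: -beta*J * sum of nearest-neighbor bonds
+        E_over_kT = -bJ * sum(spins[i] * spins[i + 1] for i in range(N - 1))
+        total += exp(-E_over_kT)
+    return total
+
+N, bJ = 6, 1.0
+Z_closed = 2 * (2 * cosh(bJ)) ** (N - 1)
+```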
+
+**(b)** Calculate the ensemble-average energy per spin in the limit that
+$N \rightarrow \infty$.
+
+We can obtain the average energy from an appropriate derivative of the
+partition function. We can relate the partition function to the
+ensemble-average energy using the expression from Lecture 4:
+
+$$\begin{aligned}
+\langle E \rangle &= -\left ( \frac{\partial \ln Z }{\partial \beta}\right )_{N,V}
+\end{aligned}$$
+
+We can first simplify $\ln Z$:
+
+$$\begin{aligned}
+Z &= 2\left[2 \cosh(\beta J)\right]^{N-1} \\
+\ln Z &= \ln \left ( 2\left[2 \cosh(\beta J)\right]^{N-1} \right ) \\
+&= \ln 2 + (N-1) \ln 2 + (N-1) \ln \cosh(\beta J)
+\end{aligned}$$
+
+The first two terms have no $\beta$ dependence, so when we evaluate the
+derivative only the third term survives. The average energy is then:
+
+$$\begin{aligned}
+\langle E \rangle &= -(N-1) J \frac{\sinh(\beta J)}{\cosh(\beta J)} \\
+&= -(N-1) J \tanh(\beta J)
+\end{aligned}$$
+
+We assume that as $N\rightarrow \infty$, $N-1 \approx N$. The
+ensemble-average energy per spin is then:
+
+$$\begin{aligned}
+\frac{\langle E \rangle}{N} &= -\frac{N-1}{N} J \tanh(\beta J) \\
+&\approx -J \tanh(\beta J)
+\end{aligned}$$
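+Since the result $\langle E \rangle = -(N-1) J \tanh(\beta J)$ is exact
+for any $N$, it can also be checked by direct enumeration for a small
+chain (a sketch with arbitrary test values of $N$ and $J$; units are
+chosen so that $\beta = 1$):
+
+```python
+from itertools import product
+from math import exp, tanh
+
+# Boltzmann-weighted average energy of a small free-boundary 1D Ising
+# chain (H = 0, beta = 1), compared against -(N-1) J tanh(beta J).
+def avg_energy(N, J):
+    Z = 0.0
+    E_sum = 0.0
+    for spins in product((-1, 1), repeat=N):
+        E = -J * sum(spins[i] * spins[i + 1] for i in range(N - 1))
+        w = exp(-E)  # Boltzmann weight with beta = 1
+        Z += w
+        E_sum += E * w
+    return E_sum / Z
+
+N, J = 6, 1.0
+E_closed = -(N - 1) * J * tanh(J)
+```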
+
+## Question 4: Monte Carlo simulation of the Ising model
+
+In Problem Set 2, we introduced the Metropolis Monte Carlo algorithm as
+a means of computing ensemble-average quantities and used it to study
+the magnetization of non-interacting spins. We will now use the same
+method to study the 1D Ising model that we analytically solved in
+Question 3.
+
+Consider a 1D Ising model in which $N = 100$ spins are placed on a 1D
+lattice, such that each lattice point has $n=2$ nearest-neighbors.
+Assume that the system has free boundaries so that spin $1$ and spin
+$100$ only have 1 nearest-neighbor (this directly mimics the problem
+from Question 3).
+
+**(a)** Write a Python program that computes $\langle E \rangle / N$ for
+the 1D Ising model using the Metropolis Monte Carlo algorithm described
+above. Attach a copy of your code with appropriate comments explaining
+your implementation.
+
+**(b)** Perform independent Monte Carlo simulations for $J =$ -3, -2,
+-1, 0, 1, 2, and 3 $k_B T$. Each simulation should include 100,000
+trials to obtain accurate approximations of $\langle E \rangle / N$.
+Plot $\langle E \rangle / N$ vs. $J$ and compare to the analytical
+result from Question 3.
+
+The Python code for parts **a** and **b** is included on the Canvas
+website. The plots below show the number of MC iterations necessary to
+obtain converged values of the ensemble-average energy and the
+comparison between the ensemble-average energy and the analytical result
+from Question 3.
+
+![Convergence of ensemble-average
+energy.](pset_3_Energy_convergence.png){width="70%"}
+
+![Comparison of the ensemble-average energy obtained from the Monte
+Carlo method to the analytical solution. Points indicate the model
+calculations and the line indicates the analytical
+solution.](pset_3_Energy_J.png){width="70%"}
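+Since the Canvas code itself is not reproduced here, a minimal sketch of
+one possible Metropolis implementation is shown below (the trial count,
+sampling interval, and seed are illustrative choices, not necessarily
+those used for the plots):
+
+```python
+import math
+import random
+
+# Metropolis Monte Carlo for the free-boundary 1D Ising model (H = 0).
+# Energies are in units of k_B*T, so the acceptance factor is exp(-dE).
+def ising_1d_energy_per_spin(N=100, J=1.0, n_trials=100_000, seed=0):
+    rng = random.Random(seed)
+    s = [rng.choice((-1, 1)) for _ in range(N)]
+
+    def local_E(i):
+        # energy of the bonds touching spin i (end spins have one bond)
+        E = 0.0
+        if i > 0:
+            E -= J * s[i] * s[i - 1]
+        if i < N - 1:
+            E -= J * s[i] * s[i + 1]
+        return E
+
+    samples = []
+    for t in range(n_trials):
+        i = rng.randrange(N)
+        dE = -2.0 * local_E(i)  # flipping s_i negates its local energy
+        if dE <= 0 or rng.random() < math.exp(-dE):
+            s[i] = -s[i]
+        if t % 100 == 0:
+            E = -J * sum(s[k] * s[k + 1] for k in range(N - 1))
+            samples.append(E / N)
+    # discard the first half of the samples as equilibration
+    tail = samples[len(samples) // 2:]
+    return sum(tail) / len(tail)
+```
+
+For $J = 1$ $k_B T$ this converges toward the analytical value
+$-\frac{N-1}{N} J \tanh(\beta J)$ from Question 3.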
diff --git a/problems/ps_3/pset_3_Energy_J.PNG b/problems/ps_3/pset_3_Energy_J.PNG
new file mode 100644
index 0000000..24e69d8
Binary files /dev/null and b/problems/ps_3/pset_3_Energy_J.PNG differ
diff --git a/problems/ps_3/pset_3_Energy_convergence.PNG b/problems/ps_3/pset_3_Energy_convergence.PNG
new file mode 100644
index 0000000..56a917b
Binary files /dev/null and b/problems/ps_3/pset_3_Energy_convergence.PNG differ
diff --git a/problems/simulation_project.md b/problems/simulation_project.md
new file mode 100644
index 0000000..7ffc1bd
--- /dev/null
+++ b/problems/simulation_project.md
@@ -0,0 +1,255 @@
+# Simulation Project (Due Monday, November 6, 2023)
+
+**General point deductions**
+For each plot, if you are missing the title, axis label(s), or have not labeled
+your units, you will incur a 1 point deduction per missing item per plot.
+
+## Resources for learning Python
+1. The links for CS 301: Intro to Data Programming are useful for setting up Python on multiple platforms. The class website is [here](http://pages.cs.wisc.edu/~cs301-1/).
+2. DoIT manages subscriptions to Lynda.com, which contains ~2 hour long training videos on Python. See the links [here](https://sts.doit.wisc.edu/) and [here](https://www.lynda.com/search?q=Python).
+3. [Google’s Python class](https://developers.google.com/edu/python/?csw=1)
+4. [Python tutorial](https://docs.python.org/3/tutorial/)
+
+
+## Question 1: Ferromagnetic phase transition
+
+Ferromagnetic materials undergo an order-disorder transition as a
+function of temperature. The 2D Ising model is able to reproduce this
+transition. Recall that the [Ising model](../lecture_files/Lecture8A) consists of $N$ distinguishable
+spins on a lattice. Each spin has one of two discrete values:
+$s_i = \pm 1$. The magnetization, $M$, is the sum of all spins,
+$M = \sum_{i=1}^{N} \mu s_i$. Spins can interact with an external
+magnetic field, $H$, and other nearest-neighbor spins. For this problem
+we will assume $H=0$. The energy with which spin $i$ interacts with its
+neighbors is:
+
+```{math}
+\epsilon_i = - J \sum_j^\prime s_i s_j
+```
+
+$J$ is the coupling parameter (with units of energy) that determines the
+interaction energy between each *pair* of nearest-neighbor spins. The
+sum $\sum_j^\prime$ indicates a sum over all $n$ nearest-neighbor spins
+of $i$ on the lattice. The *total* energy of the system is:
+
+$$E_\nu = - \frac{J}{2} \sum_{i}^N \sum_j^\prime s_i s_j$$
+
+The factor of $1/2$ accounts for the double-counting of interactions
+between pairs of spins when summing over the entire system. In class, we
+used a mean-field approximation to show that
+
+$$T_C = \frac{nJ}{k_B}$$
+
+$T_C$ is the temperature at which the system transitions from non-zero
+magnetization to zero magnetization. However, the mean-field
+approximation introduces significant errors. In this problem, we will
+show that Monte Carlo simulations can identify a more accurate
+expression for $T_C$.
+
+
+```{admonition} **(a)**
+
+Develop a simulation representation of an Ising model in which
+$N = 400$ spins are placed on a 2D, periodic, square lattice, such that
+each lattice point has $n=4$ nearest-neighbors. Plot the absolute value
+of the ensemble-average magnetization per spin,
+$| \langle M \rangle / N |$, and the heat capacity, $C_V$, as a function
+of temperature for a coupling parameter $J = 1.0$.
+
+Choose a temperature range of $T = 0.1$ $J/k_B$ to $T = 6.0$ $J/k_B$ in increments of $0.1$
+$J/k_B$. The coupling parameter is expressed in dimensionless units;
+accordingly, $J$ is the unit of energy and $J/k_B$ is the unit of
+temperature.
+
+I suggest running for at least 1,000,000 trials, saving the
+magnetization every 500 steps, and averaging the results from multiple
+independent simulations. Explain any other choices you make for
+simulation parameters. Consider which pairwise interactions actually
+contribute during evaluation of the MC acceptance condition; it is not
+necessary to compute the energy of the entire lattice.
+
+20 points available, click for grading breakdown
+
+- Magnetization plot: 5 pts
+- $C_{v}$ plot: 5 pts
+- Correct plot shape: 4 pts
+- Correct smoothness (sampled a few times or reached good
+ convergence): 2 pts
+- Explain how each curve was calculated: 4 pts
+
+
+```
+
+```{admonition} **(b)**
+
+A [first-order phase transition](https://en.wikipedia.org/wiki/Phase_transition#Ehrenfest_classification)
+is characterized by a discontinuity in a first derivative of the free
+energy with respect to temperature at the transition, while a
+[second-order phase transition](https://en.wikipedia.org/wiki/Phase_transition#Ehrenfest_classification)
+is characterized by the divergence of a second derivative of the free
+energy with respect to temperature at the transition. Based on your
+results in part **a**, is the ferromagnetic order-disorder transition
+classified as a first- or second-order phase transition? Explain your
+rationale.
+
+10 points available, click for grading breakdown
+
+- Correct order identified: 5 pts
+
+- Rationale: 5 pts
+
+```
+
+```{admonition} **(c)**
+Determine an approximate relationship between $T_C$ and $J$
+(assuming $J>0$) for the 2D Ising model using your simulations. You may
+assume that $T_C$ and $J$ are linearly related. Compare your expression
+to the mean-field result and the exact Onsager solution for the 2D Ising
+model.
+
+15 points available, click for grading breakdown
+
+- Comparison: 5 pts
+
+- Show that Ising model is better: 5 pts
+
+- Explicitly write Onsager's solution: 3 pts
+
+- Multiple J values tested: 2 pts
+
+```
+
+```{admonition} **(d)**
+Explain the competition between driving forces in the Ising
+model and why the values of the magnetization identified at temperatures
+above and below the Curie temperature are obtained. Assume that $J>0$ in
+your explanation.
+
+5 points available, no grading breakdown
+```
+
+## Question 2: Molecular dynamics simulations of a Lennard-Jones fluid
+
+Consider a set of particles that interact via a {term}`Lennard-Jones potential`.
+This system, referred to as a *Lennard-Jones fluid*, captures the
+behavior of simple solvents. Develop a simulation model for a
+Lennard-Jones fluid in which $N = 125$ particles are placed in a 3D,
+periodic, cubic simulation box. Assume particles only interact via a
+Lennard-Jones potential with characteristic energy $\epsilon$ and length
+$\sigma$ which set the energy and length scales of the simulations,
+respectively.
+
+Recommendations
+
+- Use a dimensionless timestep of $\Delta t = 0.001$.
+
+- Assume the mass of each particle is $m=1.0$.
+
+- Set the Boltzmann constant to $k_B = 1.0$.
+
+- Sample and save system properties every 100 timesteps.
+
+- For your final simulations, run for at least 100,000 timesteps (this
+ should take $< 15$ mins).
+
+- Reduce the number of particles and simulation timesteps while you are
+  testing code.
+
+- Save your trajectories (positions and velocities) to prevent running
+  the code an excessive number of times.
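+The velocity Verlet update used throughout this question can be
+sketched generically as follows (the `forces` callback and the array
+shapes are assumptions; $\Delta t$ and $m$ follow the recommendations
+above):
+
+```python
+import numpy as np
+
+# One velocity Verlet step: half-kick, drift, force update, half-kick.
+# x, v, f are (N, 3) arrays; `forces` returns the force array for given
+# positions (including periodic images in an actual LJ simulation).
+def velocity_verlet_step(x, v, f, forces, dt=0.001, m=1.0):
+    v_half = v + 0.5 * dt * f / m
+    x_new = x + dt * v_half
+    f_new = forces(x_new)
+    v_new = v_half + 0.5 * dt * f_new / m
+    return x_new, v_new, f_new
+```
+
+Without a thermostat this integrator should conserve the total energy
+closely, which is one way to validate your implementation in part **a**.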
+
+
+
+```{admonition} **(a)**
+Simulate a Lennard-Jones fluid initialized at a temperature
+$T=1.2$ $\epsilon/k_B$ and a density $\rho = 0.85$ particles/$\sigma^3$
+using Molecular Dynamics with a velocity Verlet integrator and no
+thermostat. Plot the kinetic energy, potential energy, and total energy
+per particle as a function of the simulation time and comment on their
+relative values. Indicate the approximate number of timesteps necessary
+for the system to reach equilibrium.
+
+15 points available, click for grading breakdown
+
+- Plot Kinetic, Potential and Total Energy: 5 pts
+
+- Constant energy demonstrated: 5 pts
+
+- Indicated approx number of steps: 5 pts
+
+
+```
+
+```{admonition} **(b)**
+Using an Andersen thermostat, rerun the molecular dynamics
+simulation of the same system with the temperature maintained at
+$T = 1.2$ $\epsilon/k_B$ using a coupling constant $\eta = 1$. Plot the
+kinetic energy, potential energy, and total energy per particle as a
+function of the simulation time, and indicate the number of timesteps
+necessary for the system to reach thermal equilibrium. Compare features
+of these plots to the results from part **a**.
+
+15 points available, click for grading breakdown
+
+- Plot Kinetic, Potential and Total Energy: 5 pts
+
+- Energy fluctuations, but convergence demonstrated: 5 pts
+
+- Compare results: 3 pts
+
+- Correct comparison: 2 pts
+
+```
+
+```{admonition} **(c)**
+Plot the equilibrium distribution of particle **speeds** for the
+simulations from part **a** and part **b**. Specify the time interval
+used for the calculation. Compare these results to the Maxwell-Boltzmann
+distribution and calculate the ensemble-average temperature for the
+thermostatted simulation. Explain if your results match your
+expectations. I recommend using a histogram size of
+$0.2 \sigma/\text{unit time}$.
+
+10 points available, click for grading breakdown
+
+- Plot speeds correctly: 5 pts
+
+- Compare results: 3 pts
+
+- Correct comparison: 2 pts
+
+```
+
+```{admonition} **(d)**
+Perform MD simulations of Lennard-Jones fluids at $T=1.5$
+$\epsilon/k_B$ and at densities of $\rho = 0.01$, $\rho = 0.10$, and
+$\rho=0.85$ particles/$\sigma^3$ and compare the corresponding radial
+distribution functions. Specify the time interval used for the
+calculation and report the length scale in units of $\sigma$. Comment on
+key features of the plot. I recommend using a histogram size of 0.02
+$\sigma$. Explain system phase behavior based on your results.
+Simulation visualization is also recommended.
+
+5 points available, click for grading breakdown
+
+- Correct RDFs (zero at large r, not showing past L/2): 3 pts
+
+- Compare results for three different densities: 2 pts
+
+```
+
+```{admonition} **(e)**
+Calculate the chemical potential of a Lennard-Jones particle for
+each of the systems considered in part **d**. Recall that the chemical
+potential is the change in the free energy of the system upon addition
+or subtraction of a single particle. Explain the method that you use for
+this calculation. Hint: you do not need to perform any additional
+simulations if you have saved your trajectories.
+
+5 points available, click for grading breakdown
+
+- Explain method: 2 pts
+
+- Attempted: 1 pt
+
+- Correct results: 2 pts
+
+```