Commit e1dfd6d (parent: 72a7ab5), showing 1 changed file with 4 additions and 4 deletions.
@@ -4,9 +4,9 @@
LAMMPS has numerous built-in potentials for simulations of solid-state, soft matter, and coarse-grained systems. It can be run on a single processor or in parallel using MPI. To learn more, see the [LAMMPS website](https://www.lammps.org/#gsc.tab=0).

-The versions of LAMMPS on Kestrel, Swift, Vermillion at the time of this page being published are 23Jun22 and 02AUG23 versions. The following packages have been installed: asphere, body, bocs, class2, colloid, dielectric, diffraction, dipole, dpd-basic, drude, eff, electrode, extra-fix, extra-pair, fep, granular, h5md, intel, interlayer, kspace, manifold, manybody, mc, meam, misc, molecule, mpiio, openmp, opt, python, phonon, qep, qmmm, reaction, reaxff, replica, rigid, shock, spin, voronoi.
+The versions of LAMMPS available on Kestrel, Swift, and Vermilion can be checked with `module avail lammps`. Usually the two most recent stable versions are available, compiled with different compiler and MPI toolchains. The following packages have been installed: asphere, body, bocs, class2, colloid, dielectric, diffraction, dipole, dpd-basic, drude, eff, electrode, extra-fix, extra-pair, fep, granular, h5md, intel, interlayer, kspace, manifold, manybody, mc, meam, misc, molecule, mpiio, openmp, opt, python, phonon, qep, qmmm, reaction, reaxff, replica, rigid, shock, spin, voronoi.

-If you need other packages, please [contact us](mailto:[email protected]).
+If you need other packages or a certain LAMMPS version, please [contact us](mailto:[email protected]).
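The added paragraph above points users to `module avail lammps` for discovering installed versions. As a quick illustration, a shell session for checking and loading a LAMMPS build might look like the sketch below; the version string shown is hypothetical and the actual module names and executable name will differ by system and build date.

```
# List the LAMMPS modules installed on the current system
module avail lammps

# Load one of the listed builds (the version string here is only an example)
module load lammps/080223-intel-mpich

# Show where the loaded build's executable lives (binary name may differ by build)
which lmp
```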

## Sample CPU Slurm Script
A sample Slurm script for running LAMMPS on Kestrel CPU nodes is given below:

@@ -16,7 +16,7 @@ A sample Slurm script for running LAMMPS on Kestrel CPU nodes is given below:
#SBATCH --job-name cpu-test
#SBATCH --nodes=2   # Request two CPU nodes
#SBATCH --time=1:00:00
-#SBATCH --account=<your allocation name>
+#SBATCH --account=[your allocation name]
#SBATCH --error=std.err
#SBATCH --output=std.out
#SBATCH --tasks-per-node=104

@@ -35,7 +35,7 @@ $run_cmd $lmp_path -in $name.in >& $name.log
where `my_job.in` is the input and `my_job.log` is the output. This runs LAMMPS using two nodes with 208 MPI ranks.
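The hunk context above shows the run line `$run_cmd $lmp_path -in $name.in >& $name.log`; the definitions of `run_cmd`, `lmp_path`, and `name` live in the part of the script not shown in this diff. A minimal sketch of what such definitions could look like is given below; the module name, executable lookup, and srun options are assumptions for illustration, not the repository's actual values.

```
# Hypothetical definitions for the variables used in the run line above.
# Adjust the module and binary names to whatever `module avail lammps` reports.
module load lammps             # illustrative; a full version string is normally given
name=my_job                    # input is my_job.in, log goes to my_job.log
lmp_path=$(which lmp)          # path to the LAMMPS executable provided by the module
run_cmd="srun --ntasks=208"    # 2 nodes x 104 tasks per node = 208 MPI ranks

$run_cmd $lmp_path -in $name.in >& $name.log
```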

## Sample GPU Slurm Script
-A sample Slurm script for running LAMMPS on Kestrel CPU nodes is given below:
+A sample Slurm script for running LAMMPS on Kestrel GPU nodes is given below:

```
#!/bin/bash