
Update lammps.md
sinolonghai authored Dec 16, 2024
1 parent 72a7ab5 commit e1dfd6d
Showing 1 changed file with 4 additions and 4 deletions.
8 changes: 4 additions & 4 deletions docs/Documentation/Applications/lammps.md
@@ -4,9 +4,9 @@

LAMMPS has numerous built-in potentials for simulations of solid-state, soft matter, and coarse-grained systems. It can be run on a single processor or in parallel using MPI. To learn more, see the [LAMMPS website](https://www.lammps.org/#gsc.tab=0).

-The versions of LAMMPS on Kestrel, Swift, Vermillion at the time of this page being published are 23Jun22 and 02AUG23 versions. The following packages have been installed: asphere, body, bocs, class2, colloid, dielectric, diffraction, dipole, dpd-basic, drude, eff, electrode, extra-fix, extra-pair, fep, granular, h5md, intel, interlayer, kspace, manifold, manybody, mc, meam, misc, molecule, mpiio, openmp, opt, python, phonon, qep, qmmm, reaction, reaxff, replica, rigid, shock, spin, voronoi.
+The versions of LAMMPS available on Kestrel, Swift, and Vermilion can be checked with `module avail lammps`. Usually the two most recent stable versions are available, each compiled with different compiler and MPI toolchains. The following packages have been installed: asphere, body, bocs, class2, colloid, dielectric, diffraction, dipole, dpd-basic, drude, eff, electrode, extra-fix, extra-pair, fep, granular, h5md, intel, interlayer, kspace, manifold, manybody, mc, meam, misc, molecule, mpiio, openmp, opt, python, phonon, qep, qmmm, reaction, reaxff, replica, rigid, shock, spin, voronoi.

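For reference, checking and loading one of the installed versions from a login shell might look like the sketch below; the exact module version strings listed by `module avail` vary by system and over time:

```
# List the LAMMPS modules installed on the current system
module avail lammps

# Load one of the listed builds; the version string here is only a placeholder
module load lammps/<version shown by module avail>
```
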
-If you need other packages, please [contact us](mailto:[email protected]).
+If you need other packages or a specific LAMMPS version, please [contact us](mailto:[email protected]).

## Sample CPU Slurm Script
A sample Slurm script for running LAMMPS on Kestrel CPU nodes is given below:
@@ -16,7 +16,7 @@ A sample Slurm script for running LAMMPS on Kestrel CPU nodes is given below:
#SBATCH --job-name cpu-test
#SBATCH --nodes=2 #Request two CPU nodes
#SBATCH --time=1:00:00
-#SBATCH --account=<your allocation name>
+#SBATCH --account=[your allocation name]
#SBATCH --error=std.err
#SBATCH --output=std.out
#SBATCH --tasks-per-node=104
@@ -35,7 +35,7 @@ $run_cmd $lmp_path -in $name.in >& $name.log
where `my_job.in` is the input and `my_job.log` is the output. This runs LAMMPS using two nodes with 208 MPI ranks.
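
The tail of the CPU script shown above references a few shell variables; set up roughly as in the sketch below, it reproduces the run described here (the module name and executable name are assumptions, not the exact contents of the documented script):

```
# Hypothetical setup for the run line above
module load lammps              # exact module/version string is an assumption

name=my_job                     # expects my_job.in in the working directory
lmp_path=lmp                    # LAMMPS executable from the module (assumed name)
run_cmd="srun -n 208"           # 2 nodes x 104 tasks-per-node = 208 MPI ranks

$run_cmd $lmp_path -in $name.in >& $name.log   # output goes to my_job.log
```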

## Sample GPU Slurm Script
-A sample Slurm script for running LAMMPS on Kestrel CPU nodes is given below:
+A sample Slurm script for running LAMMPS on Kestrel GPU nodes is given below:

```
#!/bin/bash
# ... remaining lines of the script are collapsed in this diff view
```
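
A minimal sketch of the shape such a GPU script might take is given below; the node and GPU requests, module name, and run command are assumptions to be adapted to the actual Kestrel GPU configuration and LAMMPS build, not the documented script itself:

```
#!/bin/bash
#SBATCH --job-name gpu-test
#SBATCH --nodes=1                    # assumed: a single GPU node
#SBATCH --time=1:00:00
#SBATCH --account=[your allocation name]
#SBATCH --gres=gpu:4                 # assumed: request the node's GPUs
#SBATCH --error=std.err
#SBATCH --output=std.out

# Module name is an assumption; check `module avail lammps` for GPU-enabled builds
module load lammps

# One MPI rank per GPU is a common starting point; the acceleration flags
# (e.g. for the KOKKOS or GPU package) depend on how the build was configured
srun -n 4 lmp -in my_job.in >& my_job.log
```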
