NAMD
Summary and Version Information
Package | Namd |
---|---|
Description | namd: Scalable molecular dynamics |
Categories | Biology, Chemistry, Research, Science |
Version | Module tag | Availability* | GPU Ready | Notes |
---|---|---|---|---|
2.6 | namd/2.6 | Non-HPC Glue systems, Deepthought HPCC, 64bit-Linux | N | |
2.7b1 | namd/2.7b1 | Non-HPC Glue systems, Deepthought HPCC, 64bit-Linux | N | |
2.8 | namd/2.8 | Non-HPC Glue systems, Deepthought HPCC, 64bit-Linux | N | |
2.9 | namd/2.9/cuda | Non-HPC Glue systems, Deepthought HPCC, Deepthought2 HPCC, 64bit-Linux | Y | OpenMPI enabled, CUDA enabled |
2.9 | namd/2.9/nocuda | Non-HPC Glue systems, Deepthought HPCC, Deepthought2 HPCC, 64bit-Linux | N | OpenMPI enabled |
2.9-PACE | namd/2.9-PACE/nocuda | Non-HPC Glue systems, Deepthought HPCC, Deepthought2 HPCC, 64bit-Linux | N | OpenMPI enabled, patched to support PACE force field |
2.10 | namd/2.10/cuda | Non-HPC Glue systems, Deepthought2 HPCC, 64bit-Linux | Y | OpenMPI enabled, CUDA enabled |
2.10 | namd/2.10/nocuda | Non-HPC Glue systems, Deepthought2 HPCC, 64bit-Linux | N | OpenMPI enabled |
Notes:
*: A package labelled as "available" on an HPC cluster can be used on the compute nodes of that cluster. Software not listed as available on an HPC cluster is generally still available on the login nodes of the cluster (assuming it is available for the appropriate OS version, e.g. RedHat Linux 6 for the two Deepthought clusters). This is because the compute nodes do not use AFS and instead have local copies of the AFS software tree, so we only install packages there as requested. Contact us if you need a version listed as not available on one of the clusters.
In general, you need to prepare your Unix environment to be able to use this software. To do this, run either:
tap TAPFOO
or
module load MODFOO
where TAPFOO and MODFOO are the tags in the tap and module columns above, respectively. The tap command will print a short usage text (use -q to suppress this; this is needed in startup dot files); you can get a similar text with module help MODFOO. See the documentation on the tap and module commands for more information.
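For example, to set up one of the NAMD builds from the table above (the namd/2.9/nocuda tag is used here purely for illustration; substitute the module tag you actually want):
module load namd/2.9/nocuda
module help namd/2.9/nocuda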
For packages that are libraries against which other codes are built, see the section on compiling codes for more help.
Tap/module commands listed with a version of current set up for what we consider the most current stable and tested version of the package installed on the system. The exact version is subject to change with little if any notice, and might be platform dependent. Versions labelled new represent a newer version of the package which is still being tested by users; if stability is not a primary concern, you are encouraged to use it. Those with versions listed as old set up for an older version of the package; you should only use this if the newer versions are causing issues. Old versions may be dropped after a while. Again, the exact versions are subject to change with little if any notice.
In general, you can abbreviate the module tags. If no version is given, the default current version is used. For packages with compiler/MPI/etc. dependencies, if a compiler module or MPI library was previously loaded, the module command will try to load the build of the package matching those dependencies. If you specify the compiler/MPI dependency explicitly, it will attempt to load the corresponding compiler/MPI library for you if needed.
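For illustration (the exact default is whatever is current on the system at the time you run this):
module avail namd            # list the installed namd modules and the current default
module load namd             # abbreviated form: loads the default current version
module load namd/2.10/nocuda # fully qualified form: loads exactly this build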
Please note that versions 2.8 and under are not MPI or GPU aware, hence you MUST use charmrun to run namd2 if you are running an HPC job that will span nodes. Versions 2.9 and higher are built with an MPI-aware charm++, and hence should be started like a standard MPI program using mpirun.
A GPU-aware version is also available, and will ONLY run on nodes with a GPU. You must explicitly select it; module load namd/2.9 will ALWAYS return the non-CUDA/non-GPU enabled version, even if you have previously loaded a cuda module. GPU-enabled versions of namd2 will not work on nodes without a GPU.
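In other words, for version 2.9 you would load one or the other of:
module load namd/2.9/cuda    # GPU/CUDA-enabled build; only runs on nodes with a GPU
module load namd/2.9         # non-CUDA build, even if a cuda module is already loaded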
Usage guidelines and hints
The following guidelines and hints only cover the mechanics of starting the namd2 program, especially on the High Performance Computing clusters. Unfortunately, the Division of Information Technology does not have the expertise to assist in assembling the inputs, etc., to the NAMD program.
Using Older Versions (versions 2.8 or older)
Although we will continue to maintain these older versions of NAMD on the original Deepthought cluster for a while, they are getting long in the tooth and we encourage users to migrate to one of the newer versions. These older versions are NOT supported on Deepthought2.
For the older Division of IT maintained installations of NAMD, versions 2.8 or older, the NAMD program is NOT built with support for MPI, and so you must use charmrun to run NAMD with multiprocess/multinode support. The following example runs a simple case over 12 processor cores:
#!/bin/bash
#SBATCH -n 12
#SBATCH -A test-hi
#SBATCH -t 1:00

# Set up the standard environment
. ~/.profile
SHELL=bash

# Use charmrun for NAMD versions <= 2.8 on DIT systems
NAMD_VERSION=2.8
module load namd/$NAMD_VERSION
NAMD2=`which namd2`

# Directory containing the input files for the alanin example
WORKDIR=/export/lustre_1/payerle/namd/tests/alanin
cd $WORKDIR

# Start namd2 over 12 cores via charmrun, using ssh as the remote shell
charmrun ++remote-shell ssh +p12 $NAMD2 alanin
This is a quick example, so the time limit is set to only 1 minute (which is much longer than needed to complete the job); you will likely need to extend the time in your runs, and of course change the allocation to charge against.
Note: You must give the ++remote-shell ssh arguments to charmrun, otherwise it will attempt to use rsh and fail. The +p12 argument tells charmrun to run the job over 12 cores; this again should be changed (and should match the #SBATCH -n 12 line) for the actual number of cores you intend to use.
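If you would rather not keep the +p argument and the #SBATCH -n line in sync by hand, one possible approach (a sketch relying on the SLURM_NTASKS environment variable, which Slurm sets from the -n value) is:
charmrun ++remote-shell ssh +p${SLURM_NTASKS} $NAMD2 alanin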
Using Newer Versions on CPUs (versions 2.9 or higher)
Starting with version 2.9, the Division of IT builds of charm++ (and thus NAMD) are MPI aware. For these versions, you should not use the charmrun command to start NAMD; instead use the standard mpirun command, as in the example below.
#!/bin/bash
#SBATCH -n 12
#SBATCH -A test-hi
#SBATCH -t 1:00

# Set up the standard environment
. ~/.profile
SHELL=bash

# For newer NAMD builds, version 2.9 and higher, non-cuda
NAMD_VERSION=2.9
module load namd/$NAMD_VERSION

# Directory containing the input files for the alanin example
WORKDIR=/lustre_1/payerle/namd/tests/alanin
cd $WORKDIR

# mpirun picks up the core count and node list from Slurm
mpirun namd2 alanin
Again, this is a short example, so the time limit is only 1 minute; you will likely need to increase this, and change the account to charge against.
Note that we use mpirun to distribute the job among the requested cores. OpenMPI is Slurm-aware, so it will automatically place the namd2 processes on the requested processor cores.
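Should you wish to pass the core count explicitly anyway (not required, since OpenMPI obtains it from Slurm), a sketch using the same Slurm-provided SLURM_NTASKS variable would be:
mpirun -np ${SLURM_NTASKS} namd2 alanin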
Using Newer Versions on GPUs (versions 2.9 or higher)
The Deepthought2 HPC cluster has a significant number of nodes with GPUs. There are far more cores on even a single GPU node than are needed for this simple case, so we only request one node. The GPU-enabled NAMD is also MPI aware, so it should be possible to run on multiple nodes. (If someone succeeds in doing such, please send us your submit script to add to this page.)
#!/bin/bash
#SBATCH -N 1
#SBATCH -A test-hi
#SBATCH -t 1:00
#SBATCH --gres=gpu

# Set up the standard environment
. ~/.profile
SHELL=bash

# For newer NAMD builds, version 2.9 and higher, cuda enabled
NAMD_VERSION=2.9
module load namd/$NAMD_VERSION/cuda
NAMD2=`which namd2`

# Directory containing the input files for the alanin example
WORKDIR=/lustre/payerle/test/namd/alanin
cd $WORKDIR

# Run the GPU-enabled namd2 on the single requested node
$NAMD2 alanin
Note here that we ask for a single node (#SBATCH -N 1) and for a node with GPUs (#SBATCH --gres=gpu), and that we load the GPU-enabled version of NAMD (module load namd/$NAMD_VERSION/cuda).