Minnesota Supercomputing Institute
Available module versions: 2.7-ompi, 2.8-ompi, 2.9-impi, 2.9-libverbs-CUDA, 2.9b1-libverbs, 2.10-ibverbs, 2.11-ibverbs, 2.12-ibverbs, 2.12-ibverbs-smp-CUDA, 2.12-multicore-CUDA, 2.13-ibverbs-smp-CUDA, 2.13-ompi, 2.14-libverbs-CUDA, 2.14-ompi, 3.0-alpha13-netlrts-smp-CUDA
Tuesday, August 29, 2023
NAMD, recipient of a 2002 Gordon Bell Award, is a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems. Based on Charm++ parallel objects, NAMD scales to hundreds of processors on high-end parallel platforms. NAMD is best run on the high-performance systems, where it has been compiled with recent Intel compilers and MPI parallel libraries. Several compiled versions of NAMD are available, each using a different parallelization method.
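To see which NAMD builds are installed on a given system, you can query the module system directly (a minimal sketch using standard module commands; the versions listed will vary by cluster):

# List all installed NAMD modules
module avail namd

# Show what loading a particular build adds to the environment
module show namd/2.14-ompi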
The following is an example job script for running an OpenMPI-based NAMD module under SLURM:
#!/bin/bash -l
#SBATCH -t 24:00:00
#SBATCH --mem-per-cpu=2gb
#SBATCH -N 8
#SBATCH --ntasks-per-node=32

cd $SLURM_SUBMIT_DIR
module load namd/2.14-ompi
mpirun namd2 stmv.namd > stmv.out
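Assuming the script above is saved as namd_ompi.sbatch (the filename is arbitrary) in the same directory as the stmv.namd input, it is submitted and monitored with the standard Slurm commands:

sbatch namd_ompi.sbatch    # submit the job
squeue -u $USER            # check its status in the queue

The IBVerbs builds are launched through charmrun rather than mpirun and require an explicit list of the nodes assigned to the job, as in the following example: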
#!/bin/bash -l
#SBATCH -t 24:00:00
#SBATCH --mem-per-cpu=2gb
#SBATCH -N 4
#SBATCH --ntasks-per-node=8

module load namd/2.12-ibverbs
cd $SLURM_SUBMIT_DIR

# Create a formatted list of nodes
rm -f namd.hostfile
HOSTNAMES=$(scontrol show hostnames)
while IFS= read -r HOST; do echo "host ${HOST}" >> namd.hostfile; done <<< "$HOSTNAMES"

# Create a runscript that loads the namd module on each node
cat > runscript << EOL
#!/bin/bash
module load namd/2.12-ibverbs
\$*
EOL
chmod +x runscript

# Note: the following is a single command; the backslash
# continues it onto the next line.
`which charmrun` ++runscript ./runscript `which namd2` \
    ++nodelist namd.hostfile ++p $SLURM_NPROCS stmv.namd > stmv.out
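For illustration, if Slurm had assigned the hypothetical nodes cn0101 through cn0104 to this four-node job, the loop above would generate a namd.hostfile containing:

host cn0101
host cn0102
host cn0103
host cn0104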
The IBVerbs builds also allow NAMD to run on GPUs, which can offer significant performance improvements over CPU-only calculations in many cases. An example job script for running NAMD on MSI's k40 GPU nodes is shown below.
#!/bin/bash -l
#SBATCH --time=2:00:00
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=24
#SBATCH --mem-per-cpu=2gb
#SBATCH -p k40
#SBATCH --gres=gpu:k40:2

module load namd/2.14-libverbs-CUDA

# Create a formatted list of nodes
rm -f namd.hostfile
HOSTNAMES=$(scontrol show hostnames)
while IFS= read -r HOST; do echo "host ${HOST}" >> namd.hostfile; done <<< "$HOSTNAMES"

# Create a runscript that loads the namd module on each node
cat > runscript << EOL
#!/bin/bash
module load namd/2.14-libverbs-CUDA
\$*
EOL
chmod +x runscript

# Use one fewer process per node than the number of cores
let NCPUS=$SLURM_CPUS_ON_NODE-1
`which charmrun` ++runscript ./runscript `which namd2` \
    ++nodelist namd.hostfile ++n $SLURM_JOB_NUM_NODES ++ppn $NCPUS \
    +idlepoll +devices 0,1 stmv.namd > stmv.out
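The +devices 0,1 flag tells the CUDA-enabled build which GPU device IDs to use on each node, matching the two K40s requested per node with --gres=gpu:k40:2. To confirm the GPUs are in use while the job runs (assuming you can open a shell on one of the job's compute nodes), nvidia-smi will list the processes bound to each device:

nvidia-smi    # look for namd2 processes on GPUs 0 and 1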