| Package: | ANSYS |
|---|---|
| Description: | Computer-aided engineering software |
| For more information: | http://www.ansys.com |
| Categories: | |
| License: | Proprietary |
Ansys is a suite of software for engineering analysis over a range of disciplines, including finite element analysis, structural analysis, and fluid dynamics.
IMPORTANT NOTE: Ansys works with OpenMPI; use the flag -mpi=openmpi.
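For example (a minimal sketch mirroring the full command shown later, but with the OpenMPI flag; the journal file name is a placeholder):

fluent 3ddp -g -t $SLURM_NTASKS -mpi=openmpi -ssh -cnf="$FLUENTNODEFILE" -i YOUR_JOU_FILE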
NOTE: This is restrictively licensed software. It is currently being made available on the UMD HPC clusters by the generosity of the Dept of Mechanical Engineering.
This section lists the available versions of the package ANSYS on the different clusters.
| Version | Module tags | CPU(s) optimized for | GPU ready? |
|---|---|---|---|
| 23.1 | ansys/23.1 | x86_64 | Y |
| 21.2 | ansys/21.2 | x86_64 | Y |
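To select a particular version, load the corresponding module tag from the table above; for example:

module load ansys/23.1

Loading ansys without a version (as in the scripts below) typically selects the cluster's default version.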
The ANSYS suite is commercially licensed software. It is currently being made available to users of the UMD Deepthought HPC clusters by the generosity of the Dept of Mechanical Engineering.
To make use of the fluent package within the ANSYS suite in a parallel fashion, with a job spanning multiple nodes, you need to provide special arguments to the fluent command. In particular, you would want to provide the arguments:

- -g : instructs fluent NOT to use a graphical environment
- -t $SLURM_NTASKS : starts the number of tasks requested of Slurm
- -mpi=intel : ensures the correct MPI libraries are used
- -ssh : uses ssh instead of rsh to connect to nodes
- -cnf=NODEFILE : tells fluent which nodes to use

In the -cnf argument, you need to provide the name of a file containing a list of the nodes to use. This file can be generated with the scontrol show hostnames command, illustrated below; the exact syntax varies depending on the shell you are using. See the examples further down, paying attention to the lines with $FLUENTNODEFILE.
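As an illustration (the compact node-list string here is hypothetical), scontrol show hostnames expands Slurm's host-list notation into one hostname per line; inside a batch job, running it with no argument expands $SLURM_JOB_NODELIST:

$ scontrol show hostnames "compute-a[01-03]"
compute-a01
compute-a02
compute-a03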
If you are using csh or tcsh, use something like the following:
#!/bin/tcsh
#SBATCH --ntasks=64
#SBATCH -t 20:00:00
#SBATCH --mem-per-cpu=6000
module load intel
module load ansys
#Get a unique temporary filename to use for our node list
set FLUENTNODEFILE=`mktemp`
#Output the nodes to our nodelist file
scontrol show hostnames > $FLUENTNODEFILE
#Display the nodes being used
echo "Running on nodes:"
cat $FLUENTNODEFILE
#Run fluent with the requested number of tasks on the assigned nodes
fluent 3ddp -g -t $SLURM_NTASKS -mpi=intel -ssh -cnf="$FLUENTNODEFILE" -i YOUR_JOU_FILE
#Clean up
rm $FLUENTNODEFILE
If you use the Bourne or Bourne-again (bash) shell:
#!/bin/bash
#SBATCH --ntasks=64
#SBATCH -t 20:00:00
#SBATCH --mem-per-cpu=6000
#Source our profile to set up the environment (e.g. the module command)
. ~/.profile
module load intel
module load ansys
#Get a unique temporary filename to use for our node list
FLUENTNODEFILE=`mktemp`
#Output the nodes to our nodelist file
scontrol show hostnames > $FLUENTNODEFILE
#Display the nodes being used
echo "Running on nodes:"
cat $FLUENTNODEFILE
#Run fluent with the requested number of tasks on the assigned nodes
fluent 3ddp -g -t $SLURM_NTASKS -mpi=intel -ssh -cnf="$FLUENTNODEFILE" -i YOUR_JOU_FILE
#Clean up
rm $FLUENTNODEFILE
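Either script would then be submitted as a batch job in the usual way; assuming it was saved as fluent_job.sh (a hypothetical name):

sbatch fluent_job.sh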