Ansys

Contents

Summary and Version Information

Package Ansys
Description Ansys Fluent: Computer-aided engineering software
Categories  Engineering, Numerical Analysis, Research

Version  Module tag  Availability*                                         GPU Ready  Notes
17.1     ansys/17.1  Non-HPC Glue systems, Deepthought2 HPCC, 64bit-Linux  N          Restrictively Licensed
17.2     ansys/17.2  Non-HPC Glue systems, Deepthought2 HPCC, 64bit-Linux  N          Restrictively Licensed
18.2     ansys/18.2  Non-HPC Glue systems, Deepthought2 HPCC, 64bit-Linux  N          Restrictively Licensed

Notes:
*: A package labelled as "available" on an HPC cluster can be used on the compute nodes of that cluster. Software not listed as available on an HPC cluster is generally still available on that cluster's login nodes (assuming it is available for the appropriate OS version, e.g. RedHat Linux 6 for the two Deepthought clusters). This is because the compute nodes do not use AFS; they instead carry local copies of the AFS software tree, and we only install packages on them as requested. Contact us if you need a version listed as not available on one of the clusters.

In general, you need to prepare your Unix environment to be able to use this software. To do this, either:

  • tap TAPFOO
OR
  • module load MODFOO

where TAPFOO and MODFOO are tags from the tap and module columns above, respectively. The tap command will print a short usage text (use -q to suppress this; suppression is needed in startup dot files); you can get a similar text with module help MODFOO. See the documentation on the tap and module commands for more information.

For packages that are libraries against which other codes are built, see the section on compiling codes for more help.

Tap/module commands listed with a version of current set up what we consider the most current stable and tested version of the package installed on the system. The exact version is subject to change with little if any notice, and might be platform dependent. Versions labelled new represent a newer version of the package that is still being tested by users; if stability is not a primary concern, you are encouraged to use it. Versions labelled old set up an older version of the package; use these only if the newer versions are causing issues. Old versions may be dropped after a while. Again, the exact versions are subject to change with little if any notice.

In general, you can abbreviate the module tags. If no version is given, the default current version is used. For packages with compiler/MPI/etc. dependencies, if a compiler or MPI module was previously loaded, the module command will try to load the build of the package matching that compiler or MPI library. Conversely, if you specify the compiler/MPI dependency explicitly, it will attempt to load the corresponding compiler/MPI module for you if needed.
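As a hypothetical session illustrating the above (module names follow the tags in the table; the exact output depends on the system), loading a specific version versus the default might look like:

```shell
# Load a specific version explicitly:
module load ansys/17.2

# Or load the default (current) version:
module load ansys

# Show which modules are currently loaded:
module list
```

Note that these commands are only meaningful on a system with the Environment Modules setup described above.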

Licensing

The ANSYS suite is commercially licensed software. It is currently made available to users of the UMD Deepthought HPC clusters through the generosity of the Department of Mechanical Engineering.

Using fluent with MPI

To run the fluent package within the ANSYS suite in parallel, with a job spanning multiple nodes, you need to pass special arguments to the fluent command. In particular, you will want to provide:

  • -g : to instruct fluent NOT to use a graphical environment
  • -t $SLURM_NTASKS : to start the number of tasks requested of Slurm
  • -mpi=intel : to ensure the correct MPI libraries are used
  • -ssh : to use ssh instead of rsh to connect to nodes
  • -cnf=NODEFILE : to tell fluent which nodes to use

In the -cnf argument, you need to provide the name of a file containing a list of the nodes to use. This file can be generated with the scontrol show hostnames command; the exact syntax for capturing its output varies depending on the shell you are using. See the examples below, paying attention to the lines involving $FLUENTNODEFILE.

If you are using csh or tcsh, use something like:

#!/bin/tcsh
#SBATCH --ntasks=64
#SBATCH -t 20:00:00
#SBATCH --mem-per-cpu=6000

module load intel
module load ansys

#Get a unique temporary filename to use for our nodelist
set FLUENTNODEFILE=`mktemp`

#Output the nodes to our nodelist file
scontrol show hostnames > $FLUENTNODEFILE

#Display to us the nodes being used
echo "Running on nodes:"
cat $FLUENTNODEFILE

#Run fluent with the requested number of tasks on the assigned nodes
fluent 3ddp -g -t $SLURM_NTASKS -mpi=intel -ssh -cnf="$FLUENTNODEFILE" -i YOUR_JOU_FILE

#Clean up
rm $FLUENTNODEFILE

If you are using the Bourne shell (sh) or Bourne-again shell (bash), use something like:

#!/bin/bash
#SBATCH --ntasks=64
#SBATCH -t 20:00:00
#SBATCH --mem-per-cpu=6000

#Get our profile
. ~/.profile

module load intel
module load ansys

#Get a unique temporary filename to use for our nodelist
FLUENTNODEFILE=$(mktemp)

#Output the nodes to our nodelist file
scontrol show hostnames > $FLUENTNODEFILE

#Display to us the nodes being used
echo "Running on nodes:"
cat $FLUENTNODEFILE

#Run fluent with the requested number of tasks on the assigned nodes
fluent 3ddp -g -t $SLURM_NTASKS -mpi=intel -ssh -cnf="$FLUENTNODEFILE" -i YOUR_JOU_FILE

#Clean up
rm $FLUENTNODEFILE
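One optional refinement to the bash version above (a sketch, not something fluent requires): use trap to guarantee the temporary node file is removed even if the script exits early. The node names and the printf line below are placeholders standing in for scontrol show hostnames so the sketch can run outside a Slurm job; in a real job script, use the scontrol command instead.

```shell
#!/bin/bash
#Create a unique temporary filename for the nodelist
NODEFILE=$(mktemp)

#Remove the file on exit, whether the script succeeds or fails
trap 'rm -f "$NODEFILE"' EXIT

#Placeholder for: scontrol show hostnames > "$NODEFILE"
printf 'compute-a-1\ncompute-a-2\n' > "$NODEFILE"

#Display the nodes being used
echo "Running on nodes:"
cat "$NODEFILE"
```

With the trap in place, the explicit rm at the end of the script is no longer needed.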