ANSYS: Computer-aided engineering software

Contents

  1. Overview of package
    1. General usage
  2. Availability of package by cluster
  3. Licensing
  4. Using fluent with MPI

Overview of package

General information about package
Package: ANSYS
Description: Computer-aided engineering software
For more information: http://www.ansys.com
Categories:
License: Proprietary

General usage information

Ansys is a suite of software for engineering analysis over a range of disciplines, including finite element analysis, structural analysis, and fluid dynamics.

IMPORTANT NOTE: Ansys works with OpenMPI; to use it, pass the flag -mpi=openmpi to fluent.
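
As a hedged sketch, the fluent invocation used in the job scripts later on this page would look like the following with the OpenMPI flag substituted for -mpi=intel (the journal file and node list file are placeholders, as in those scripts):

fluent 3ddp -g -t $SLURM_NTASKS -mpi=openmpi -ssh -cnf="$FLUENTNODEFILE" -i YOUR_JOU_FILE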

NOTE: This is restrictively licensed software. It is currently being made available on the UMD HPC clusters by the generosity of the Dept of Mechanical Engineering.

Available versions of the package ANSYS, by cluster

This section lists the available versions of the package ANSYS on the different clusters.

Available versions of ANSYS on the Zaratab cluster

Version   Module tags   CPU(s) optimized for   GPU ready?
23.1      ansys/23.1    x86_64                 Y
21.2      ansys/21.2    x86_64                 Y
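
To use one of the versions listed above, load the corresponding module tag before running any ANSYS programs (a minimal sketch; pick whichever version you need from the table):

module load ansys/23.1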

Licensing

The ANSYS suite of software is commercially licensed software. It is currently being made available to users of the UMD Deepthought HPC clusters by the generosity of the Dept of Mechanical Engineering.

Using fluent with MPI

To make use of the fluent package within the ANSYS suite in a parallel fashion with a job spanning multiple nodes, you need to provide special arguments to the fluent command. In particular, you would want to provide the -t argument with the number of parallel processes to start, and the -cnf argument with the name of a file listing the nodes to run on.

For the -cnf argument, the node list file can be generated with the scontrol show hostnames command; the exact syntax varies depending on the shell you are using. See the examples below, paying attention to the lines with $FLUENTNODEFILE.

If you are using csh or tcsh, something like the following will work:

#!/bin/tcsh
#SBATCH --ntasks=64
#SBATCH -t 20:00:00
#SBATCH --mem-per-cpu=6000

module load intel
module load ansys

#Get a unique temporary filename to use for our nodelist
set FLUENTNODEFILE=`mktemp`

#Output the nodes to our nodelist file
scontrol show hostnames > $FLUENTNODEFILE

#Display to us the nodes being used
echo "Running on nodes:"
cat $FLUENTNODEFILE

#Run fluent with the requested number of tasks on the assigned nodes
fluent 3ddp -g -t $SLURM_NTASKS -mpi=intel -ssh -cnf="$FLUENTNODEFILE" -i YOUR_JOU_FILE

#Clean up
rm $FLUENTNODEFILE

If you use the Bourne or Bourne-again (bash) shell:

#!/bin/bash
#SBATCH --ntasks=64
#SBATCH -t 20:00:00
#SBATCH --mem-per-cpu=6000

#Get our profile
. ~/.profile

module load intel
module load ansys

#Get a unique temporary filename to use for our nodelist
FLUENTNODEFILE=`mktemp`

#Output the nodes to our nodelist file
scontrol show hostnames > $FLUENTNODEFILE

#Display to us the nodes being used
echo "Running on nodes:"
cat $FLUENTNODEFILE

#Run fluent with the requested number of tasks on the assigned nodes
fluent 3ddp -g -t $SLURM_NTASKS -mpi=intel -ssh -cnf="$FLUENTNODEFILE" -i YOUR_JOU_FILE

#Clean up
rm $FLUENTNODEFILE
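
Either of the above scripts can then be submitted to the scheduler in the usual way with sbatch (a minimal sketch; the script filename is just a placeholder):

sbatch run-fluent.sh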





