Ansys Fluent


Ansys/Fluent

  • Description: ANSYS is general-purpose engineering simulation software used to model interactions across physics disciplines, including structural mechanics, vibration, fluid dynamics, heat transfer, and electromagnetics.
  • Vendor: Ansys
  • Application: Fluent
  • Version: 2019R1
  • Website: http://ansys.com
  • License: floating license; only a limited feature set is supported.

Module Load Command

Use the following command to set up access to the Ansys/Fluent software:

module load ansys/fluent

Creating a Fluent Journal File

The journal file is the definition file given to the fluent command. It is written in a dialect of Lisp called Scheme and contains all the instructions that are to be executed during the run. A basic form of this file is as follows:

# -----------------------------------------------------------
# SAMPLE JOURNAL FILE
#
# read case file (*.cas.gz) that had previously been prepared
file/read-case mycase.cas.gz
# initialize flowfield
solve/initialize/initialize-flow
# run 100 iterations
solve/iterate 100
# write data
file/write-data mycase.dat.gz
# exit fluent
exit
#------------------------------------------------------------

Save the journal file under a name of your choice. In this simple example, we read in the mycase.cas.gz case file, which we had previously prepared (it includes all the boundary conditions and models). We then initialize the flowfield and solve for 100 iterations. After the 100 iterations have been performed, the generated data is written to the file mycase.dat.gz. We assume here that 100 iterations will suffice for a converged solution.
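The sample journal above can be written out directly from the shell with a here-document; the filename fluent.jou is only an example:

```shell
# Write the sample journal file from the shell (filename is an example).
cat > fluent.jou <<'EOF'
file/read-case mycase.cas.gz
solve/initialize/initialize-flow
solve/iterate 100
file/write-data mycase.dat.gz
exit
EOF
```

The quoted 'EOF' delimiter keeps the shell from expanding anything inside the journal text.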

A script for running Ansys/Fluent on Teton is shown below.

Running Ansys/Fluent

Do not run Ansys/Fluent on a login node; it will be killed.

Ansys/Fluent should be run from the /gscratch directory. The standard output, standard error, and, by default, all data generated by the job will be placed in the directory where the sbatch command was issued. Running Ansys/Fluent from the /home directory can quickly exhaust a user's home quota.
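A typical workflow, then, is to stage the case file, journal file, and batch script in a directory under /gscratch and submit from there. A minimal sketch, with example names throughout:

```shell
# Sketch: stage a run directory before submitting (names are examples).
# On the cluster, use /gscratch/<your userid>/my_fluent_job instead of "./my_fluent_job".
RUN_DIR=./my_fluent_job
mkdir -p "$RUN_DIR"
# Copy your inputs in, then submit from that directory:
#   cp mycase.cas.gz fluent.jou fluent.sbatch "$RUN_DIR"
#   cd "$RUN_DIR" && sbatch fluent.sbatch
echo "staged $RUN_DIR"
```

Submitting from the run directory ensures the Slurm output files and the Fluent data files land on /gscratch rather than in your home directory.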

Interactive Ansys/Fluent

NOTE: This mode is best for testing your journal file.

You can run Ansys/Fluent from an interactive Slurm job as shown below. The following command starts an interactive job with the specified resources.

srun -A <Your project name> -p <Desired partition> -t <Desired run time> --nodes=2 --ntasks-per-node=32 --pty /bin/bash

Change the above command to fit your needs. Once it executes, you will be placed on the head node of your job. You can then load Ansys/Fluent as shown.

module load ansys/fluent

Run the Ansys/Fluent command as shown below. This establishes connections to the rest of your compute nodes and drops you into Ansys/Fluent interactive mode.

fluent <model> -slurm -platform=intel -mpi=ibmmpi -t$SLURM_NTASKS -tm$SLURM_NTASKS -ssh -g -pib
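Inside the job, Slurm sets $SLURM_NTASKS to the total task count (2 nodes × 32 tasks per node = 64 in the srun example above), and that value is what the -t and -tm options receive. A sketch of how the command expands, using 3ddp as an example model name and 64 as an example task count:

```shell
# Show how -t/-tm are derived from SLURM_NTASKS.
# Inside a real job, Slurm sets SLURM_NTASKS for you; 64 is an example fallback.
SLURM_NTASKS=${SLURM_NTASKS:-64}
CMD="fluent 3ddp -slurm -platform=intel -mpi=ibmmpi -t${SLURM_NTASKS} -tm${SLURM_NTASKS} -ssh -g -pib"
echo "$CMD"
```

With these values the launcher starts one Fluent process per Slurm task across the allocated nodes.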

Note that without the "-i" option you are placed into an interactive fluent session. This allows you to enter fluent commands and have them executed immediately.

Batch Ansys/Fluent

Here is an example batch script. For more information on submitting batch scripts, see the documentation on running jobs.

To run Ansys/Fluent in batch mode using a journal file, create a text file called fluent.sbatch (or a name of your liking) containing:

#!/bin/bash
### This is a general SLURM script. You'll need to make modifications for this to
### work with the ansys/fluent software. Remember that the .bashrc
### file will get executed on each node upon login and any settings in this script
### will be in addition to, or will override, the system bashrc file settings. Users will
### find it advantageous to use only the specific modules they want or
### specify a certain PATH environment variable, etc. If you have questions,
### please contact the ARCC at arcc-info@uwyo.edu for help.

### Informational text is usually indicated by "###". Don't uncomment these lines.
### Lines beginning with "#SBATCH" are SLURM directives. They tell SLURM what to do.
### For example, #SBATCH --job-name my_job tells SLURM that the name of the job
### is "my_job". Don't remove the "#SBATCH".

### Job Name
#SBATCH --job-name=<your job name>

### Set the partition to select compute nodes from.
### This is normally moran,teton or teton,moran depending on node requirements.
#SBATCH --partition=<Your partition selection>

### Declare an account for the job to run under
#SBATCH --account=<your project name>

### Standard output stream files have a default name of "slurm_<jobid>.out".
### However, this can be changed using the options below. If you would like
### stdout and stderr to be combined, omit the "SBATCH -e" option below.
###SBATCH -o stdout_file
###SBATCH -e stderr_file

### Mailing options
#SBATCH --mail-type=BEGIN,END,FAIL
#SBATCH --mail-user=<your email address>

### Specify resources:
### 2 nodes, 16 processors (cores) per node. Assumes the older 16-core nodes, i.e. Moran.
### For Teton nodes you should set "ntasks-per-node" to 32.
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=16

### Set max walltime (days-hours:minutes:seconds)
#SBATCH --time=0-01:00:00

### Load needed modules
module load ansys/fluent

### Change to the /gscratch directory
cd /gscratch/<your userid>

### Command normally given on the command line
fluent <model> -slurm -platform=intel -mpi=ibmmpi -t$SLURM_NTASKS -tm$SLURM_NTASKS -ssh -g -pib -i <journal file>

Note that you must modify any information between the "< ... >" entries.

The "<journal file>" should be replaced with the file name of your journal file as outlined above. If you intend to use the Moran nodes, you will need to remove the "-platform=intel" option.

Back to HPC Installed Software