OpenFOAM#

OpenFOAM Installation#

Building OpenFOAM with cray-mpich and gcc#

Instructions for installing OpenFOAM are available here.

In these instructions, you will clone the OpenFOAM repository; we will refer to the resulting directory as $OPENFOAM.
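
For example, the repository can be cloned and the variable set as follows. This is a minimal sketch; the URL and branch are assumptions and should match the version you intend to build.

# Clone the OpenFOAM source (adjust the URL/branch to the version you need)
git clone https://develop.openfoam.com/Development/openfoam.git
export OPENFOAM=$PWD/openfoam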

To build OpenFOAM with cray-mpich, two files need to be edited.

  1. $OPENFOAM/etc/bashrc

    In this file, the variable WM_MPLIB must be set to MPICH. Search for the line where the variable is exported and replace it with

    export WM_MPLIB=MPICH
    
  2. $OPENFOAM/etc/config.sh/mpi

    This file defines where MPICH is located on the system. Search for the MPICH definition block and replace it with

    # Point OpenFOAM at the system Cray MPICH installation
    # (adjust the version and paths to match the cray-mpich module on your system)
    export MPI_ARCH_PATH=/opt/cray/pe/mpich/8.1.28/ofi/gnu/10.3
    export LD_LIBRARY_PATH="${MPI_ARCH_PATH}/lib:${LD_LIBRARY_PATH}"
    export PATH="${MPI_ARCH_PATH}/bin:${PATH}"
    export FOAM_MPI=mpich-8.1.28
    export MPI_HOME=/opt/cray/pe/mpich/8.1.28/ofi/gnu/10.3
    #export FOAM_MPI=mpich2-1.1.1p1
    #export MPI_HOME=$WM_THIRD_PARTY_DIR/$FOAM_MPI
    #export MPI_ARCH_PATH=$WM_THIRD_PARTY_DIR/platforms/$WM_ARCH$WM_COMPILER/$FOAM_MPI
    
    
    _foamAddPath    $MPI_ARCH_PATH/bin
    
    
    # 64-bit on OpenSuSE 12.1 uses lib64 others use lib
    _foamAddLib     $MPI_ARCH_PATH/lib$WM_COMPILER_LIB_ARCH
    _foamAddLib     $MPI_ARCH_PATH/lib
    
    
    _foamAddMan     $MPI_ARCH_PATH/share/man
    ;;  # keep the closing ';;' of the MPICH case entry
    

Before you install OpenFOAM, make sure to load PrgEnv-gnu, which provides gcc and cray-mpich. Make sure the same module is loaded at runtime.
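
A minimal build sequence, assuming the edits above have been made and the build is run in parallel from the $OPENFOAM directory, might look like:

# Load the GNU programming environment (provides gcc and cray-mpich)
module load PrgEnv-gnu

# Source the modified OpenFOAM environment
source $OPENFOAM/etc/bashrc

# Confirm the MPI settings point at the Cray MPICH installation
echo $WM_MPLIB $FOAM_MPI $MPI_ARCH_PATH

# Build OpenFOAM in parallel
cd $OPENFOAM
./Allwmake -j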

Running OpenFOAM cases using Modules#

There are several modules for builds of OpenFOAM. After logging in to a CPU node on Kestrel, please use the module avail command to view available versions.

CPU $ module avail openfoam
----------------------------- /nopt/nrel/apps/cpu_stack/modules/default/application -----------------------------
   openfoam/v2306-openmpi-gcc      openfoam/9-craympich (D)    openfoam/11-craympich
   openfoam/v2406-craympich-gcc    openfoam/9-ompi             openfoam/12-intelmpi
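
For example, to use the v2406 cray-mpich build listed above (a minimal sketch, assuming the module places the OpenFOAM binaries on your PATH):

module load openfoam/v2406-craympich-gcc
which blockMesh    # should resolve to a binary inside the loaded OpenFOAM installation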

We encourage users to switch to newer versions and to perform strong and weak scaling tests for their setup before submitting a new large job. According to some user reports, the application can scale very poorly beyond 2 nodes.

Sample job script: Kestrel
#!/bin/bash
#SBATCH --job-name=myOpenFOAMjob
#SBATCH --account=<your-account-name>
#SBATCH --output=foamOutputLog.out
#SBATCH --error=foamErrorLog.out
#SBATCH --mail-user=<yourEmailAddress>@nrel.gov 
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=104 # set number of MPI ranks per node
#SBATCH --cpus-per-task=1 # set number of OpenMP threads per MPI rank
#SBATCH --time=04:00:00


module load openfoam/<version>

# Decompose the case; the number of subdomains is set in system/decomposeParDict
decomposePar

# Run the solver in parallel; the rank count given to srun must match the decomposition
srun -n 200 rhoReactingBuoyantFoam -parallel >> log.h2

# Reconstruct selected fields over the requested time range
reconstructPar -time 0:5000 -fields '(H2 X_H2)'
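
To carry out the scaling tests recommended above, the same case can be submitted at several node counts. The sketch below assumes the job script above is saved as submit_openfoam.sh (a hypothetical name); for each node count the case must be re-decomposed and the srun rank count adjusted to match.

for NODES in 1 2 4; do
    sbatch --nodes=${NODES} --job-name=foam_scaling_${NODES}nodes submit_openfoam.sh
done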

Installing additional OpenFOAM packages#

Additional packages built on top of the OpenFOAM API can be installed after loading a compatible module. As an example, we show how to install the openFuelCell2 package.

# Download or clone the required package
$ git clone https://github.com/openFuelCell2/openFuelCell2.git

$ cd openFuelCell2

# Request an interactive node for compiling in parallel
$ salloc --account=<your-account-name> --time=00:30:00 --nodes=1 --ntasks-per-core=1 --ntasks-per-node=104 --cpus-per-task=1 --partition=debug

# Load the module compatible with your package
$ module load openfoam/v2306-openmpi-gcc

# Compile the application following the official instructions from the developers, e.g.
$ cd src
$ ./Allwmake -j -prefix=${PWD}

# Test
$ cd ../run/SOFC/
$ make mesh
$ export NPROCS=4
$ make decompose
$ make parallel
$ make run