# AMR-Wind on Kestrel

AMR-Wind is a massively parallel, block-structured adaptive-mesh, incompressible flow solver for wind turbine and wind farm simulations. The primary applications for AMR-Wind are: performing large-eddy simulations (LES) of atmospheric boundary layer (ABL) flows, simulating wind farm turbine-wake interactions using actuator disk or actuator line models for turbines, and as a background solver when coupled with a near-body solver (e.g., Nalu-Wind) with overset methodology to perform blade-resolved simulations of multiple wind turbines within a wind farm. For more information see the AMR-Wind documentation.

## Installation of AMR-Wind on GPU nodes

AMR-Wind can be installed by following the instructions here. On Kestrel GPU nodes, this can be achieved by first loading the following modules:

```shell
module restore
source /nopt/nrel/apps/gpu_stack/
ml gcc
ml PrgEnv-nvhpc
ml nvhpc/24.1
ml cray-libsci/
ml cmake/3.27.9
ml python/3.9.13
```

Verify that the following modules are loaded by running `module list`:

```
libfabric/           craype-x86-genoa     curl/8.6.0
python/3.9.13        cray-dsmml/0.2.2     cray-libsci/
gcc/10.1.0           craype-network-ofi   nvhpc/24.1
cmake/3.27.9         libxml2/2.10.3       gettext/0.22.4
craype/2.7.30        cray-mpich/8.1.28    PrgEnv-nvhpc/8.5.0
```

You can clone the latest version of AMR-Wind from here. Once cloned, `cd` into the AMR-Wind directory and create a `build` folder.
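The clone-and-setup steps above can be sketched as follows (the `Exawind/amr-wind` GitHub URL is the project's public repository; `--recursive` also fetches submodules such as AMReX, which AMR-Wind builds against):

```shell
# Clone AMR-Wind together with its submodules, then create an
# out-of-source build directory for the cmake configure step
git clone --recursive https://github.com/Exawind/amr-wind.git
cd amr-wind
mkdir -p build && cd build
```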

You can create a file with the cmake instructions,

```shell
vim conf_instructions
```

and copy in the content below. The leading `cmake .. \` line completes the invocation (the MPI wrapper paths correspond to the `cray-mpich/8.1.28` and NVIDIA toolchain loaded above):

```shell
cmake .. \
    -DMPI_CXX_COMPILER=/opt/cray/pe/mpich/8.1.28/ofi/nvidia/23.3/bin/mpicxx \
    -DMPI_C_COMPILER=/opt/cray/pe/mpich/8.1.28/ofi/nvidia/23.3/bin/mpicc \
    -DMPI_Fortran_COMPILER=/opt/cray/pe/mpich/8.1.28/ofi/nvidia/23.3/bin/mpifort \
    -DCMAKE_BUILD_TYPE=Release
```

You can then execute the file from inside the `build` directory:

```shell
bash conf_instructions
```

Once the cmake step is done, build and install:

```shell
make -j
make install -j
```

You should now have a successful installation of AMR-Wind.

At runtime, make sure to follow this sequence of module loads.

```shell
module restore
source /nopt/nrel/apps/gpu_stack/
ml PrgEnv-nvhpc
ml cray-libsci/
```

## Running on the GPUs using modules

NREL provides AMR-Wind modules for both CPUs and GPUs, built with different toolchains. The build instructions above are for users who want to build their own version. For the best performance, it is recommended to run AMR-Wind on GPU nodes.
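To discover which prebuilt AMR-Wind modules are currently available (the exact module names and toolchain suffixes vary over time), Lmod's standard search commands can be used; a minimal sketch:

```shell
# List AMR-Wind modules visible in the current module tree
module avail amr-wind
# Search all module trees, including those not currently in MODULEPATH
module spider amr-wind
```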

Here is a sample script for submitting an AMR-Wind application run on multiple GPU nodes, with the user's input file and mesh grid in the current working directory.

Sample job script: Kestrel - Full GPU node
```shell
#!/bin/bash
#SBATCH --time=1:00:00
#SBATCH --partition=gpu-h100
#SBATCH --account=<user-account>
#SBATCH --nodes=2
#SBATCH --gres=gpu:h100:4
#SBATCH --exclusive

module restore
source /nopt/nrel/apps/gpu_stack/
module load PrgEnv-nvhpc
module load amr-wind/main-craympich-nvhpc

srun -K1 -n 16 --gpus-per-node=4 amr_wind abl_godunov-512.i >& ablGodunov-512.log
```
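The resource mapping in the script above can be checked with quick arithmetic: 2 nodes with 4 GPUs each gives 8 GPUs total, so `-n 16` places 2 MPI ranks on each GPU. A small illustrative snippet (the variable names are just placeholders mirroring the Slurm options):

```shell
# Sanity-check the rank layout implied by the sample script
NODES=2            # --nodes
GPUS_PER_NODE=4    # --gpus-per-node
RANKS=16           # srun -n
echo "$(( RANKS / NODES )) ranks per node"                    # prints "8 ranks per node"
echo "$(( RANKS / (NODES * GPUS_PER_NODE) )) ranks per GPU"   # prints "2 ranks per GPU"
```

Adjust `-n` accordingly if you change the node or GPU counts.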