The WRF modeling system consists of the WRF Preprocessing System (WPS), the WRF model, WRFDA, WRF-Chem, WRF-Hydro, and a handful of utility programs. The WPS source code is separate from the other WRF components and must be compiled for real-data cases. The WRF model contains the source code for a Fortran interface to ESMF and the source for FFTPACK.
The WRF model has been successfully ported to a number of Unix-based machines. The WRF developers do not have access to all of them and must rely on outside users and vendors to supply the required configuration information for the compiler and loader options. Below is a list of the supported combinations of hardware and software for the WRF modeling system.
Vendor | Hardware | OS | Compiler |
---|---|---|---|
Cray | XC30 Intel | Linux | Intel |
Cray | XE AMD | Linux | Intel |
IBM | Power Series | AIX | vendor |
IBM | Intel | Linux | Intel/PGI/gfortran |
SGI | IA64/Opteron | Linux | Intel |
COTS* | IA32 | Linux | Intel/PGI/gfortran/g95/PathScale |
COTS | IA64/Opteron | Linux | Intel/PGI/gfortran/PathScale |
Mac | Power Series | Darwin | xlf/g95/PGI/Intel |
Mac | Intel | Darwin | gfortran/PGI/Intel |
NEC | NEC | Linux | vendor |
Fujitsu | FX10 Intel | Linux | vendor |
*COTS: Commercial Off-The-Shelf systems (e.g., commodity PCs and clusters)
The WRF model may be built to run on a single-processor machine, a shared-memory machine (that uses the OpenMP API), a distributed memory machine (with the appropriate MPI libraries), or on a distributed cluster (utilizing both OpenMP and MPI).
For in-depth information about the software that controls the WRF build mechanism, see the WRF Software chapter of this Users' Guide.
The WRF modeling system code is mostly written in standard Fortran 90 (and uses a few Fortran 2003 capabilities). The software layer, RSL, which sits between WRF/WRFDA and the MPI interface, is written in C. WPS makes direct calls to the MPI libraries for distributed-memory message passing. There are also ancillary programs written in C to perform file parsing and file construction, which are required for default builds of the WRF modeling code.
Because of this makeup, a gfortran compiler, gcc, and cpp must be installed to build the WRF code, regardless of whether the code will be built with the gfortran/GNU option or something else. It is recommended to use a Fortran compiler that supports the Fortran 2003 standard (gfortran version 4.6+). The build mechanism uses several scripting languages, including Perl, C shell, and Bourne shell. Several traditional UNIX text/file processing utilities are used, and therefore the following are mandatory:
ar | head | sed |
awk | hostname | sleep |
cat | ln | sort |
cd | ls | tar |
cp | make | touch |
cut | mkdir | tr |
expr | mv | uname |
file | nm | wc |
grep | printf | which |
gzip | rm | m4 |
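Before building, it is worth a quick sanity check that the required GNU tools are installed and on the PATH (a minimal sketch; versions and paths will vary by system):
> which gfortran
> which cpp
> which gcc
> gfortran --version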
Note
If any of the following libraries fails to properly build, it will be necessary to contact either a systems administrator at your institution, or a support team for the specific library for help. THE WRF MODEL DEVELOPERS AND SUPPORT TEAM DO NOT HAVE THE RESOURCES TO SUPPORT LIBRARIES ON INDIVIDUAL SYSTEMS!
The netCDF package (version 3.6.1+) is the only mandatory library for building the WRF modeling system. NetCDF source code, precompiled binaries, and documentation are available from the Unidata website. To utilize compression capabilities, use netCDF 4.0 or later. Note that compression requires the use of HDF5.
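If unsure whether netCDF is already available, newer netCDF installations include an nc-config utility that reports the version; a quick sketch, assuming the netCDF bin directory is on the PATH:
> which ncdump
> nc-config --version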
- The entire step-by-step recipe for building the WRF and WPS packages can be found at the How to Compile WRF website. This page includes
- System environment tests
- Steps for installing libraries
- Library compatibility tests
- Steps for building WRF and WPS
- Instructions for downloading static geography data (used for the WPS geogrid program)
- Instructions for downloading sample real-time data
- To compile WRF system components on a Linux or Darwin system that has access to multiple compilers, link the correct external libraries. For example, do not link the libraries built with PathScale when compiling the WRF components with gfortran. The same options used to build the netCDF libraries must be used when building the WRF code (32 vs 64 bit, assumptions about underscores in the symbol names, etc.).
- If netCDF-4 is used, be sure it is installed without activating parallel I/O based on HDF5. The WRF modeling system can use either the classic data model from netCDF-3 or the compression options supported in netCDF-4. Beginning with V4.4, the ability to write compressed netCDF-4 files in parallel is available. With this option, performance is slower than with pnetcdf, but can be notably faster than regular netCDF on parallel file systems. Compression also produces files significantly smaller than those generated by pnetcdf, so expect file sizes to differ when compression is used.
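To check how an existing netCDF installation was built, nc-config can also report whether the netCDF-4/HDF5 features are enabled; a sketch, assuming nc-config is available:
> nc-config --has-nc4
> nc-config --has-hdf5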
After installing netCDF, the environment variables PATH and NETCDF should be set so that the model finds the necessary library files during the build. The following are examples, and the actual paths may differ from user to user (if unsure, check with a systems administrator at your institution).
> setenv PATH /usr/local/netcdf/bin:$PATH
> setenv NETCDF /usr/local/netcdf
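The examples above use csh syntax; for bash-family shells, the equivalent settings would be (same assumed installation path):
> export PATH=/usr/local/netcdf/bin:$PATH
> export NETCDF=/usr/local/netcdf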
To run distributed memory WRF jobs, an MPI library (e.g., MPICH or OpenMPI) is required. Most multi-processor machines come preconfigured with a version of MPI, so it is unlikely that users will need to install this package themselves; however, instructions for installing this library are available on the How to Compile WRF website. If problems occur, it may be necessary to have a systems administrator at your institution install this library. A working installation of MPI is required prior to building WRF with distributed memory. Either MPI-1 or MPI-2 is acceptable. An MPI library may already exist on the system. Issue the following commands, and if paths are returned, the library is already available.
> which mpif90
> which mpicc
> which mpirun
Ensure that the paths are set up to point to the MPI "lib," "include," and "bin" directories. As with the netCDF libraries, MPI must be built consistently with the WRF source code.
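Most MPI distributions can report which underlying compiler their wrappers invoke, which helps confirm consistency with the compiler chosen for WRF; a sketch (the exact flag varies by distribution: MPICH uses -show, while Open MPI uses -showme):
> mpif90 -show
> mpif90 --version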
- If planning to run real-data simulations with GRIB Edition 2 input data (which is likely), the following libraries are required by the WPS ungrib program, and therefore must be installed prior to configuring WPS.
- zlib
- libpng
- jasper
Note
Users are encouraged to engage their system administrators for the installation of these packages so that traditional library paths and include paths are maintained.
Paths to user-installed compression libraries are handled in the configure.wps file by the "COMPRESSION_LIBS" and "COMPRESSION_INC" variables. The simplest way to ensure all library files are found by the WPS configuration is to install all three in a common directory. For example, if the libraries will be installed in /usr/local, create a directory inside /usr/local called something like grib2. See the instructions below each library to ensure they are installed in the correct location.
- JasPer (an implementation of the JPEG2000 standard for "lossy" compression)
- Download the JasPer package and unpack it.
- Go into the unpacked JasPer directory (e.g., cd jasper-1.900.1)
- Issue the following commands to install (Note: this follows the above example that places all library files in the grib2 directory; the path may vary depending on the system and user preferences).
> ./configure --prefix=/usr/local/grib2
> make
> make install
Note
The GRIB2 libraries expect to find include files in "jasper/jasper.h", so it may be necessary to manually create a "jasper" subdirectory in the "include" directory created by the JasPer installation, and manually link header files there.
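A minimal sketch of that manual fix, following the /usr/local/grib2 example above (only needed if the JasPer installation did not already create the jasper subdirectory):
> mkdir /usr/local/grib2/include/jasper
> ln -s /usr/local/grib2/include/jasper.h /usr/local/grib2/include/jasper/jasper.h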
- PNG (compression library for "lossless" compression)
- Download the PNG package and unpack it.
- Go into the unpacked directory (e.g., cd libpng-1.2.50)
- Issue the following commands to install (Note: this follows the above example that places all library files in the grib2 directory; the path may vary depending on the system and user preferences).
> ./configure --prefix=/usr/local/grib2
> make
> make install
- zlib (a compression library used by the PNG library)
- Go to "The current release is publicly available here" section to download the zlib package, and then unpack it.
- Go into the unpacked directory (e.g., cd zlib-1.2.7)
- Issue the following commands to install (Note: this follows the above example that places all library files in the grib2 directory; the path may vary depending on the system and user preferences).
> ./configure --prefix=/usr/local/grib2
> make
> make install
- Setting UNIX Environment Variables
To ensure the JasPer, PNG, and zlib libraries can be located by the ungrib build, some environment variable settings should be issued.
As an alternative to manually editing the COMPRESSION_LIBS and COMPRESSION_INC variables in the configure.wps file, users may set the environment variables "JASPERLIB" and "JASPERINC" to the directories holding the JasPer library and include files before configuring the WPS; for example, if the JasPer libraries were installed in /usr/local/grib2, one might use the following commands (in csh or tcsh).
> setenv JASPERLIB /usr/local/grib2/lib
> setenv JASPERINC /usr/local/grib2/include
If zlib and PNG libraries are not in a standard path that will be checked automatically by the compiler, the paths to these libraries can be added on to the JasPer environment variables; for example, if the PNG libraries were installed in /usr/local/libpng-1.2.29 and the zlib libraries were installed in /usr/local/zlib-1.2.3, one might use the following commands after having previously set JASPERLIB and JASPERINC (in csh or tcsh).
> setenv JASPERLIB "${JASPERLIB} -L/usr/local/libpng-1.2.29/lib -L/usr/local/zlib-1.2.3/lib" > setenv JASPERINC "${JASPERINC} -I/usr/local/libpng-1.2.29/include -I/usr/local/zlib-1.2.3/include"
It may also be necessary to set the following (e.g., in csh or tcsh):
> setenv LDFLAGS -L/usr/local/grib2/lib
> setenv CPPFLAGS -I/usr/local/grib2/include
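Before configuring the WPS, it is worth confirming that all three libraries and their headers ended up in the expected place; following the /usr/local/grib2 example, the lib directory should contain libjasper, libpng, and libz files, and the include directory should contain their headers:
> ls /usr/local/grib2/lib
> ls /usr/local/grib2/include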
To output WRF model data (wrfout* files) in GRIB1 format, a complete source library is included with the software release (provided by Todd Hutchinson). However, when linking the WPS, WRF model, and WRFDA data streams together, always use the netCDF format.
The WRF code's build mechanism tries to determine the architecture of the computing system, and then presents options to select the preferred build method. For example, if using a Linux machine, it determines whether the machine is 32 or 64 bit, and then prompts for the desired usage of processors (such as serial, shared memory, or distributed memory). From the available compiling options in the build mechanism, only select an option for a compiler that is installed on the system.
The How to Compile WRF website provides the sequence of steps required to build the WRF and WPS codes (though the instructions are specifically given for tcsh and GNU compilers). Alternatively, use the following steps to compile WRF.
- Obtain the WRF system code (which includes WRFDA, WRF-Chem, and WRF-Hydro)
- Always get the latest version of the code if you are not continuing a long project or duplicating previous work. Note that versions prior to V4.0 are no longer supported.
- Move to the WRF directory (note that it may be called something else, e.g., WRFV4.4).
> cd WRF
- Type the following in the command line.
> ./configure
- Select the appropriate compiler and processor usage. Only choose an option for a compiler that is installed on the system.
- serial : computes with a single processor. This is only useful for small cases with domain size of about 100x100 grid spaces.
- smpar : Symmetric Multi-processing/Shared Memory Parallel (OpenMP). This option is only recommended for those who are knowledgeable with computation and processing. It works most reliably for IBM machines.
- dmpar : Distributed Memory Parallel (MPI). This is the recommended option.
- dm+sm : Distributed Memory with Shared Memory (e.g., MPI across nodes with OpenMP within a node). Performance is typically better with the dmpar-only option, and this option is not recommended for those without extensive computation/processing experience.
- Select the nesting option for the type of simulation desired.
- 0 = no nesting
- 1 = basic nesting (standard, this is the most common choice)
- 2 = nesting with a prescribed set of moves
- 3 = nesting that allows a domain to follow a vortex, specific to tropical cyclone tracking
- Optional configuration options include:
- ./configure -d : for debugging. This option removes optimization, which is useful when running a debugger (such as gdb or dbx).
- ./configure -D : for bounds checking and some additional exception handling, plus debugging, with optimization removed. Only PGI, Intel, and gfortran (GNU) compilers have been set up to use this option.
- ./configure -r8 : for double precision. This only works with PGI, Intel, and gfortran (GNU) compilers.
After configuring, there should be a new file in the top-level WRF directory called "configure.wrf."
Type the following in the command line to compile (always send the standard error and output to a log file, using the ">&" syntax; this is useful if the compile fails).
> ./compile em_test_case >& compile.log
|br| where "em_test_case" is the type of case to be built (real-data or specific ideal case). Available options are:
em_real | real-data simulations |
3D idealized cases | |
2D idealized cases | |
em_scm_xy | 1D idealized case |
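The list above may not be exhaustive for every release; running the compile script with no arguments prints the full set of supported case names for the installed source code:
> ./compile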
- For additional information on idealized cases, see "Initialization for Idealized Cases" in the WRF Initialization chapter of this guide.
- Compiling the code should take anywhere from ~10-60 minutes.
Note
Multiple processors can be used to speed up the compiling process. Simply add "-j N" to the compile command, where N is the number of processors (e.g., ./compile em_real -j 4 >& compile.log). Testing has shown that using more than ~6 processors is unnecessary. Also note that if compiling errors occur, it is recommended to compile with only a single processor. The default is to use two processors, so it is necessary to explicitly specify a single processor in this case (e.g., ./compile em_real -j 1 >& compile.log).
After compiling completes, check the end of the compile log to determine whether it was successful. Additionally, if successful, the following executables should be present in the WRF/main directory. Type the command
> ls -ls main/*.exe
- For a real-data compile : ndown.exe, real.exe, tc.exe, wrf.exe
- For an idealized compile : ideal.exe, wrf.exe
- These executables are linked to two different directories, and can be run from either location.
- WRF/run
- WRF/test/em_<case> (where <case> is the case chosen in the compile command above)
- If the code fails to compile, open the log file (e.g., compile.log) and search for the word "Error" with a capital "E." Typically the first error listed in the file is the culprit of the failure, and all additional errors are a result of the initial problem. This is not always the case if multiple processors were used to compile. If the error is not clear, try recompiling with a single processor (e.g., ./compile em_real -j 1 >& compile.log) to ensure the first error listed is the root cause. Make sure to clean and reconfigure the code before recompiling (see the bullet below about recompiling); a quick way to pull out the first error is shown after this list.
- Many compiling inquiries have been addressed on the WRF & MPAS-A Users' Forum. If unsure how to address the error, try searching for the error on the forum for useful hints.
- To ensure all libraries and compilers are installed correctly, follow the instructions and tests on the How to Compile WRF website before recompiling.
- If the issue has been resolved, clean and configure the code again before recompiling.
> ./clean -a
> ./configure
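To pull out the first error from a compile log, the standard utilities listed earlier are sufficient; for example:
> grep -n Error compile.log | head -1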
The top-level WRF directory consists of the following files and sub-directories.
arch | a directory containing files specific to configuration |
chem | a directory containing files specific to building and running WRF-Chem |
clean | a user-executable script to clean the model code prior to recompiling |
compile | a user-executable script to build the WRF model |
configure | a user-executable script to declare configuration settings prior to compiling |
doc | a directory containing various informational documents of specific applications for WRF |
dyn_em | a directory containing files specific to the dynamical core mediation-layer and model-layer subroutines |
external | a directory containing files and sub-directories for building additional external libraries needed for WRF |
frame | a directory containing files related to the WRF software framework-specific modules |
hydro | a directory containing files specific to building and running WRF-Hydro |
inc | a directory containing various header (.h) files, and include (.inc) files generated by the Registry during the WRF compile |
LICENSE.txt | a text file containing WRF licensing information |
main | a directory containing the 'main' WRF programs with symbolic links for executable files in the test/em_* and run/ directories |
Makefile | a file used as input to the UNIX 'make' utility during compiling |
phys | a directory containing WRF model layer routines for physics |
README | a text file containing information about the WRF model version, a public domain notice, and information about releases prior to V4.0 - when code repository information is not available |
README.md | a text file necessary for keeping the code in a .git repository system, and containing important information for users. |
Registry | a directory containing files that control many of the compile-time aspects of the WRF code |
run | a directory containing symbolic links for compiled executables, along with all tables and text files that may be necessary during run-time |
share | a directory containing mediation layer routines, including WRF I/O modules that call the I/O API |
test | a directory containing subdirectories for all the real and idealized cases; inside each of those directories are the same files and executables that are in the 'run' directory |
tools | a directory containing the program that reads the appropriate Registry.X file (e.g., Registry.EM for a basic WRF compile) and auto-generates files in the 'inc' directory |
var | a directory containing files and subdirectories specific for building and running WRFDA |
wrftladj | a directory containing files specific to building and running WRFPLUS (a program affiliated with WRFDA) |
Information on required libraries specific to WRFDA, WRF-Chem, and WRF-Hydro, as well as instructions for compiling, can be found at the following links.
- WRF Data Assimilation chapter of this Users' Guide
- WRF Chemistry website
- WRF-Hydro Modeling System website
The WRF Preprocessing System uses a build mechanism similar to that used by the WRF model. External libraries for geogrid and metgrid are limited to those required by the WRF model, since the WPS uses the WRF model's implementations of the WRF I/O API; consequently, WRF must be compiled prior to installation of the WPS so that the I/O API libraries in the WRF external directory will be available to WPS programs.
The only library required to build the WRF model (and WPS) is netCDF; however, the ungrib program requires three compression libraries for GRIB Edition 2 support (if support for GRIB2 data is not needed, ungrib can be compiled without these compression libraries). Where WRF adds a software layer between the model and the communications package, the WPS programs geogrid and metgrid make MPI calls directly. Most multi-processor machines come preconfigured with a version of MPI, so it is unlikely that users need to install this package by themselves. See the :ref:`Required and Optional Libraries section` for additional information.
To get around portability issues, the NCEP GRIB libraries, w3 and g2, have been included in the WPS distribution. The original versions of these libraries are available for download from NCEP at http://www.nco.ncep.noaa.gov/pmb/codes/GRIB2/. The specific tar files to download are g2lib and w3lib. Because the ungrib program requires modules from these files, they are not suitable for usage with a traditional library option during the link stage of the build.
The How to Compile WRF website provides the sequence of steps required to build the WRF and WPS codes (though the instructions are specifically given for tcsh and GNU compilers). Alternatively, use the following steps to compile WPS.
Obtain the WPS code
- Always get the latest version of the code if you are not continuing a long project or duplicating previous work. Note that versions prior to V4.0 are no longer supported.
Move to the WPS directory (note that it may be called something else, e.g., WPSV4.4).
> cd WPS
Set the WRF_DIR environment variable. The configure script uses this setting to link back to the compiled version of WRF. The following is a C shell example. The path and name of the WRF directory may vary.
> setenv WRF_DIR ../WRF
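For bash-family shells, the equivalent setting would be (the path may differ):
> export WRF_DIR=../WRF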
Type the following in the command line.
> ./configure
A list of supported compilers on the current system architecture should be presented, as well as the following options for each.
- serial : Programs are built to run with a single processor; this is the recommended option
- serial_NO_GRIB2 : Same as above, but without GRIB2 support (i.e., without compression libraries installed)
- dmpar : Programs are built for Distributed Memory Parallel (MPI) execution
- dmpar_NO_GRIB2 : Same as above, but without GRIB2 support (i.e., without compression libraries installed)
Note
Unless domain sizes will be very large (thousands by thousands of grid spaces), it is almost always recommended to choose a serial option (even if WRF was compiled with a distributed memory or shared memory option). WPS executables run quickly, and parallel computing is not typically necessary. If a dmpar option is chosen, note that the ungrib program must still be run with a single processor, as there is no support for parallel computing in ungrib.
Choose one of the configure options. After configuring, the file "configure.wps" should exist in the WPS directory.
Type the following in the command line (always send the standard error and output to a log file, using the ">&" syntax; this is useful if the compile fails).
> ./compile >& compile.log
The WPS compile should be relatively quick, compared to compiling WRF. If successful, the following executables should appear in the WPS directory, linked from their corresponding source code directories.
geogrid.exe -> geogrid/src/geogrid.exe
ungrib.exe -> ungrib/src/ungrib.exe
metgrid.exe -> metgrid/src/metgrid.exe
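To confirm the three links exist and point back to their source directories, a simple listing suffices:
> ls -ls *.exe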
If the code fails to compile, open the log file (e.g., compile.log) and search for the word "Error" with a capital "E." Typically the first error listed in the file is the culprit of the failure, and all additional errors are a result of the initial problem.
geogrid and metgrid Fail
- Make sure WRF compiled successfully.
- WPS geogrid and metgrid executables make use of the external I/O libraries in the WRF/external/ directory. The libraries are built when WRF is compiled, and if WRF was not built properly, the geogrid and metgrid programs are unable to compile.
- Check that the same compiler (and version) is being used to build WPS as was used to build WRF.
- Check that the same netCDF (and version) is being used to build WPS as was used to build WRF.
- Is the path for WRF_DIR set properly? Check the path and name of the WRF directory.
> echo $WRF_DIR
ungrib Fail
- Make sure the jasper, zlib, and libpng libraries are correctly installed (if compiling with GRIB2 support).
- Make sure the correct path is being used for the following lines in "configure.wps."
> COMPRESSION_LIBS = -L/$path-to-ungrib-libraries/lib -ljasper -lpng -lz
> COMPRESSION_INC = -I/$path-to-ungrib-libraries/include
It is often necessary to clean the code before recompiling, but not always.
- The code should be cleaned when modifications have been made to the configure.wrf (or configure.wps) file, or when any changes have been made to a WRF/Registry/* file. If so, issue ./clean -a prior to recompiling.
- Modifications to subroutines within the code require a recompile, but DO NOT require the code to be cleaned or reconfigured before recompiling. Simply recompile, which should be much faster than a clean compile.