In my previous post, following Alan's write-up, I used two compiler suites, gcc and icc, to build WRF and CMAQ respectively. Later, because I needed to work with the 2019 meteorological data, and the newer data reportedly requires version 4.0, I tried building WRF 4.0 and WPS 4.0 with icc, and it actually worked. Here I record the problems I ran into and how I solved them.
Note: the environment setup is covered in the earlier posts on the icc environment and the CMAQ environment, so it is not repeated here.
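For quick reference only, and not a substitute for that post, a rough sketch of the kind of variables the configure steps below rely on; every path here is a placeholder that must be replaced with your own install locations:
# placeholder paths; adapt to your own icc / NetCDF installation
export CC=icc CXX=icpc FC=ifort F77=ifort
export NETCDF=/usr/local/netcdf            # WRF's ./configure looks for $NETCDF
export PATH=$NETCDF/bin:$PATH
export LD_LIBRARY_PATH=$NETCDF/lib:$LD_LIBRARY_PATH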
1.Building WRF
The tar file can be found at:
http://www2.mmm.ucar.edu/wrf/src/WRFV4.0.TAR.gz
Version 4.0 can handle the newer meteorological data, so 4.0 it is.
cd Build_WRF
As usual, I downloaded the tarball myself and dropped it straight into the Build_WRF folder, so no wget here.
tar -zxvf WRFV4.0.TAR.gz
cd WRF/
./configure
Please select from among the following Linux x86_64 options:
1. (serial) 2. (smpar) 3. (dmpar) 4. (dm+sm) PGI (pgf90/gcc)
5. (serial) 6. (smpar) 7. (dmpar) 8. (dm+sm) PGI (pgf90/pgcc): SGI MPT
9. (serial) 10. (smpar) 11. (dmpar) 12. (dm+sm) PGI (pgf90/gcc): PGI accelerator
13. (serial) 14. (smpar) 15. (dmpar) 16. (dm+sm) INTEL (ifort/icc)
17. (dm+sm) INTEL (ifort/icc): Xeon Phi (MIC architecture)
18. (serial) 19. (smpar) 20. (dmpar) 21. (dm+sm) INTEL (ifort/icc): Xeon (SNB with AVX mods)
22. (serial) 23. (smpar) 24. (dmpar) 25. (dm+sm) INTEL (ifort/icc): SGI MPT
26. (serial) 27. (smpar) 28. (dmpar) 29. (dm+sm) INTEL (ifort/icc): IBM POE
30. (serial) 31. (dmpar) PATHSCALE (pathf90/pathcc)
32. (serial) 33. (smpar) 34. (dmpar) 35. (dm+sm) GNU (gfortran/gcc)
36. (serial) 37. (smpar) 38. (dmpar) 39. (dm+sm) IBM (xlf90_r/cc_r)
40. (serial) 41. (smpar) 42. (dmpar) 43. (dm+sm) PGI (ftn/gcc): Cray XC CLE
44. (serial) 45. (smpar) 46. (dmpar) 47. (dm+sm) CRAY CCE (ftn/cc): Cray XE and XC
48. (serial) 49. (smpar) 50. (dmpar) 51. (dm+sm) INTEL (ftn/icc): Cray XC
52. (serial) 53. (smpar) 54. (dmpar) 55. (dm+sm) PGI (pgf90/pgcc)
56. (serial) 57. (smpar) 58. (dmpar) 59. (dm+sm) PGI (pgf90/gcc): -f90=pgf90
60. (serial) 61. (smpar) 62. (dmpar) 63. (dm+sm) PGI (pgf90/pgcc): -f90=pgf90
64. (serial) 65. (smpar) 66. (dmpar) 67. (dm+sm) INTEL (ifort/icc): HSW/BDW
68. (serial) 69. (smpar) 70. (dmpar) 71. (dm+sm) INTEL (ifort/icc): KNL MIC
Enter selection [1-71] :
Each option is explained at https://xg1990.com/blog/archives/190; in short:
The first half of each option describes the compiler and runtime environment; pick whatever matches your machine. The second half is the parallelization mode:
serial means serial execution;
smpar means shared-memory parallelism (the shared memory option), i.e. OpenMP; most multi-core machines support this;
dmpar means distributed-memory parallelism (the distributed memory option), i.e. MPI; it is mainly for clusters and unnecessary on a single machine;
dm+sm means using OpenMP and MPI together. Pick whatever fits your situation; the safest choice is serial, but the resulting executables run the slowest.
To restate: serial is single-threaded, slow at run time but the simplest to build; it is rarely used for real work, though fine for a first try.
dmpar is distributed-memory parallelism, like several machines forming a cluster where each CPU has its own memory.
smpar is shared-memory parallelism, where all the cores of one machine work on the same shared memory (this is the OpenMP case).
If you are on a shared server, ask the administrator which mode to use; on your own PC, pick smpar (see the run-time sketch below).
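A minimal sketch of how the two parallel modes differ once wrf.exe is built; the thread/process counts are placeholders, and the dmpar case assumes an MPI launcher such as mpirun is on your PATH:
# smpar build: one process, several OpenMP threads
export OMP_NUM_THREADS=4
./wrf.exe
# dmpar build: several MPI processes instead
mpirun -np 4 ./wrf.exe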
After choosing the compiler option, you will be prompted for the nesting option; normally the basic option (1) is all you need:
Enter selection [1-71] : 14
------------------------------------------------------------------------
Compile for nesting? (1=basic, 2=preset moves, 3=vortex following) [default 1]: 1
Configuration successful!
At this point, remember to edit WRF/Makefile and change every -openmp to -qopenmp (recent Intel compilers have replaced the old -openmp flag with -qopenmp).
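If the flag appears in many places, a sed one-liner handles it in bulk; this assumes the old spelling only occurs as -openmp in that file, so double-check with grep afterwards (in some WRF versions the OpenMP flag also lives in configure.wrf, which is worth grepping too):
sed -i 's/-openmp/-qopenmp/g' Makefile   # swap the old spelling for the new one
grep -n "openmp" Makefile                # confirm only -qopenmp remains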
./compile em_real >& compile.log &
tail -f compile.log
This compile is genuinely slow; go do something else instead of watching it. It took roughly 20 minutes for me.
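If the build aborts, grepping the log is the quickest way to see where it failed (a generic check, nothing specific to this setup):
grep -i "error" compile.log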
When the compilation finishes, it should have produced four executables:
main/ndown.exe
main/real.exe
main/tc.exe
main/wrf.exe
Once the compilation completes, to check whether it was successful, you need to look for the executables in the WRF/main directory.
These executables are linked to 2 different directories. You can choose to run WRF from either directory.
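A quick way to confirm, run from the WRF directory (file sizes will differ on your system):
ls -las main/*.exe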
2.Building WPS
After the WRF model is built, the next step is building the WPS program (if you plan to run real cases, as opposed to idealized cases). The WRF model MUST be properly built prior to trying to build the WPS programs.
If you do not already have the WPS source code, move to your Build_WRF directory, download that file and unpack it. Then go into the WPS directory and make sure the WPS directory is clean.
The tar file can be found at:
http://www2.mmm.ucar.edu/wrf/src/WPSV4.0.TAR.gz
I used version 4.0 here as well; my inner perfectionist insists that it match the WRF version!!! (p.s. strictly speaking this is not required)
Download it and put it into the Build_WRF folder.
cd Build_WRF/
tar -zxvf WPSV4.0.TAR.gz
cd WPS/
./clean
./configure
Please select from among the following supported platforms.
1. Linux x86_64, gfortran (serial)
2. Linux x86_64, gfortran (serial_NO_GRIB2)
3. Linux x86_64, gfortran (dmpar)
4. Linux x86_64, gfortran (dmpar_NO_GRIB2)
5. Linux x86_64, PGI compiler (serial)
6. Linux x86_64, PGI compiler (serial_NO_GRIB2)
7. Linux x86_64, PGI compiler (dmpar)
8. Linux x86_64, PGI compiler (dmpar_NO_GRIB2)
9. Linux x86_64, PGI compiler, SGI MPT (serial)
10. Linux x86_64, PGI compiler, SGI MPT (serial_NO_GRIB2)
11. Linux x86_64, PGI compiler, SGI MPT (dmpar)
12. Linux x86_64, PGI compiler, SGI MPT (dmpar_NO_GRIB2)
13. Linux x86_64, IA64 and Opteron (serial)
14. Linux x86_64, IA64 and Opteron (serial_NO_GRIB2)
15. Linux x86_64, IA64 and Opteron (dmpar)
16. Linux x86_64, IA64 and Opteron (dmpar_NO_GRIB2)
17. Linux x86_64, Intel compiler (serial)
18. Linux x86_64, Intel compiler (serial_NO_GRIB2)
19. Linux x86_64, Intel compiler (dmpar)
20. Linux x86_64, Intel compiler (dmpar_NO_GRIB2)
21. Linux x86_64, Intel compiler, SGI MPT (serial)
22. Linux x86_64, Intel compiler, SGI MPT (serial_NO_GRIB2)
23. Linux x86_64, Intel compiler, SGI MPT (dmpar)
24. Linux x86_64, Intel compiler, SGI MPT (dmpar_NO_GRIB2)
25. Linux x86_64, Intel compiler, IBM POE (serial)
26. Linux x86_64, Intel compiler, IBM POE (serial_NO_GRIB2)
27. Linux x86_64, Intel compiler, IBM POE (dmpar)
28. Linux x86_64, Intel compiler, IBM POE (dmpar_NO_GRIB2)
29. Linux x86_64 g95 compiler (serial)
30. Linux x86_64 g95 compiler (serial_NO_GRIB2)
31. Linux x86_64 g95 compiler (dmpar)
32. Linux x86_64 g95 compiler (dmpar_NO_GRIB2)
33. Cray XE/XC CLE/Linux x86_64, Cray compiler (serial)
34. Cray XE/XC CLE/Linux x86_64, Cray compiler (serial_NO_GRIB2)
35. Cray XE/XC CLE/Linux x86_64, Cray compiler (dmpar)
36. Cray XE/XC CLE/Linux x86_64, Cray compiler (dmpar_NO_GRIB2)
37. Cray XC CLE/Linux x86_64, Intel compiler (serial)
38. Cray XC CLE/Linux x86_64, Intel compiler (serial_NO_GRIB2)
39. Cray XC CLE/Linux x86_64, Intel compiler (dmpar)
40. Cray XC CLE/Linux x86_64, Intel compiler (dmpar_NO_GRIB2)
Enter selection [1-40] :
You should be given a list of various options for compiler types, whether to compile in serial or parallel, and whether to compile ungrib with GRIB2 capability. Unless you plan to create extremely large domains, it is recommended to compile WPS in serial mode, regardless of whether you compiled WRF in parallel.
It is also recommended that you choose a GRIB2 option (make sure you do not choose one that states NO_GRIB2). You may choose a non-GRIB2 option, but most data is now in GRIB2 format, so it is best to choose this option. You can still run GRIB1 data when you have built with GRIB2 support.
Choose the option that lists a compiler to match what you used to compile WRF, serial, and GRIB2.
Note: The option number will likely be different than the number you chose to compile WRF.
Since we need the newer data, it is best to avoid any of the *_NO_GRIB2 options.
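One reminder that is not part of the original walkthrough: the GRIB2 options need the jasper, libpng and zlib libraries, and the WPS configure script honors the JASPERLIB/JASPERINC environment variables if jasper lives in a non-standard location. The paths below are placeholders; set them before running ./configure:
export JASPERLIB=/usr/local/jasper/lib       # placeholder install prefix
export JASPERINC=/usr/local/jasper/include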
Enter selection [1-40] : 21
------------------------------------------------------------------------
Configuration successful. To build the WPS, type: compile
------------------------------------------------------------------------
Testing for NetCDF, C and Fortran compiler
This installation NetCDF is 64-bit
C compiler is 64-bit
Fortran compiler is 64-bit
The metgrid.exe and geogrid.exe programs rely on the configure.wps file, which points the WPS build system to the location of the I/O libraries from the WRF model: WRF_DIR = ../WRF. Remember to make sure that both WRF and WPS sit directly under the Build_WRF directory, so this relative path resolves.
sudo nano configure.wps
WRF_LIB = -L$(WRF_DIR)/external/io_grib1 -lio_grib1 \
-L$(WRF_DIR)/external/io_grib_share -lio_grib_share \
-L$(WRF_DIR)/external/io_int -lwrfio_int \
-L$(WRF_DIR)/external/io_netcdf -lwrfio_nf \
-L$(NETCDF)/lib -lnetcdff -lnetcdf -qopenmp
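Before compiling, a quick sanity check that the WRF I/O libraries configure.wps points at actually exist (the paths assume the Build_WRF layout above, with WPS and WRF side by side):
ls ../WRF/external/io_netcdf/libwrfio_nf.a
ls ../WRF/external/io_grib1/libio_grib1.a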
You can now compile WPS. The compilation should take a few minutes, and the ongoing compilation can be monitored with tail.
./compile >& compile.log &
[1] 19929
tail -f compile.log
# ...a long stream of output follows; the tail end looks like this:
make[1]: Leaving directory '/home/pc/Build_WRF/WPS/util/src'
if [ -h int2nc.exe ] ; then \
/bin/rm -f int2nc.exe ; \
fi ; \
if [ -h ../int2nc.exe ] ; then \
/bin/rm -f ../int2nc.exe ; \
fi ; \
if [ -e src/int2nc.exe ] ; then \
ln -sf src/int2nc.exe . ; \
fi
# The output appears to hang here, but the compile job itself has finished (see the Done line below); tail -f simply keeps following the log, so press Ctrl+C to get the prompt back:
^C
[1]+ Done ./compile &> compile.log
Once the compilation completes, to check whether it was successful, you need to look for three main executables in the WPS top-level directory. Then verify that they are not zero-sized.
ls -las *.exe
geogrid.exe -> geogrid/src/geogrid.exe
metgrid.exe -> metgrid/src/metgrid.exe
ungrib.exe -> ungrib/src/ungrib.exe
All three executables are there, and the symlinks are not shown in red (a red symlink in ls output would mean the link target is missing, i.e. that program failed to build).
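If your terminal does not color broken symlinks, one way to confirm that each link actually resolves (ls will complain about any link whose target is missing):
ls -lL *.exe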