Ed,

On Oct 6, 2009, at 11:47 PM, Ed Hartnett wrote:
As I understand it, this problem comes up because some users have parallel programs that use sequential netCDF. The user program includes mpi.h, but sequential netCDF has redefined MPI_Comm and MPI_Info.
This matches my understanding.
I think the smart thing to do might be to have a separate header, netcdf_par.h, which is included by people who want to use parallel I/O. This way, the sequential header (netcdf.h) does not need to contain anything related to MPI.
I would extend this to:
1) having a separate (private) header netcdf_par.h that includes mpi.h
2) making the configure script put "#include <netcdf_par.h>" in netcdf.h if HDF5 is compiled with MPI
3) only building NetCDF parallel I/O if HDF5 is compiled with MPI.
This way all users include netcdf.h, but its parallel features are available only if they can actually be used.
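As a rough sketch, the separate header proposed above might look like this. The name netcdf_par.h comes from the thread; the include guard and the exact placement of the declarations are assumptions, though the signatures shown match the documented nc_create_par/nc_open_par API:

```c
/* netcdf_par.h -- sketch only; actual contents may differ.
 * Included (directly or via netcdf.h) only in MPI builds, so
 * sequential users never see mpi.h or real MPI types. */
#ifndef NETCDF_PAR_H
#define NETCDF_PAR_H

#include <mpi.h>     /* real MPI_Comm / MPI_Info, not stand-in typedefs */
#include <netcdf.h>  /* sequential API, free of any MPI dependency */

#ifdef __cplusplus
extern "C" {
#endif

/* The parallel entry points take real MPI types, so they live here. */
extern int nc_create_par(const char *path, int cmode,
                         MPI_Comm comm, MPI_Info info, int *ncidp);
extern int nc_open_par(const char *path, int mode,
                       MPI_Comm comm, MPI_Info info, int *ncidp);

#ifdef __cplusplus
}
#endif

#endif /* NETCDF_PAR_H */
```

With this split, a program that includes mpi.h and plain netcdf.h no longer collides with redefined MPI_Comm/MPI_Info, because the sequential header never mentions them.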
Just an idea.
Right now, I also build nc_create_par/nc_open_par functions whether the build is for parallel or not. I think I will have to change that too.
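That change could be as simple as wrapping the definitions in a configure-set guard, so the sequential library does not export the parallel entry points at all. USE_PARALLEL is a hypothetical macro name here, standing in for whatever configure actually defines when HDF5 has MPI support:

```c
/* Sketch: compile the parallel entry points only in MPI builds.
 * USE_PARALLEL is an assumed configure-defined macro. */
#ifdef USE_PARALLEL

int nc_create_par(const char *path, int cmode,
                  MPI_Comm comm, MPI_Info info, int *ncidp)
{
    /* ... create the file through the parallel HDF5 path ... */
}

#endif /* USE_PARALLEL */
```

A sequential build then fails at link time (rather than at run time) if a program mistakenly calls the parallel API, which makes the misconfiguration obvious early.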
I will try and get this change in before the 4.1 release, but I am on travel right now...
Thank you! -- Constantine