Hi Nicolai,

> linking against openmpi-1.4.3, the parallel testsuite fails with this:
> --------------------------------------------------------------------------
> [tornado1:31832] *** An error occurred in MPI_Comm_dup
> [tornado1:31832] *** on communicator MPI_COMM_WORLD
> [tornado1:31832] *** MPI_ERR_COMM: invalid communicator
> [tornado1:31832] *** MPI_ERRORS_ARE_FATAL (your MPI job will now abort)
> --------------------------------------------------------------------------
> mpiexec has exited due to process rank 0 with PID 31832 on
> node tornado1 exiting without calling "finalize". This may
> have caused other processes in the application to be
> terminated by signals sent by mpiexec (as reported here).
> --------------------------------------------------------------------------
>
> See attached testcase.
>
> The reason is that you're calling MPI_Comm_f2c on C MPI communicators in
> nc_create_par. MPI_Comm_f2c has a meaning only on Fortran MPI
> communicators.
>
> Fix is attached. Testsuite and testcase complete with this fix applied.

Thanks very much for diagnosing the problem and providing a fix!  We will
include your fix in the next release, after evaluating it.

--Russ

Russ Rew                                         UCAR Unidata Program
address@hidden                          http://www.unidata.ucar.edu

Ticket Details
===================
Ticket ID: UPZ-187627
Department: Support netCDF
Priority: Urgent
Status: Closed
NOTE: All email exchanges with Unidata User Support are recorded in the Unidata inquiry tracking system and then made publicly available through the web. If you do not want to have your interactions made available in this way, you must let us know in each email you send to us.
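For anyone hitting the same abort: below is a minimal, hypothetical sketch in plain C with MPI (not the actual netCDF source) of the distinction Nicolai points out. MPI_Comm_f2c converts a Fortran integer handle (MPI_Fint) into a C MPI_Comm, so it is only appropriate in a Fortran-facing code path; a communicator that is already a C handle, such as the one passed to nc_create_par, should be used directly.

/*
 * Minimal sketch (not the netCDF source): correct handling of C MPI
 * communicators versus Fortran handles.
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Comm dup_direct, dup_roundtrip;
    MPI_Fint fcomm;

    MPI_Init(&argc, &argv);

    /* Correct C-side usage: duplicate the C handle directly. */
    MPI_Comm_dup(MPI_COMM_WORLD, &dup_direct);

    /* MPI_Comm_f2c belongs in the Fortran-facing path: convert a C handle
     * to a Fortran handle with MPI_Comm_c2f, then back again.  The round
     * trip yields a valid C communicator. */
    fcomm = MPI_Comm_c2f(MPI_COMM_WORLD);
    MPI_Comm_dup(MPI_Comm_f2c(fcomm), &dup_roundtrip);

    /* Feeding a C MPI_Comm straight to MPI_Comm_f2c, as the reported
     * nc_create_par code did, produces an invalid communicator under
     * Open MPI (where MPI_Comm is a pointer, not an int), and the
     * following MPI_Comm_dup then aborts with MPI_ERR_COMM, exactly as
     * in the output quoted above. */

    printf("both MPI_Comm_dup calls succeeded\n");

    MPI_Comm_free(&dup_direct);
    MPI_Comm_free(&dup_roundtrip);
    MPI_Finalize();
    return 0;
}

The failure shows up with Open MPI in particular because its MPI_Comm is a pointer to an opaque structure rather than a plain integer, so misinterpreting a C handle as a Fortran one cannot work; with implementations where MPI_Comm happens to be an int, the same mistake may go unnoticed.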