On Tue, Apr 24, 2007 at 02:30:39PM -0600, Ed Hartnett wrote:
> However, another question: have you considered using netCDF-4? It does
> not have the limits of the 64-bit offset format, and supports parallel
> I/O, as well as a number of other features (groups, compound data
> types) which might be helpful in organizing really large data sets.

Hi Ed,

The CDF-1 and CDF-2 file formats appear to be quite robust in the face
of client failures. Greg S, at least, has observed file corruption with
the HDF5 file format during parallel I/O if a client dies at a
particular time. As I understand it, it's hard to devise a solution for
the HDF5 file format that is both rock-solid robust *and* delivers
high performance.

In parallel-netcdf land we also like the CDF-1 and CDF-2 file formats:
they are easy to work with from an MPI-IO perspective. Also, because all
the metadata is written out in define mode, there is very little chance
of file corruption from any failure in data mode.

Thanks
==rob

--
Rob Latham
Mathematics and Computer Science Division    A215 0178 EA2D B059 8CDF
Argonne National Lab, IL USA                 B29D F333 664A 4280 315B
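
As a rough illustration of the define-mode / data-mode split described
above, here is a minimal sketch using the parallel-netcdf C API (file
name, sizes, and variable names are made up for illustration; error
checking is omitted). All metadata is committed to the file header at
ncmpi_enddef(), before any rank enters data mode, so a client failure
during the collective write cannot corrupt the header:

    /* Sketch only: compile with mpicc, link against -lpnetcdf. */
    #include <mpi.h>
    #include <pnetcdf.h>

    int main(int argc, char **argv)
    {
        int rank, nprocs, ncid, dimid, varid;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

        /* Define mode: create a CDF-2 (64-bit offset) file and lay out
         * the dimensions and variables. */
        ncmpi_create(MPI_COMM_WORLD, "example.nc",
                     NC_CLOBBER | NC_64BIT_OFFSET, MPI_INFO_NULL, &ncid);
        ncmpi_def_dim(ncid, "x", (MPI_Offset)nprocs * 100, &dimid);
        ncmpi_def_var(ncid, "data", NC_DOUBLE, 1, &dimid, &varid);

        /* ncmpi_enddef() writes the header; from here on only array
         * data changes, so the metadata stays intact even if a rank
         * dies mid-write. */
        ncmpi_enddef(ncid);

        /* Data mode: each rank writes its own 100-element slab
         * collectively. */
        MPI_Offset start = (MPI_Offset)rank * 100, count = 100;
        double buf[100];
        for (int i = 0; i < 100; i++) buf[i] = rank + i * 0.01;
        ncmpi_put_vara_double_all(ncid, varid, &start, &count, buf);

        ncmpi_close(ncid);
        MPI_Finalize();
        return 0;
    }

Because the header is fixed after ncmpi_enddef(), a crash in data mode
can at worst leave stale values in the data region rather than a
corrupted file.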