NOTE: The netcdf-hdf mailing list is no longer active. The list archives are made available for historical reasons.
Hi Ed,

> How does this code look?  This is what I'm timing to get the baseline
> answer: how fast can HDF5 write a file of longs?

I added a couple of comments about the datatypes you use below, but
otherwise the code below looks good.

	Quincey

> Thanks!
>
> Ed
>
> #define NDIMS 3
> #define XLEN 2000
> #define YLEN 300
> #define ZLEN 500
> #define HDF5_FILE "/tmp/a1.h5"
> #define VAR_NAME "V"
>
> hid_t hdfid, mem_spaceid, file_spaceid, datasetid, plistid;
> hsize_t h5dim[] = {XLEN, YLEN, ZLEN}, h5count[] = {1, YLEN, ZLEN};
> hssize_t h5start[] = {0, 0, 0};
> hsize_t h5dimmax[] = {H5S_UNLIMITED, YLEN, ZLEN}, chunksize[NDIMS];
> int *data;
> int i;
>
> /* Allocate memory for data and fill it with a phoney value. */
> {
>    size_t len = YLEN*ZLEN*sizeof(int);
>    if (!(data = (int *)malloc(len)))
>       BAIL(-2);
>    for (i=0; i<YLEN*ZLEN; i++)
>       data[i] = i;
> }
>
> /* Create an HDF5 file, with an unlimited dimension, and write
>    the data that way. */
> {
>    /* Create the file and dataset. */
>    if ((hdfid = H5Fcreate(HDF5_FILE, H5F_ACC_TRUNC,
>                           H5P_DEFAULT, H5P_DEFAULT)) < 0)
>       BAIL(-1);
>    h5dim[0] = 0;
>    h5dim[1] = YLEN;
>    h5dim[2] = ZLEN;
>    if ((file_spaceid = H5Screate_simple(NDIMS, h5dim, h5dimmax)) < 0)
>       BAIL(-3);
>    if ((plistid = H5Pcreate(H5P_DATASET_CREATE)) < 0)
>       BAIL(-10);
>    chunksize[0] = 1;
>    chunksize[1] = YLEN;
>    chunksize[2] = ZLEN;
>    if (H5Pset_chunk(plistid, NDIMS, chunksize) < 0)
>       BAIL(-11);
>    if ((datasetid = H5Dcreate(hdfid, VAR_NAME, H5T_STD_I32BE,
>                               file_spaceid, plistid)) < 0)

Although using 'H5T_STD_I32BE' is a very "apples-to-apples" comparison
with nc3, it does mitigate one of the advantages of HDF5 - writing in
the machine's native datatype without [potentially] forcing a
conversion to big-endian data on disk.  It would be more "HDF5-like"
to use H5T_NATIVE_INT here.

>       BAIL(-4);
>    H5Sclose(file_spaceid);
>    H5Pclose(plistid);
>
>    /* Now write the data.  Use the same mem space for all
>       writes.  This memspace is only big enough to hold one
>       record. */
>    h5dim[0] = 1;
>    h5dim[1] = YLEN;
>    h5dim[2] = ZLEN;
>    if ((mem_spaceid = H5Screate_simple(NDIMS, h5dim, NULL)) < 0)
>       BAIL(-3);
>    for (h5start[0] = 0; h5start[0] < XLEN; h5start[0]++)
>    {
>       h5dim[0] = h5start[0] + 1;
>       if (H5Dextend(datasetid, h5dim) < 0)
>          BAIL(-3);
>       if ((file_spaceid = H5Dget_space(datasetid)) < 0)
>          BAIL(-3);
>       if (H5Sselect_hyperslab(file_spaceid, H5S_SELECT_SET, h5start,
>                               NULL, h5count, NULL) < 0)
>          BAIL(-3);
>       if (H5Dwrite(datasetid, H5T_STD_I32BE, mem_spaceid,
>                    file_spaceid, H5P_DEFAULT, data))

Also, the 'H5T_STD_I32BE' here is technically not a good idea because
you are "lying" to the HDF5 library about the datatype in memory;
using H5T_NATIVE_INT here is definitely correct.

>          BAIL(-5);
>       H5Sclose(file_spaceid);
>    }
>
>    /* Clean up. */
>    H5Sclose(mem_spaceid);
>    H5Dclose(datasetid);
>    H5Fclose(hdfid);
> }
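For reference, here is a minimal, self-contained sketch of the
native-datatype variant Quincey suggests.  It assumes the HDF5 1.6-era
API used in the thread (the five-argument H5Dcreate); the file name
and the single-record write are illustrative only, not part of Ed's
benchmark:

#include <stdlib.h>
#include <hdf5.h>

#define NDIMS 3
#define YLEN 300
#define ZLEN 500

int
main(void)
{
   hid_t fileid, spaceid, plistid, datasetid;
   hsize_t dim[NDIMS] = {1, YLEN, ZLEN};
   hsize_t dimmax[NDIMS] = {H5S_UNLIMITED, YLEN, ZLEN};
   int *data;
   int i;

   /* Fill one record's worth of data. */
   if (!(data = malloc(YLEN * ZLEN * sizeof(int))))
      return 2;
   for (i = 0; i < YLEN * ZLEN; i++)
      data[i] = i;

   /* Illustrative file name; any writable path works. */
   if ((fileid = H5Fcreate("/tmp/native.h5", H5F_ACC_TRUNC,
                           H5P_DEFAULT, H5P_DEFAULT)) < 0)
      return 1;
   if ((spaceid = H5Screate_simple(NDIMS, dim, dimmax)) < 0)
      return 1;
   if ((plistid = H5Pcreate(H5P_DATASET_CREATE)) < 0)
      return 1;
   if (H5Pset_chunk(plistid, NDIMS, dim) < 0)
      return 1;

   /* H5T_NATIVE_INT as the file datatype: the data lands on disk in
      the machine's own byte order, with no forced conversion to
      big-endian. */
   if ((datasetid = H5Dcreate(fileid, "V", H5T_NATIVE_INT,
                              spaceid, plistid)) < 0)
      return 1;

   /* H5T_NATIVE_INT as the memory datatype: this describes what
      'data' really is, so the write needs no type conversion. */
   if (H5Dwrite(datasetid, H5T_NATIVE_INT, H5S_ALL, H5S_ALL,
                H5P_DEFAULT, data) < 0)
      return 1;

   H5Dclose(datasetid);
   H5Pclose(plistid);
   H5Sclose(spaceid);
   H5Fclose(fileid);
   free(data);
   return 0;
}

Under HDF5 1.8 or later this would be compiled with
h5cc -DH5_USE_16_API so that H5Dcreate still resolves to the
five-argument form; the point is simply that both datatype arguments
name the buffer's true, native type.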