Hi Keith,

I'd like to correct a misunderstanding. You wrote:

> ... I am very concerned that switching to CDF2 only gains us a factor
> of 2. ... If NetCDF doesn't transition to allow larger records, in the
> near future we will either have to redesign our output and analysis
> tools, which is time consuming, or we won't be able to use NetCDF.

The CDF2 format (also known as the "64-bit offset" format) has no 4 GiB
limit on record size. Records can be much larger than that, as the table
in the Users Guide shows:

http://www.unidata.ucar.edu/software/netcdf/docs/netcdf/Large-File-Support.html

In versions 3.6 and later, the theoretical limit on record size is the
same as the maximum file size, 8 EiB (about 9.22e+18 bytes). With more
than one record in a file, that limit has to be divided by the number of
records so the maximum file size is not exceeded. However, no record
variable can require more than 4 GiB of storage for each record's worth
of data, unless it is the last record variable. But you can have
billions of record variables.

From what you've described, I don't think the CDF2 format limits will
constrain your move to higher resolutions, assuming you use a different
netCDF record variable for each 3D field.

I'm sorry if the documentation seems to imply a 4 GiB record size limit,
but it's actually a 4 GiB per-variable record size limit. If you find
you can't store very large records, that's a bug and we'd like to hear
about it.

--Russ
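
To make the per-variable limit concrete, here is a minimal C sketch of
the layout described above: a 64-bit offset (CDF2) file with one record
variable per 3D field, so each variable stays well under 4 GiB per
record even though the record as a whole (summed over all record
variables) can be far larger. The file name, dimension names, and sizes
below are illustrative assumptions, not anything from the original
message.

    #include <stdio.h>
    #include <stdlib.h>
    #include <netcdf.h>

    /* Abort with a readable message on any netCDF error. */
    #define CHECK(err) do { \
        if ((err) != NC_NOERR) { \
            fprintf(stderr, "netCDF error: %s\n", nc_strerror(err)); \
            exit(1); \
        } \
    } while (0)

    int main(void) {
        int ncid, time_dim, z_dim, y_dim, x_dim;
        int temp_id, salt_id;

        /* NC_64BIT_OFFSET selects the CDF2 format, which removes the
         * 4 GiB limit on total record size. */
        CHECK(nc_create("ocean.nc", NC_64BIT_OFFSET | NC_CLOBBER, &ncid));

        /* One unlimited (record) dimension plus three fixed spatial
         * dimensions; the sizes are placeholders. */
        CHECK(nc_def_dim(ncid, "time", NC_UNLIMITED, &time_dim));
        CHECK(nc_def_dim(ncid, "z", 50, &z_dim));
        CHECK(nc_def_dim(ncid, "y", 2000, &y_dim));
        CHECK(nc_def_dim(ncid, "x", 2000, &x_dim));

        /* Each 3D field gets its own record variable. Per record,
         * each needs 50 * 2000 * 2000 * 4 bytes = 0.8 GB, well under
         * the 4 GiB per-variable-per-record limit. */
        int dims[4] = {time_dim, z_dim, y_dim, x_dim};
        CHECK(nc_def_var(ncid, "temperature", NC_FLOAT, 4, dims, &temp_id));
        CHECK(nc_def_var(ncid, "salinity",    NC_FLOAT, 4, dims, &salt_id));

        CHECK(nc_enddef(ncid));
        CHECK(nc_close(ncid));
        return 0;
    }

The key point the sketch illustrates: the 4 GiB limit applies per
record variable, not to the record as a whole, so splitting fields
across separate record variables sidesteps it.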