Hi Chris,

> Will the performance of access to NetCDF degrade if I set every dimension
> unlimited?

Short answer: yes.

Explanation: when you make a dimension unlimited, variables that use that dimension use chunked storage rather than contiguous storage, which incurs some storage overhead for B-tree indexing of the resulting chunks and for partially written chunks. Also, variables that use unlimited dimensions get a default chunk length of 1 along each unlimited dimension axis. That's a reasonable default for multidimensional variables that have only one unlimited dimension, but if all dimensions are unlimited, the default makes every chunk just big enough to hold a single value, which is a very inefficient use of chunking.

If you specify chunk lengths explicitly for each variable (see the sketch below), and if you intend to append an unknown amount of data along every dimension, it may make sense to make every dimension unlimited. Otherwise, it is better to declare only a few dimensions unlimited: those along which data will be appended.

--Russ
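A minimal sketch with the netCDF-4 C API of the approach described above: both dimensions are declared unlimited (forcing chunked storage), and nc_def_var_chunking is used to override the per-axis default chunk length of 1. The file name, variable name, and chunk lengths here are illustrative, not from the original message.

    #include <stdio.h>
    #include <stdlib.h>
    #include <netcdf.h>

    /* Abort with a readable message on any netCDF error. */
    #define CHECK(e) do { int _s = (e); if (_s != NC_NOERR) { \
        fprintf(stderr, "netCDF error: %s\n", nc_strerror(_s)); exit(1); } } while (0)

    int main(void) {
        int ncid, t_dim, x_dim, varid;
        int dimids[2];
        /* Illustrative chunk lengths: 32 x 32 values per chunk instead of 1 x 1. */
        size_t chunks[2] = {32, 32};

        CHECK(nc_create("example.nc", NC_NETCDF4 | NC_CLOBBER, &ncid));

        /* Every dimension unlimited, so the variable must use chunked storage. */
        CHECK(nc_def_dim(ncid, "time", NC_UNLIMITED, &t_dim));
        CHECK(nc_def_dim(ncid, "x", NC_UNLIMITED, &x_dim));

        dimids[0] = t_dim;
        dimids[1] = x_dim;
        CHECK(nc_def_var(ncid, "data", NC_FLOAT, 2, dimids, &varid));

        /* Override the default chunk length of 1 along each unlimited axis. */
        CHECK(nc_def_var_chunking(ncid, varid, NC_CHUNKED, chunks));

        CHECK(nc_close(ncid));
        return 0;
    }

With explicit chunk lengths like these, appending along either dimension still works, but each chunk holds many values rather than one, which keeps the B-tree index small and I/O efficient.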