Phil,

Some people run lengthy processing scripts that trigger file recopying because, as you describe, they expand the metadata. The recopying can be expensive when the (netCDF3) files are large. A possible workaround is to pad the metadata header the first time the file is processed (or, better yet, created). When you know that more metadata will be added later, invoke the --hdr_pad option with one of the NCO operators that deal mainly with metadata (ncrename, ncatted, ncks); its argument is the number of bytes of extra padding:

http://nco.sf.net/nco.html#hdr

This may save time (but not space :) later on. It is not a well-known option, so I thought it worth mentioning.

cz
--
Charlie Zender, Earth System Sci. & Computer Sci.
University of California, Irvine 949-891-2429 )'(
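P.S. A minimal sketch of how this might look on the command line (the 10000-byte pad, the file name, and the attributes being edited are illustrative, not from any particular workflow):

  # Pad the header with 10000 spare bytes while making a first metadata edit
  ncatted --hdr_pad 10000 -a history,global,a,c,"first pass" in.nc

  # Later metadata-only edits should now fit within the padded header,
  # so the (large) data section need not be recopied
  ncatted -a units,T,o,c,"kelvin" in.nc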