It isn't usually a constant file-size multiplier; it is typically an offset in file size. In other words, the HDF5-based version (with no compression) will typically be somewhat larger than the corresponding netCDF classic file, but the size differential gets smaller as the overall file sizes increase.

Note that with the netCDF-4 HDF5-based file, you can also enable compression with the "-d #" and -s options. '#' can range from 1 to 9, but values of 1 or 2 along with the shuffle (-s) option typically give good results.

--Greg

On 2/4/15, 3:16 PM, "Nico Schlömer" <nico.schloemer@xxxxxxxxx> wrote:

>Hi all,
>
>When converting classic netCDF files to the modern format (compare
>the thread starting at [1]) I noticed that the file size blows up
>considerably, e.g.
>```
>$ du -sh pacman-classical.e
>40K pacman-classical.e
>$ nccopy -k hdf5 pacman-classical.e pacman.e
>$ du -sh pacman.e
>4.1M pacman.e
>```
>with `pacman-classical.e` from [2]. I'm not too worried about this
>now, but is this something you would expect?
>
>Cheers,
>Nico
>
>[1] http://www.unidata.ucar.edu/mailing_lists/archives/netcdfgroup/2015/msg00019.html
>[2] http://win.ua.ac.be/~nschloe/other/pacman.e
>
>_______________________________________________
>netcdfgroup mailing list
>netcdfgroup@xxxxxxxxxxxxxxxx
>For list information or to unsubscribe, visit:
>http://www.unidata.ucar.edu/mailing_lists/
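As a minimal sketch of the compression Greg describes, a conversion along these lines could be tried; the -k hdf5, -d, and -s flags are the nccopy options referenced above, while the file names are placeholders rather than files from this thread:

```
# Convert a classic-format file to netCDF-4 (HDF5-based) with
# deflate level 1 and byte shuffling enabled.
$ nccopy -k hdf5 -d 1 -s input-classic.nc output-compressed.nc

# Compare the resulting sizes.
$ du -sh input-classic.nc output-compressed.nc
```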