Hi Nick,

> I am running some tests with a code I am converting from using a flat
> file to netcdf/hdf5. I am using the parallel MPIIO access mode so unable
> to use the deflation calls via the API. I thought I would use nccopy
> -d9 as a post process on my files to compress them and therefore get
> some space saving whilst still retaining the ability to do a parallel
> read in other related codes.
>
> However, I find that I get quite poor compression using nccopy, much
> worse than I get if I use the API call. In some cases, nccopy -d9 gives
> little or no compression whilst using the API gives me 4-5x compression.
>
> Is this something you would expect or am I missing something critical
> in this case?

No, you should expect exactly the same compression using nccopy as with
the API calls. nccopy calls the API for each variable in the file with
whatever compression level you specify. The API calls are somewhat more
flexible, in that you can specify a different level of compression (or no
compression) for each variable separately, but if you use the same
compression for every variable, there should be no difference.

If you are seeing something different, it sounds like a bug. Can you
provide a sample file that we could use to reproduce the problem and
diagnose the cause?

--Russ

Russ Rew                        UCAR Unidata Program
address@hidden                  http://www.unidata.ucar.edu

Ticket Details
===================
Ticket ID: PBW-682100
Department: Support netCDF
Priority: Normal
Status: Closed
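For reference, a minimal sketch of the post-processing workflow under discussion. The file names here are hypothetical; the commands assume the standard `nccopy` and `ncdump` utilities from the netCDF distribution are on the PATH:

```shell
# Hypothetical file names: uncompressed.nc is the file written in
# parallel, compressed.nc is the deflated copy.

# Compress every variable in the file at deflate level 9:
nccopy -d9 uncompressed.nc compressed.nc

# Inspect the result: with -s, ncdump shows the special virtual
# attributes (e.g. _DeflateLevel, _ChunkSizes) for each variable,
# so you can confirm what compression was actually applied:
ncdump -hs compressed.nc
```

Comparing the `_DeflateLevel` reported by `ncdump -hs` against the level requested is a quick way to check whether nccopy applied the compression you asked for.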
NOTE: All email exchanges with Unidata User Support are recorded in the Unidata inquiry tracking system and then made publicly available through the web. If you do not want to have your interactions made available in this way, you must let us know in each email you send to us.