Re: questions about compression...

NOTE: The netcdf-hdf mailing list is no longer active. The list archives are made available for historical reasons.

Ed Hartnett wrote:

"Robert E. McGrath" <mcgrath@xxxxxxxxxxxxx> writes:

Please check the Users Guide (chapter on 'datasets').

http://hdf.ncsa.uiuc.edu/HDF5/doc/UG/


Basically, there is a set/get pair for all the filters. The standard
filters are: Deflate (GZIP), SZIP compression, Shuffle, and the
Fletcher32 Error Detection Code.

To enable one, you do a H5Pset_... on the Dataset Creation Property
list, then create the dataset with H5Dcreate.
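(For concreteness, here is a minimal sketch in C of that pattern, written
against the HDF5 1.8+ API, which postdates this thread; the file name,
dataset name, chunk shape, and compression level are illustrative choices,
not anything prescribed by the library.)

    #include "hdf5.h"

    /* Sketch: enable the deflate (gzip) filter on a new dataset.
       Compression filters require chunked storage, so a chunk shape
       must be set on the dataset creation property list first. */
    int main(void)
    {
        hid_t   file, space, dcpl, dset;
        hsize_t dims[2]  = {1000, 1000};
        hsize_t chunk[2] = {100, 100};      /* illustrative chunk shape */

        file  = H5Fcreate("compressed.h5", H5F_ACC_TRUNC,
                          H5P_DEFAULT, H5P_DEFAULT);
        space = H5Screate_simple(2, dims, NULL);

        dcpl = H5Pcreate(H5P_DATASET_CREATE);
        H5Pset_chunk(dcpl, 2, chunk);
        /* H5Pset_shuffle(dcpl); */         /* optional: shuffle before deflate */
        H5Pset_deflate(dcpl, 6);            /* gzip level 6 */
        /* H5Pset_fletcher32(dcpl); */      /* optional: checksum filter */

        dset = H5Dcreate(file, "data", H5T_NATIVE_INT, space,
                         H5P_DEFAULT, dcpl, H5P_DEFAULT);

        H5Dclose(dset);
        H5Pclose(dcpl);
        H5Sclose(space);
        H5Fclose(file);
        return 0;
    }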

OK, then let me pose the following requirements question:

Is the requirement that we support one type of compression, both types
of compression that currently exist in the library (gzip and szip), or
that we support all compression filters that may be introduced in the
future?

Or is the requirement that we support file filters, including all the
ones listed above?

If yes to the last question, is it also a requirement that we allow
the user to register callbacks, etc., and so add his own filters to
netCDF-4, just as HDF5 does?
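(For reference, the registration mechanism in question looks roughly like
the sketch below, written against the modern HDF5 1.8+ C API, which
postdates this 2004 thread; the filter id, filter name, and no-op
transform are illustrative only. HDF5 reserves user-defined filter ids
256-511 for testing.)

    #include "hdf5.h"

    /* Filter callback: HDF5 invokes this on both write (encode) and
       read (decode).  This no-op version reports the data unchanged;
       a real filter would transform *buf, reallocating if needed. */
    static size_t
    noop_filter(unsigned int flags, size_t cd_nelmts,
                const unsigned int cd_values[], size_t nbytes,
                size_t *buf_size, void **buf)
    {
        (void)flags; (void)cd_nelmts; (void)cd_values;
        (void)buf_size; (void)buf;
        return nbytes;                       /* bytes now in *buf */
    }

    #define NOOP_FILTER_ID ((H5Z_filter_t)256)   /* illustrative test id */

    static const H5Z_class2_t NOOP_FILTER_CLASS = {
        H5Z_CLASS_T_VERS,   /* struct version */
        NOOP_FILTER_ID,     /* filter id */
        1, 1,               /* encoder and decoder both present */
        "noop",             /* filter name */
        NULL,               /* can_apply callback (optional) */
        NULL,               /* set_local callback (optional) */
        noop_filter         /* the filter function itself */
    };

    /* Register the filter, then request it on a dataset creation
       property list exactly as with the built-in filters. */
    static herr_t enable_noop(hid_t dcpl)
    {
        if (H5Zregister(&NOOP_FILTER_CLASS) < 0)
            return -1;
        return H5Pset_filter(dcpl, NOOP_FILTER_ID,
                             H5Z_FLAG_MANDATORY, 0, NULL);
    }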

Ed
IMNSHO, we should disallow user-defined file filters, since they make files non-portable (if I understand them correctly). If users need that, they should use HDF5. The compression filters should be limited to ones we can read in Java.
