Re: Design

> I do have some comments. Unfortunately my printer only printed the first
> five pages, so I apologize if some of these issues are covered after page 5.

Page 6 is a PostScript image; think that could be the problem?

> 
> First and most importantly, I'd like to comment on the two requirements listed
> on the first page.
> 
> >"Firstly, that the resulting interface be call coompatible with the existing 
> >netCDF implementation; allowing all existing netCDF programs to continue 
> >functioning as expected."
> 
> Does this imply that ALL files written using the HDF interface will be
> compatible with programs using the netCDF interface? Seems improbable.

At the most basic level, HDF is simply a file format.  On top of that are a
number of interfaces (SDS, image, palette, etc.).  We are adding a netCDF
interface; all of the existing HDF interfaces will be retained.  The question
about how (if at all) the HDF image interfaces should be modified to conform
more to the look-and-feel of netCDF still needs to be addressed.

> 
> Will the following be the only change I'll have to make to my programs?:
>  
> From: cc *.c -L/usr/local/lib -lnetcdf
> To: cc *.c -L/usr/local/lib -lhdf

Yes.

> 
> Will netCDF files written using the Unidata library be accessible to programs
> using the HDF-netCDF library and vice versa? Seems impossible. This conversion
> problem needs to be addressed.
> 
> Explicit information should be outlined on exactly what is expected when:
> A file written by Unidata's library is read by HDF.
> A file written by HDF is read by Unidata's library.

Files written using Unidata's XDR-based library will be readable by the
netCDF-HDF library (at least initially; we would like to eventually move to
a single on-disk representation).  Programs linked against the existing Unidata
netCDF library will not be able to read files written by the netCDF-HDF library.

> A file written by HDF is read by netCDF-HDF.
> A file written by netCDF-HDF is read by HDF.

As you quote below, files written via the netCDF interface will be HDF files.
Right now an image and an SDS can reside in the same HDF file, but you can't
do much with the SDS with the image interface.  By the same token, a netCDF
object could also sit in the file, but you'd have to use the netCDF interface
to be able to do anything intelligent with it.  Eventually we will support
embedding images and such inside of netCDF objects, but as I mention below,
just how those interfaces should look is still an open question.

> 
> >"And secondly, that the resulting files be true HDF files in that they will 
> >be recognizable and readable by existin HDF tools. It is to be expected, 
> >however that not all of the HDF tools will know how to intelligently process 
> >all of the information in the new files immediately."
> 
> It is surely the case that not all netCDF programs will be able to read 
> images in a 24-bit format or understand what to do with them. Some comments
> are needed on the subset of HDF data types that will be recognizable to 
> programs using only the netCDF calling interface.
> 

Yes, it is not clear just how access to 24-bit images (say) will fit into
the new interface.  For the initial prototype we will only be supporting the
basic netCDF functionality.  One of the plans for this mailing list was to
address these exact questions.

It has been remarked already that a 24-bit image could just be modeled as a
variable.  This is true, but it may also be a case of technological overkill,
requiring explicit dimension definitions and such.  Since a 24-bit image is
a fairly specific type of thing, something based on the HDF image interfaces
would probably be easier to use, but then it wouldn't really have the
"look-and-feel" of netCDF.  Some middle ground will probably need to be found.
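For concreteness, here is a sketch (in netCDF's CDL notation) of what modeling
a 24-bit image as a plain variable would entail; the names and sizes are
hypothetical, not from the design document:

```
netcdf image24 {          // hypothetical example
dimensions:
        height = 480;
        width  = 640;
        rgb    = 3;       // one byte each for red, green, blue
variables:
        byte image(height, width, rgb);
}
```

This illustrates the overhead mentioned above: even a fixed-format object like
a 24-bit image needs explicit dimension definitions before it can be written
through the netCDF interface.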

> Your overview of the data models is not an overview of the data models. It's
> a compare-and-contrast section. I suggest that you not mention the other
> library in each overview, and instead write another section comparing and 
> contrasting the two.

That section was mainly aimed at people who know about one model and nothing
about the other; I guess "overview" was not really the correct term...

> 
> Very little was mentioned about RANDOM ACCESS (hyperslabs). IMHO this is the
> most important feature any storage format should have. It is the only way that 
> EOS datasets will be accessed. This is one of the most important earth 
> observation satellites, and the GIGABYTES are going to be flowing in less than
> two years. You claim that netCDF programs will have access to image data 
> through netCDF-HDF. Does this mean you plan to implement RANDOM ACCESS for 
> images? My brief scan of the HDF 3.1 doc doesn't reveal that this is 
> currently possible unless the data is an SDS.

The random access to variable data provided in netCDF will be retained.  Again,
it is not clear how people would like images represented --- if random access to
some section of an image is required, it can be added.  [We have been expecting
that EOS will be storing their data in terms of variables which already have
hyperslab access.]

> 
> I wouldn't knock the XDR and transparency issue. Although slow, people are 
> actually able to read files, written on a CRAY, on a MAC! What will the 
> behavior of netCDF-HDF be if these same people switch to netCDF-HDF? Data
> vendors are considering writing CD-ROMs in netCDF solely because they only
> have to stamp one kind of CD.

We were not knocking the transparency aspect of XDR.  Platform transparency
is also a fundamental aspect of HDF: any HDF file written on any machine we
support can be read on any other (with the exception of numbers written in
"Cray native mode" for speed; because of the portability problems, we try to
discourage people from using this mode).  As we see it, the main drawback of
XDR is that to XDR an array of 100 floats requires 100 function calls to
xdr_float(), while HDF is able to optimize this process (translation to and
from the standard HDF storage format is vectorized on the Cray, for example).

> In what version of HDF are VGroups and VDatas discussed? What is the current
> version number? I have 3.1 and it's dated July '90; surely there have been
> revisions since then.

Version 3.1 release 5 was released in the fall of '91.  HDF 3.2 is currently in
beta testing; we hope to have it out soon.  HDF 3.2 represents a fairly large
revision of the underlying structure of HDF, and the documentation is currently
being rewritten.  The documentation for 3.1 release 5 and Vsets can be
downloaded from ftp.ncsa.uiuc.edu:

HDF : /HDF/HDF3.1r5/doc
Vset : /HDF/HDFVset/HDFVset2.1/docs

>       -ethan alpert

-Chris Houck