netcdf-hdf mailing list is no longer active. The list archives are made available for historical reasons.
Well, I was thinking this whole thing through, and came up with a third option. It's a little dramatic, but it might be the best idea... We could create a netcdf directory under the hdf5 directory, and store all our netcdf code in your repository. Then we modify the HDF5 configure so that, with an --enable-netcdf option, it will build netCDF as part of HDF5. This has the following advantages:

1 - User only has to download one tarball.
2 - User only has to go through one configure/make process.
3 - Guaranteed to build netCDF with the same compiler as HDF5.
4 - User only has to link to one library instead of two.

This would also solve the h5cc problem, since I assume you don't use that in your own build, because you know at build time where all the HDF5 libraries and headers are. We could do this and still keep the main control of the code in Unidata's CVS server, using the CVS vendor branch feature to import it into the NCSA CVS server. Any thoughts?
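As a sketch of what the combined build above would look like from the user's side (the --enable-netcdf option is the proposal here, not an existing HDF5 configure flag, and the tarball name is made up):

```shell
# Hypothetical combined build: one tarball, one configure/make pass.
# --enable-netcdf is the proposed option, not yet part of HDF5.
tar -xzf hdf5-with-netcdf.tar.gz
cd hdf5-with-netcdf

# HDF5's configure would descend into the netcdf/ subdirectory
# when netCDF support is requested.
./configure --prefix=/usr/local --enable-netcdf
make
make install

# The user then links against a single installed library tree.
cc -o myapp myapp.c -I/usr/local/include -L/usr/local/lib -lhdf5 -lm
```

The point of the sketch is the user experience: one download, one configure/make, and a single install prefix to point the compiler at.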
This would be a bit of work to get right. We'd have to add options to build HDF5 with and without netCDF-4. If you want options to build netCDF-3 only, we'd have to deal with that. If there are netCDF-4 specific configure options, we'd have to deal with those too. All do-able, but not explored yet.
The one big advantage of the current approach is that you can build netCDF-4 using a binary installation of HDF5. This is not only less work, it lets people build netCDF on a system where HDF5 is installed by the sysadmins, using the same HDF5 as everybody else on the system.

Another advantage of the combined CVS tree would be that it would be easier to include netCDF-4 in our automated testing; it would just happen. This seems like a good idea for the final product. Do we want to tackle this now? Or would it be better to wait until we're ready for production?
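For contrast, the current approach looks roughly like this: build netCDF-4 against an HDF5 that is already installed on the system. (The paths and the --with-hdf5 flag name are illustrative; pointing CPPFLAGS/LDFLAGS at the HDF5 install directory is the other common way to do it.)

```shell
# Current approach: build netCDF-4 against a binary HDF5 installation,
# e.g. one the sysadmins maintain in /opt/hdf5. No HDF5 build required.
tar -xzf netcdf-4.tar.gz
cd netcdf-4

# --with-hdf5 here stands in for however configure is told where HDF5
# lives; setting CPPFLAGS=-I/opt/hdf5/include LDFLAGS=-L/opt/hdf5/lib
# before configure accomplishes the same thing.
./configure --prefix=/usr/local --with-hdf5=/opt/hdf5
make check    # run the netCDF tests against that shared HDF5
make install
```

This is the "same HDF5 as everybody else" property: every package on the machine links the one sysadmin-installed HDF5, rather than each carrying its own copy.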