netCDF-4 implementation question - whether or not to close HDF5

Howdy all!

I'm refactoring some code today, and I notice how much of it is devoted
to releasing HDF5 typeids, fileids, groupids, and other ids on error
conditions.

All of this is really unnecessary work: if I just open the file with the
proper access property list (setting the close degree to H5F_CLOSE_STRONG),
I can have the HDF5 library unwind its own open objects for me.
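
For reference, here's a minimal sketch of what I mean (the function name
and access mode are just illustrative; the HDF5 calls are the real ones):

    #include "hdf5.h"

    /* Open a file so that H5Fclose() will also close any objects
     * (datasets, types, dataspaces, ...) still open in that file. */
    hid_t
    open_file_close_strong(const char *path)
    {
        hid_t fapl, fileid;

        if ((fapl = H5Pcreate(H5P_FILE_ACCESS)) < 0)
            return -1;

        /* H5F_CLOSE_STRONG: closing the file closes everything in it. */
        if (H5Pset_fclose_degree(fapl, H5F_CLOSE_STRONG) < 0)
        {
            H5Pclose(fapl);
            return -1;
        }

        fileid = H5Fopen(path, H5F_ACC_RDWR, fapl);
        H5Pclose(fapl);
        return fileid;
    }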

Do we all think it's OK to do that? Is there any reason from the HDF5
side not to? It would make my code simpler if I could count on HDF5 to
close everything down, instead of keeping track of it all myself.

Perhaps an example will illustrate the point. If a user is reading a
netCDF-4/HDF5 file with the nc_get_var() function, some H5Dread call(s)
take place behind the scenes. If one of those calls fails (perhaps the
file is corrupt), I (of course) return failure, but I also release all the
HDF5 objects I've opened or created in order to read that dataset (typeid,
spaceid, etc.).
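
Something like the following sketch. This is not the actual nc_get_var()
internals, just the shape of the cleanup I mean (the function name is
made up; NC_NOERR/NC_EHDFERR are the usual netcdf.h error codes):

    #include "hdf5.h"
    #include "netcdf.h"

    /* Rough shape of the error cleanup in the read path. With
     * H5F_CLOSE_STRONG I could skip the cleanup at "done:" and let
     * H5Fclose() reclaim whatever is left open. */
    static int
    read_one_var(hid_t datasetid, void *buf)
    {
        hid_t spaceid = -1, typeid = -1;
        int retval = NC_NOERR;

        if ((spaceid = H5Dget_space(datasetid)) < 0)
            { retval = NC_EHDFERR; goto done; }
        if ((typeid = H5Dget_type(datasetid)) < 0)
            { retval = NC_EHDFERR; goto done; }
        /* In real code the memory type would be a native equivalent. */
        if (H5Dread(datasetid, typeid, H5S_ALL, spaceid, H5P_DEFAULT, buf) < 0)
            retval = NC_EHDFERR;

    done:
        if (spaceid > 0) H5Sclose(spaceid);
        if (typeid > 0) H5Tclose(typeid);
        return retval;
    }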

Instead, I could leave all of these hanging until the user actually closes
the file, and then rely on HDF5 to clean them all up. That would be
helpful because, in general, figuring out every possible thing that can go
wrong, and recovering resources on each of those paths, is non-trivial.

On the downside, any of these dangling resources will accumulate until
the user closes the file. I think that's OK. Anyone
agree/disagree/even care?

When finally closing the file, I work through all my open HDF5 ids and
close them, one by one. But that's a waste of code too: why not let
HDF5 do that work?
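
With the close degree set to H5F_CLOSE_STRONG, the close path could shrink
to something like this (again just a sketch with a made-up function name;
the object count is only there to show what HDF5 would be sweeping up):

    #include <stdio.h>
    #include "hdf5.h"

    /* Close a file opened with H5F_CLOSE_STRONG; any dataset, type, or
     * dataspace ids still open in it are closed by H5Fclose() itself. */
    static int
    close_file_strong(hid_t fileid)
    {
        /* The count includes the file id itself, hence the "- 1". */
        ssize_t nobjs = H5Fget_obj_count(fileid, H5F_OBJ_ALL);
        if (nobjs > 1)
            printf("letting HDF5 close %d dangling id(s)\n", (int)(nobjs - 1));

        return (H5Fclose(fileid) < 0) ? -1 : 0;
    }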

Any thoughts would be welcome.

Thanks!

Ed
-- 
Ed Hartnett  -- ed@xxxxxxxxxxxxxxxx

