
[netCDF #PEB-847323]: Re: [netcdf-hdf] NF90_GET_VAR memory leak?



> Hi, Russ,
> 
> The files are all opened at the beginning of the main program and
> remain open until the end.  The program stitches the data variables
> across the files and puts the combined data into one big file.  It
> would be possible to open one file at a time and write its data to the
> large file, but we have reasons for wanting to have each variable
> complete in memory (say, for lossy 16-bit scaling for greater
> compression).
> 
> If chunking is the issue, then perhaps a new chunk cache is being
> allocated for each variable? I see the memory go up as each variable
> is read, so perhaps the old chunk cache is not being deallocated?  I
> have tried closing and reopening all the files after each variable is
> read, but that did not help (and took significant time), which
> suggests that the caches are not deallocated when a file is closed.
> Is there
> some way to 'clean up' after finishing with a particular variable?  I
> could also experiment with writing out uncompressed files (classic or
> just with deflate=0), for which presumably no chunk cache would need
> to be allocated?
> 
> The files I'm working with were created with version 4.0, if that
> makes a difference -- I haven't yet upgraded on the system where I'm
> generating the data.  I can upgrade it, of course.
> 
> ncdump output is after the quoted message.  The cache is the size of
> the 3D array (for one time step).
> 
> -- Ted
> 
> 

Howdy Ted!

Can I suggest that you upgrade netCDF to the current snapshot release:

ftp://ftp.unidata.ucar.edu/pub/netcdf/snapshot/netcdf-4-daily.tar.gz

Within the last month there have been many code changes to fix memory 
leaks. You should also update the machine that writes the files, since the 
default chunk sizes have been changed to give better performance.
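
If you would rather not rely on the library defaults, chunk sizes can also be
set explicitly when each variable is defined in the output file. Here is a
minimal sketch in C (the file, dimension, and variable names, and the sizes,
are all made up for illustration):

    #include <netcdf.h>

    int main(void) {
        int ncid, varid, dimids[3];
        size_t chunks[3] = {1, 256, 512};   /* one time step per chunk */

        /* Error checking omitted for brevity. */
        nc_create("out.nc", NC_NETCDF4, &ncid);
        nc_def_dim(ncid, "time", NC_UNLIMITED, &dimids[0]);
        nc_def_dim(ncid, "y", 256, &dimids[1]);
        nc_def_dim(ncid, "x", 512, &dimids[2]);
        nc_def_var(ncid, "data", NC_FLOAT, 3, dimids, &varid);

        /* Override the default chunk sizes for this variable. */
        nc_def_var_chunking(ncid, varid, NC_CHUNKED, chunks);

        nc_close(ncid);
        return 0;
    }

A chunk of one time step is only an example; the right choice depends on how
the data will be read back.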

As Russ points out, each open HDF5 file maintains a cache in memory. The size 
of the chunk cache (in bytes) can be set with the nc_set_chunk_cache call. You 
could try setting it to a smaller value than the current default of 
32,000,000 bytes.
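
For example, a rough sketch in C (the file and variable names are made up;
nc_set_chunk_cache only affects files opened after it is called, and
nc_set_var_chunk_cache can shrink the cache for a variable once you are done
with it, which may also answer your question about cleaning up after a
particular variable):

    #include <netcdf.h>

    int main(void) {
        int ncid, varid;

        /* Use a 4 MB default cache instead of 32 MB; this applies only
         * to files opened after the call.  1009 slots and a preemption
         * of 0.75 are typical values. */
        nc_set_chunk_cache(4000000, 1009, 0.75f);

        /* Error checking omitted for brevity. */
        nc_open("input.nc", NC_NOWRITE, &ncid);
        nc_inq_varid(ncid, "data", &varid);

        /* ... read the variable, e.g. with nc_get_var_float() ... */

        /* When done with a variable, its cache can be shrunk to almost
         * nothing (the sizes here are illustrative). */
        nc_set_var_chunk_cache(ncid, varid, 1, 1, 0.75f);

        nc_close(ncid);
        return 0;
    }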

You should also go to the HDF5 site and get their latest snapshot release. They 
have fixed several memory issues since the 1.8.3 release.

Please let me know if this doesn't help.

Thanks,

Ed


Ticket Details
===================
Ticket ID: PEB-847323
Department: Support netCDF
Priority: Critical
Status: Closed