Re: [thredds] THREDDS/NCSS/Open Files

Hi Kevin-

I wrote the GEMPAK IOSP over 10 years ago, so my guess is that it may be missing a dataset close method somewhere. The netCDF-Java IOSP API has changed in that time, so it might just be a matter of adding close methods to the IOSP classes. Sean would know more. Glad to know that you still find it useful.
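For illustration only — the real IOSP is Java code inside netCDF-Java, and the class below is a hypothetical stand-in, not the actual GEMPAK IOSP — the close plumbing Don describes amounts to making sure every resource the reader opens is released exactly once:

```python
import tempfile

class GempakStyleReader:
    """Hypothetical stand-in for a file reader that owns an open handle."""

    def __init__(self, path):
        self._raf = open(path, "rb")   # stands in for a RandomAccessFile

    def close(self):
        # Release the handle exactly once; a second close() is a no-op.
        if self._raf is not None:
            self._raf.close()
            self._raf = None

    # Context-manager support so callers can't forget to close.
    def __enter__(self):
        return self

    def __exit__(self, *exc):
        self.close()

tmp = tempfile.NamedTemporaryFile(delete=False)
tmp.write(b"GEMPAK")
tmp.close()

with GempakStyleReader(tmp.name) as reader:
    pass        # handle released when the with-block exits
reader.close()  # safe: close() is idempotent
```

If an IOSP skips this step, every dataset opened through it leaves its underlying file handle behind until the process exits.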


On 5/6/20 1:10 PM, Tyle, Kevin R wrote:

Our THREDDS server (still running 4.6.13 at this time) serves both a current-week and a longer-term archive of GEMPAK-formatted METAR files as Feature Collections. Very nicely, THREDDS invokes netCDF-Java to handle the conversion from GEMPAK to netCDF. The archive is accessed especially frequently at this time of year, when my co-instructor and I have the students do a case study of their choice and use MetPy and Siphon to access, subset, and display surface maps and meteograms for their event of interest.

Typically, I soon run into issues where the THREDDS server fails with HTTP 500 errors when an arbitrary GEMPAK surface file is accessed via NCSS. I have traced this to the maximum sizes for our NCSS and Random Access file caches being set too low.

I see messages in the content/thredds/logs/cache.log file that look like this:

[2020-05-06T00:25:01.089+0000] FileCache NetcdfFileCache  cleanup couldnt remove enough to keep under the maximum= 150 due to locked files; currently at = 905

[2020-05-06T00:25:44.105+0000] FileCache RandomAccessFile cleanup couldnt remove enough to keep under the maximum= 500 due to locked files; currently at = 905
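Those messages mean the cleanup pass can only evict cache entries that no request currently holds. A toy sketch of that behavior (this is not the actual TDS FileCache code; the names and structure here are made up for illustration):

```python
def cleanup(cache, maximum):
    """Evict unlocked entries until the cache is at or under `maximum`.

    Locked entries (files still held by an active request) are skipped,
    so the cache can stay above `maximum` indefinitely -- which is what
    the "couldnt remove enough ... due to locked files" message reports.
    """
    for name in list(cache):
        if len(cache) <= maximum:
            break
        if not cache[name]["locked"]:
            del cache[name]
    return len(cache)

# 905 of 1000 entries are locked, so cleanup cannot get below 905
# even with maximum=150, mirroring the log lines above.
cache = {f"file{i}": {"locked": i < 905} for i in range(1000)}
print(cleanup(cache, maximum=150))  # 905
```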

No problem; I have upped those limits now. But the “locked files” references made me do some poking around on the machine running THREDDS. I noticed that when I run *lsof* and grep for one of the GEMPAK files that has been accessed, I see a very large number of matches.

For example, just now I picked one particular file, ran my Jupyter notebook that queries and returns the subsetted data via Siphon, and then ran *lsof* and grepped specifically for that one file.

Not surprisingly, it was listed in the *lsof* output. But surprisingly, *lsof* listed it 89 times! Why might that be the case?

Multiply this by a dozen or so students and co-instructors, and 1-4 individual GEMPAK files per case, and now I’m seeing why I consistently run into issues, particularly with these types of datasets. Once the notebook instance is closed, the open files disappear from *lsof*, but oftentimes students (and even I) forget to close and halt their Jupyter notebooks.
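The pattern of one file showing up dozens of times can be reproduced in miniature. Here is a small, self-contained sketch (assuming a Linux /proc filesystem; none of this is TDS or Siphon code) showing how repeated opens without matching closes accumulate duplicate handles on a single file, which is exactly what *lsof* reports:

```python
import os
import tempfile

def open_fd_count(path):
    """Count how many of this process's file descriptors point at `path`."""
    fd_dir = f"/proc/{os.getpid()}/fd"
    count = 0
    for fd in os.listdir(fd_dir):
        try:
            if os.readlink(os.path.join(fd_dir, fd)) == path:
                count += 1
        except OSError:
            continue  # fd vanished between listdir and readlink
    return count

# Open the same file five times without closing: five duplicate handles,
# analogous to one GEMPAK file appearing many times in lsof output.
tmp = tempfile.NamedTemporaryFile(delete=False)
tmp.close()
target = os.path.realpath(tmp.name)

handles = [open(target) for _ in range(5)]
print(open_fd_count(target))   # 5

for h in handles:
    h.close()
print(open_fd_count(target))   # 0
```

On the server side the handles belong to the Tomcat/TDS process rather than the notebook, but the accounting is the same: each request that opens the file without a corresponding close adds another *lsof* line.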

Curiously, when I look into my content/thredds/cache/ncss directory, I don’t see anything.

So my two questions are:

 1. Why does *lsof* return such a large number of duplicate references
    for a single file that’s being accessed via NCSS?
 2. Why do I not see files appear in the *cache* directory, even when
    there are clearly instances when the cache scouring script detects them?




Kevin Tyle, M.S.; Manager of Departmental Computing

NSF XSEDE Campus Champion

Dept. of Atmospheric & Environmental Sciences

University at Albany

Earth Science 228, 1400 Washington Avenue

Albany, NY 12222

Email: ktyle@xxxxxxxxxx

Phone: 518-442-4578


NOTE: All exchanges posted to Unidata maintained email lists are
recorded in the Unidata inquiry tracking system and made publicly
available through the web.  Users who post to any of the lists we
maintain are reminded to remove any personal information that they
do not want to be made public.

thredds mailing list
Don Murray
