Hi,
Our THREDDS server (http://thredds.atmos.albany.edu:8080/thredds, still
running 4.6.13 at this time) serves both a current-week and a longer-term
archive of GEMPAK-formatted METAR files as Feature Collections. Very nicely,
THREDDS invokes netCDF-Java to handle the conversion from GEMPAK to netCDF.
The archive gets especially heavy use at this time of year, when my
co-instructor and I have the students do a case study of their choice, using
MetPy and Siphon to access, subset, and display surface maps and meteograms
for their event of interest; a typical notebook query is sketched below.
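For context, the student notebooks do something like the following (the
catalog path, dataset choice, and variable names here are illustrative
rather than exact):

    from datetime import datetime

    from siphon.catalog import TDSCatalog

    # Illustrative catalog URL -- the real notebooks point at our GEMPAK
    # METAR feature collections.
    cat = TDSCatalog('http://thredds.atmos.albany.edu:8080/thredds/'
                     'catalog/metar/catalog.xml')
    ds = list(cat.datasets.values())[0]

    # Build a NetCDF Subset Service (NCSS) query against that dataset.
    ncss = ds.subset()
    query = ncss.query()
    query.lonlat_box(north=46, south=40, east=-70, west=-80)
    query.time_range(datetime(2020, 5, 1, 0), datetime(2020, 5, 2, 0))
    query.variables('TMPC', 'DWPC', 'PMSL')  # GEMPAK-style names; illustrative
    query.accept('csv')

    data = ncss.get_data(query)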
Typically, I soon run into issues where the THREDDS server fails with 500
errors when an arbitrary GEMPAK surface file is accessed via NCSS. I have
traced this to the maximum sizes of our NetcdfFile and RandomAccessFile
caches being set too low.
I see messages in the content/thredds/logs/cache.log file that look like this:
[2020-05-06T00:25:01.089+0000] FileCache NetcdfFileCache cleanup couldnt
remove enough to keep under the maximum= 150 due to locked files; currently at
= 905
[2020-05-06T00:25:44.105+0000] FileCache RandomAccessFile cleanup couldnt
remove enough to keep under the maximum= 500 due to locked files; currently at
= 905
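If I am reading the TDS docs right, those maxima live in threddsConfig.xml;
the defaults look roughly like this (the maxFiles values match the numbers
in the log above; the minFiles and scour values are my recollection of the
documented defaults):

    <!-- minFiles/scour values approximate; maxFiles matches the log -->
    <NetcdfFileCache>
      <minFiles>100</minFiles>
      <maxFiles>150</maxFiles>
      <scour>12 min</scour>
    </NetcdfFileCache>
    <RandomAccessFile>
      <minFiles>400</minFiles>
      <maxFiles>500</maxFiles>
      <scour>11 min</scour>
    </RandomAccessFile>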
No prob, I have upped these limits now. But those "locked files" references
made me do some poking around on the machine running THREDDS. I notice that
when I run lsof and grep for one of the GEMPAK files that has been accessed,
I see a really large number of matches.
For example, just now I picked one particular file, ran my Jupyter notebook
against it (the notebook queries and returns the subsetted data via Siphon),
and then ran lsof and grepped specifically for that one file.
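Roughly like this (the file name here is made up, but the count is what I
actually saw):

    $ lsof | grep 20200506_sao.gem | wc -l
    89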
Not surprisingly, it was listed in the lsof output. But surprisingly, lsof had
it listed 89 times! Why might that be the case?
Multiply this by a dozen or so students and co-instructors, and 1-4 individual
GEMPAK files per case, and now I'm seeing why I consistently run into issues,
particularly with these types of datasets. Once the notebook instance is
closed, the open files disappear from lsof, but oftentimes students (and even
I) forget to close and halt their Jupyter notebooks.
Curiously, when I look into my content/thredds/cache/ncss directory, I don't
see anything.
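For reference, that directory is (as far as I can tell) controlled by the
NetcdfSubsetService stanza in threddsConfig.xml; the values below are the
documented defaults, not something I have customized:

    <NetcdfSubsetService>
      <allow>true</allow>
      <dir>${tds.content.root.path}/thredds/cache/ncss/</dir>
      <scour>15 min</scour>
      <maxAge>30 days</maxAge>
    </NetcdfSubsetService>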
So my two questions are:
1. Why does lsof return such a large number of duplicate references for a
single file that's being accessed via NCSS?
2. Why do I not see any files appear in the cache directory, even though the
cache scour log clearly shows files being tracked?
Thanks,
Kevin
_____________________________________________
Kevin Tyle, M.S.; Manager of Departmental Computing
NSF XSEDE Campus Champion
Dept. of Atmospheric & Environmental Sciences
University at Albany
Earth Science 228, 1400 Washington Avenue
Albany, NY 12222
Email: ktyle@xxxxxxxxxx
Phone: 518-442-4578
_____________________________________________