Re: [thredds] "Too Many Open Files" Error. Dataset too big?

John et al,

> Aggregations are not supposed to leave files open. If they are, that's a bug.
> The number of open files should typically be less than 1000, no matter how 
> big your aggregations are. It is dependent on the file cache sizes, see:

We set our NetcdfFileCache maxFiles to match our ulimit on the max # of
open files (1 million). The hope is that as many files as possible stay
cached open by the server, for quicker catalog scans/requests. We never
even approach 1 million concurrently open files.

  <NetcdfFileCache>
    <minFiles>1000</minFiles>
    <maxFiles>1000000</maxFiles>
    <scour>24 hours</scour>
  </NetcdfFileCache>
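
For reference, the OS side of that has to be raised for whatever account
Tomcat/TDS runs as. A minimal sketch of the kind of /etc/security/limits.conf
entries involved (the "tomcat" user name is just a placeholder, and your
container may instead set the limit in its startup script):

  # placeholder user name; use the account that actually runs Tomcat/TDS
  tomcat  soft  nofile  1000000
  tomcat  hard  nofile  1000000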


It was also my understanding that ALL of the files in a particular
dataset scan need to be touched/opened/checked before being folded
into an aggregation (this happens on the initial scan, when the results
are saved into the cache/agg folder). After this initial indexing, the
average # of open files will drop off, since queries only pick and
choose a few files from that catalog, but you still need all the proper
system limits bumped up for this initial scan/cache to ever complete
successfully in a timely fashion. Correct or incorrect?
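
For concreteness, a minimal sketch of the kind of scan aggregation I mean
(location and suffix are made up); on that first pass the TDS has to open
every matched file to read its coordinate values, which then get persisted
under the aggregation cache directory (the <AggregationCache> element in
threddsConfig.xml, if I understand it right):

  <netcdf xmlns="http://www.unidata.ucar.edu/namespaces/netcdf/ncml-2.2">
    <aggregation dimName="time" type="joinExisting" recheckEvery="15 min">
      <scan location="/data/example/" suffix=".nc" />
    </aggregation>
  </netcdf>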

/mike


