
Re: number of simultaneously open netCDFs



> Organization: NOAA/CDC
> Keywords: 199405171821.AA19248

Hi Don,

>       We expanded the per-process limit on the number of
> simultaneous netCDF opens locally by changing MAX_NC_OPEN in
> netcdf.h to the unix limit of 1024 and rebuilding the library.
> 
> I had an application that needed on the order of 225 netCDFs open
> at once.  This was because the input was structured oddly:  the
> northern hemisphere of each field came first, followed by the
> southern hemispheres in a different order.  I ran into all kinds
> of weird problems.
> 
> Actually, the mail I sent you earlier about ncattinq not returning
> the string length I expected for NC_CHAR type attributes was part
> of these problems.  I was rather confused by my problems, as I'd
> written several netCDF applications prior to this (opening only one
> or two netCDFs, though) without these sorts of difficulties.
> 
> I finally gave up and wrote a program to integrate the hemispheres
> first, thus making it convenient for the main application to have
> only one netCDF open at a time.  Otherwise, I didn't change the
> netCDF aspects of the code.  The thing worked like a champ then.
> 
> I'll now finally get around to what I'm asking you.  :)  Since
> you folks at Unidata no doubt write a few netCDF applications,  ;)
> I was wondering what's the highest number of netCDFs one of
> your applications has had open at once, successfully.  I'm just
> curious if anyone else, to your knowledge, had tried something of
> this scale, and if there were some known problems with such
> applications.

We've never had more than a handful of netCDF files open simultaneously in
our applications, so I can't provide much help with the problems you
encountered after setting MAX_NC_OPEN to 1024.  Every version of UNIX has
its own limit on the number of files that a single process can have open
simultaneously; if that limit was lower than 1024 on your system, I can
imagine why things wouldn't work.
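
If that was the problem, you may also need to raise the per-process
descriptor limit itself, not just MAX_NC_OPEN.  I haven't needed to do this
myself, but on systems with the BSD-style getrlimit()/setrlimit() interface,
a sketch like the following should raise the soft limit as far as the hard
limit allows (the error handling here is just illustrative):

#include <stdio.h>
#include <sys/resource.h>

int
main()
{
    struct rlimit rl;

    /* query the current soft and hard limits on open descriptors */
    if (getrlimit(RLIMIT_NOFILE, &rl) != 0) {
        perror("getrlimit");
        return 1;
    }

    /* raise the soft limit to the hard limit */
    rl.rlim_cur = rl.rlim_max;
    if (setrlimit(RLIMIT_NOFILE, &rl) != 0) {
        perror("setrlimit");
        return 1;
    }

    printf("open-file limit is now %ld\n", (long) rl.rlim_cur);
    return 0;
}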

The limit on the number of simultaneously open files in a process is
supposed to be returned by a call to the C library function sysconf(), as in
the following C program:

#include <unistd.h>
#include <stdio.h>
int
main()
{
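    /* _SC_OPEN_MAX is the per-process limit on open file descriptors */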
    printf("%ld\n", sysconf(_SC_OPEN_MAX));
    return 0;
}
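
To compile and run it (the file name here is just an example):

    cc -o open_max open_max.c
    ./open_max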

On my Solaris 2.3 system, this prints "64", so it wouldn't work to set
MAX_NC_OPEN to anything larger than about 60 on my system (since you still
usually need some file descriptors for stdin, stdout, and stderr).
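
If it's useful, the same sysconf() call can size things at run time instead
of hard-coding a value; a minimal sketch (the margin of 4 is just an
illustrative choice, reserving the three standard descriptors plus a spare):

#include <unistd.h>
#include <stdio.h>
int
main()
{
    /* leave room for stdin, stdout, stderr, and one spare descriptor */
    long max_nc_files = sysconf(_SC_OPEN_MAX) - 4;
    printf("at most %ld netCDF files open at once\n", max_nc_files);
    return 0;
}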

--Russ