
Re: 2 GB netCDF limit



> From: address@hidden (John Sheldon)
> Subject: Re: 2 GB netCDF limit
> To: address@hidden (Russ Rew)
> Date: Mon, 16 Jun 1997 17:09:12 -0400 (EDT)
> Cc: address@hidden (Hans Vahlenkamp),
>         address@hidden (Stephen Griffies),
>         address@hidden (Ron Pacanowski)

John,

> > > On another topic...many people here are starting to *panic* about the 2
> > > GB file limit in netCDF.  I know you are planning to remove that limit
> > > with netCDF 4, but how far off is that?  After convincing everyone here
> > > to convert to netCDF, I am naturally feeling a bit of panic, too :-) !
> > 
> > Permitting bigger files requires a file format change (64-bit offsets
> > instead of 32-bit offsets).  We don't want to have to change the netCDF
> > file format more than once, if we can help it.  So the file changes
> > required for netCDF 4 will anticipate everything we can think of.  It
> > may be more than a year away.  I'm afraid I can't give you anything more
> > definite now ...
> 
> Well, we've spent quite an interesting afternoon here trying to figure
> out just what happened to a user's "extra-large" netCDF file.
> Basically, he concatenated 2 files, each ~1.2GB, on our Cray T90 using
> "nccatm", a local extension (for multiple files) of Chuck Denham's
> "nccat" program.  Panic initially set in when the resulting file did
> not appear readable, based on attempts to ncdump it from our SGI
> workstations.  However, we soon found that we *could* ncdump it on the
> Cray!  Encouraged, we split the 2.4GB file into two parts using
> Charlie Zender's "ncrcat" utility, and both pieces were readable from
> our SGIs (we viewed them using "ncview").
> 
> Not believing our good fortune, we repeated this user's steps with some
> new files, and checked the status code returned by "nccatm": 0!  With
> the wind apparently at our backs, we further checked the return code
> coming back from ncvarput while it was writing well beyond 2 GB: 0!
> 
> We are continuing to concatenate as I write this (4.5GB and counting).
> 
> Can we infer from this that files larger than 2GB *are* possible on a
> Cray?  (Obviously, we'd like to find out that our seeming success is by
> design, not by accident.)  Despite an inability to access such files
> from our workstation, this would be very good news :-)

Interesting.

Maybe the Cray is just computing huge file offsets in its 64-bit
registers and using those.  I'm very skeptical about how well that will
work in general.  The file format still has only a 32-bit slot for each
variable's offset, but if the offset of the beginning of a record
variable happens to fit in that slot, things might work for that
variable.  Here's something that I don't think could possibly work:

netcdf big {
dimensions:
        nx=256, ny=256, nz=256, nw=256;
        rec = unlimited;
variables:
        byte x(nx,ny,nz,nw);    // 256^4 = 4294967296 bytes of data
        float y(rec);           // its data begins after all of x's
data:
        x = 1;
        y = 1;
}

because the offset for the first y value won't fit in the 32-bit slot
allotted to it.  On a Sun, when I try "ncgen -b big.cdl" on this file, it
immediately reports:

   ncgen: big.cdl line 9: too many values for this variable, 0 >= 0

(whoops, a bug: 0 >= 0?), but that message is generated by ncgen, not
the underlying library.  If I change nx to 128, it takes a while, trying
to create a 2147475632-byte file, and ultimately fails with:

   ncgen: Invalid argumentncgen: big.cdl line 9: out of memory

I suspect you'd get something similar on a Cray ...
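In case it helps, here's a rough back-of-the-envelope sketch in plain C
(not netCDF library code; the header-size guess and the names are made
up) of why y's offset can't fit in that 32-bit slot:

#include <stdio.h>
#include <limits.h>

/* Rough check: does the offset of variable y in the "big" file above
 * fit in a 32-bit slot?  header_guess is just a made-up allowance for
 * the file header, not the real header size. */
int main(void)
{
    const long long header_guess = 1024;
    const long long nx = 256, ny = 256, nz = 256, nw = 256;

    long long x_bytes  = nx * ny * nz * nw;      /* byte x(nx,ny,nz,nw): 4294967296 bytes */
    long long y_offset = header_guess + x_bytes; /* y's data starts right after x's */

    printf("size of x:   %lld bytes\n", x_bytes);
    printf("offset of y: %lld (INT_MAX is %d)\n", y_offset, INT_MAX);
    printf("fits in a signed 32-bit slot? %s\n",
           y_offset <= (long long) INT_MAX ? "yes" : "no");

    /* The nx=128 case: 128*256*256*256 = 2147483648 bytes, one past
     * INT_MAX, which lines up with the failure near 2 GB above. */
    printf("nx=128 size: %lld bytes\n", 128 * ny * nz * nw);
    return 0;
}

That assumes the 2 GB figure means the effective limit is what a signed
32-bit value can hold; on that assumption, the x variable's data alone in
the nx=128 case already needs one byte more than the limit.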

--Russ