
[netCDF #NKH-242500]: Open Large File Error



Hi Doug,

Sorry it has taken us so long to get to your question.

> I am trying to use netCDF to open a LARGE (2.6 GB) .nc hydrology file of a
> type that will be used in future analysis of potential Everglades restoration
> scenarios.  Basic problem is that when I try to open the file I get an
> "Error: arg list too long."

That error message doesn't come from the netCDF library; it must come from a
lower-level OS call or from the operating system itself.

> Detailed info:
> 
> 1.) I am using Visual Studio 2008
> 
> 2.) Given that the error code is a "7" implying a non netCDF error, I wrote a
> few lines of code to read in part of the larger file and write it out.  In
> this way I created a 1.9 gb file and a 2.3 gb file of the same data.  I can
> open the 1.9 gb file in netCDF but get the error from the 2.3 gb file.  Given
> that I did this in the same VS project as the netCDF code, it is quite clear
> that I can read and write 2+ gb files on this machine in VS 2008, and
> therefore the error most probably is occuring in netCDF or in the
> comunication between netCDF and the system.

There are many reasons why you might not be able to write a large netCDF
file; see the answer to the netCDF FAQ question

  "Why do I get an error message when I try to create a file larger than 2 GiB
   with the new library?"
   
http://www.unidata.ucar.edu/software/netcdf/docs/faq.html#Large%20File%20Support12

But I don't understand why you would get an error opening such a file, unless
you are using a version of netCDF before 3.6.1, in which case the error
should simply be the netCDF error "Not a netCDF file".

> 3.) I am using NC_64BIT_OFFSET as part of the nc_open statement.
> "retval=nc_open("Glades.nc",NC_64BIT_offset,&ncid);"

Using the NC_64BIT_OFFSET flag in an nc_open call is wrong; that flag is only
appropriate for an nc_create call.  When you are creating a new netCDF file in
some non-default variant of the format, such as the 64-bit offset format, you
need to specify the format, but the only flags allowed (and documented) for
use in an nc_open call are NC_NOWRITE, NC_WRITE, and NC_SHARE.  On opening a
netCDF file, the library automatically determines which variant of the netCDF
format the file uses.

With the current snapshot version of netCDF, specifying NC_64BIT_OFFSET to 
nc_open() results in the error message "Invalid argument".

In older versions of the library, use of that flag with nc_open is not
detected as an error; nc_open just proceeds as if 0 (the same as NC_NOWRITE)
had been given.  So that argument is not what's producing the "argument list
too long" message.
 
> 4.) Just to test the netCDF binaries I was using (3.6.1) I downloaded and
> built 4.0.1 and tried those instead.  Same problem.
> 
> 5.) I thought I would try ncdump on the file, but ncdump seems to have its
> own problem.
> "cannot find procedure entry point nc_inq_enum_member in netcdf.dll".  Could
> this be associated with the error?

It sounds like the ncdump you are using was the one built in the 4.0.1
release, but the netcdf.dll libraries it is using are the ones from the
netCDF-3.6.1 release.  An ncdump from the 3.6.1 release should work fine
with large files (over 4 GBytes), as I just verified.

> So, best guess is that this is an issue with large files and netcdf.
> However, this is an issue with opening as opposed to writing large files,
> which is something I didn't find in any other error reports here.

I've never seen that either, as far as I can remember.  To test netCDF
large file support on a Unix or Cygwin system, it's only necessary to
configure with "--enable-large-file-tests", which creates and reads 13 GB
files, but I don't know how to run those tests from a Windows installation
built using Visual Studio 2008.
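
On a Unix or Cygwin system the test run would look something like this (a
sketch; the exact set of tests run by "make check" depends on the release):

```shell
# Configure the netCDF source tree with the large-file tests enabled,
# then build and run the test suite.  The large-file tests create and
# read ~13 GB files, so the build directory needs that much free space.
./configure --enable-large-file-tests
make check
```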

> Any help would be greatly appreciated.

If you can make one of the big files available to us, we can try reading it
here or running ncdump on it, to make sure it's not corrupted somehow ...

--Russ

Russ Rew                                         UCAR Unidata Program
address@hidden                     http://www.unidata.ucar.edu



Ticket Details
===================
Ticket ID: NKH-242500
Department: Support netCDF
Priority: Normal
Status: Closed