Re: large file support

Hi Chris:

Sounds like a bug!
Any way you can put your data on an ftp or http server so I can recreate the problem here?


Christopher.Moore@xxxxxxxx wrote:

Hi John & others,

A lot of us are creating large netcdf files these days, and I was wondering if anyone is having the same kind of problems I am:

When I open a large (~2 GB) netcdf file using the latest stable release, I've been getting an error. The code was working when the file size was smaller (fewer time steps, smaller lat/lon resolution).

My data is simply
        LON = 1276 ;
        LAT = 950 ;
        TIME = UNLIMITED ; // (601 currently)
        double LON(LON) ;
                LON:units = "degrees_east" ;
                LON:point_spacing = "even" ;
        double LAT(LAT) ;
                LAT:units = "degrees_north" ;
                LAT:point_spacing = "uneven" ;
        double TIME(TIME) ;
                TIME:units = "SECONDS" ;
        float HA(TIME, LAT, LON) ;

I start by opening the file, getting the time dimension & variable. It croaks when reading the variable into an Array (note that I'm only reading the TIME axis array, fairly small):

            Variable testVar = timeDim.getCoordinateVariable();
            System.out.println("testVar.getName: " + testVar.getName());
            System.out.println("testVar.getDataType: " + testVar.getDataType());
            System.out.println("testVar.getRank: " + testVar.getRank());
            int[] testShape = testVar.getShape();
            for (int q = 0; q < testShape.length; q++)
                System.out.println("testShape[" + q + "]: " + testShape[q]);
            int[] testOrigin = new int[testVar.getRank()];
            Array testArr2 =, testShape);

error: Negative seek offset
        at Method)
        at ucar.netcdf.RandomAccessFile.read_(
        at ucar.netcdf.NetcdfFile$V1DoubleIo.readArray(
        at ucar.netcdf.NetcdfFile$V1Io.copyout(
        at ucar.netcdf.Variable.copyout(

I immediately thought it was a Large File Support problem, but a little investigation turned up the fact that I don't (yet) need 64-bit addressing because my record sizes are small. In fact, I can read the file fine with applications that use the C and FORTRAN netcdf libraries (version 3.6 or higher). And ncdump works.
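As a sanity check on that reasoning, the sizes implied by the CDL dump above can be computed directly (a minimal sketch using those dimensions; the class name is mine, not from the original code):

```java
public class RecordSize {
    public static void main(String[] args) {
        long lon = 1276, lat = 950, nrec = 601;
        // One record of float HA(TIME, LAT, LON) is LAT * LON floats, 4 bytes each
        long recordBytes = lat * lon * 4L;        // 4,848,800 bytes, well under 2^31
        // Total data size across all records
        long totalBytes = recordBytes * nrec;     // 2,914,128,800 bytes, over 2^31
        System.out.println("record bytes: " + recordBytes);
        System.out.println("total bytes:  " + totalBytes);
        // Classic CDF-1 only needs each record (and each fixed variable) to fit
        // in a 32-bit offset, so this file does not require 64-bit addressing
        System.out.println("record needs 64-bit offsets: "
                + (recordBytes > Integer.MAX_VALUE));
    }
}
```

This matches the behavior described: the total file is past 2 GB, but no single record offset overflows 32 bits, which is why the C and FORTRAN libraries read it fine.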

But I went ahead and created the large file with 64-bit addressing anyway by simply changing one line in my FORTRAN model from

        iret = nf_create(ncfn, NF_CLOBBER, ncid)

to

        iret = nf_create(ncfn, OR(NF_CLOBBER, NF_64BIT_OFFSET), ncid)

It creates the file just fine; checking with

      od -An -c -N4
gives the expected
      C    D    F 002

and ncdump gives

     netcdf s_2903-563_ha64 { // format variant: 64bit
     dimensions: ...
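The magic-number test that od is doing can also be expressed in Java (a hypothetical helper, not part of the original code; the header bytes are hard-coded to the CDF-2 values od printed above):

```java
public class MagicCheck {
    // Classic netcdf files begin with "CDF" plus a version byte:
    // 1 = 32-bit offsets (CDF-1), 2 = 64-bit offsets (CDF-2)
    static int cdfVersion(byte[] header) {
        if (header.length >= 4 && header[0] == 'C' && header[1] == 'D' && header[2] == 'F') {
            return header[3];
        }
        return -1; // not a classic netcdf file
    }

    public static void main(String[] args) {
        // The four bytes that od -An -c -N4 printed above: C D F \002
        byte[] header = {'C', 'D', 'F', 2};
        System.out.println("CDF version byte: " + cdfVersion(header)); // prints 2
    }
}
```

A library that only accepts a version byte of 1 would report "Not a netcdf file" for this header, which is consistent with the exception below.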

but my java code still croaks (in the same way Ferret does if not compiled against a netcdf lib >= 3.6):

Exception in thread "AWT-EventQueue-0" java.lang.IllegalArgumentException: Not a netcdf file
        at ucar.netcdf.NetcdfFile.readV1(
        at ucar.netcdf.NetcdfFile.<init>(
        at ucar.netcdf.NetcdfFile.<init>(
        at ucar.nc2.NetcdfFile.<init>(

Anybody else out there creating java applications that read large files?

I'm running RHEL4 on an EM64T x86_64 machine, with java 1.5.0_06 and java netcdf version 2.2.16.


Christopher W. Moore             email: Christopher.Moore@xxxxxxxx
Research Scientist, Oceanography                 tel: 206.526.6779
University of Washington/JISAO/NOAA-PMEL         fax: 206.526.6744
NOAA Center for Tsunami Research                       Seattle, WA

To unsubscribe netcdf-java, visit:

