
[netCDF #FFY-177157]: NetCDF-4 and 64 bit dimensions



Rob,

> I think I get it: For files in the CDF-2 format, the netcdf3 api
> matches what is on disk: since no dimension can be larger than 4 GB,
> there is no problem using size_t for the start[] array.
> 
> If the file is created with the NC_NETCDF4 flag, 32 bit codes are
> limited by the API in what they can describe: a 4-byte size_t will
> make it impossible to define a variable with a dimension greater than
> 4GB.
> 
> The only sticky area is when a 64 bit machine (AMD opteron, intel
> core2) creates a NC_NETCDF4 file with big dimensions, then sends that
> file to a collaborator with a 32 bit machine. In that scenario, you
> just give an error, essentially saying "try again on a 64 bit
> machine", right?

That's what should happen, but since we don't test that scenario, I
had to try it.  
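
For reference, here is a minimal sketch of that kind of test (the file
and dimension names are made up, and error checking is omitted):

    #include <stdio.h>
    #include <netcdf.h>

    int main(void) {
        int ncid, dimid;
        size_t len;

        /* Create a netCDF-4 file with one dimension whose length
         * needs more than 32 bits.  On a 32-bit platform the cast
         * itself truncates, since size_t is only 32 bits there; on
         * a 64-bit platform the full value reaches the library. */
        nc_create("big_dim.nc", NC_NETCDF4 | NC_CLOBBER, &ncid);
        nc_def_dim(ncid, "big_dim", (size_t)4500000000ULL, &dimid);
        nc_close(ncid);

        /* Reopen and ask what length was actually stored. */
        nc_open("big_dim.nc", NC_NOWRITE, &ncid);
        nc_inq_dimid(ncid, "big_dim", &dimid);
        nc_inq_dimlen(ncid, dimid, &len);
        printf("stored length: %zu\n", len);  /* expect 4500000000 */
        nc_close(ncid);
        return 0;
    }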

It appears you've identified a bug.  When I try that, the size of a
big dimension (4500000000, which requires more than 32 bits) is
silently truncated to 32 bits and gets stored as 205032704, so it
looks like the 64-bit library doesn't even store big dimensions
correctly, at least on Solaris 10 xeon i86pc with Sun's cc.  Then of
course a 32-bit platform has no problem with that file, since the
truncated dimension fits comfortably in 32 bits!
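
The stored value is consistent with keeping only the low-order 32
bits: 4500000000 - 2^32 = 4500000000 - 4294967296 = 205032704.  The
same conversion in a couple of lines of C (using fixed-width types so
the truncation is explicit):

    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        uint64_t requested = 4500000000ULL;    /* needs 33 bits */
        uint32_t stored = (uint32_t)requested; /* kept modulo 2^32 */
        printf("%u\n", stored);                /* prints 205032704 */
        return 0;
    }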

I'll have to investigate further to make sure we're supposed to
support big dimensions when using 64-bit HDF5 and netCDF-4 libraries
on a 64-bit platform, in case the bug is in my understanding instead ...

--Russ

Russ Rew                                         UCAR Unidata Program
address@hidden                     http://www.unidata.ucar.edu



Ticket Details
===================
Ticket ID: FFY-177157
Department: Support netCDF
Priority: Normal
Status: Closed