
[netCDF #ZCE-849683]: clarification on large file support



> I'm quite sorry,
> Actually I'm writing much more than what I stated in my previous email
> 
> I'm trying to dump about 9 GB to disk, but it does not work and gives me
> the error I mentioned.
 ...
> I'm still trying to write a large file on a 64bit platform.
> 
> I fixed the variable types and now I get 8 bytes for both the size_t and
> off_t system types.
> 
> ... I get an error: 22
> ("invalid argument") from the nc_enddef function.
> 
> What is the cause?

The netCDF error code for "Invalid Argument" is (-36), so the error 22 that
you are getting is from the operating system, defined in /usr/include/errno.h:

  #define       EINVAL  22      /* Invalid argument                     */
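
In case it helps to track this down, here is a minimal sketch of how a status
returned by nc_enddef() can be reported so that the library's own codes and
pass-through system errno values are easy to tell apart.  This is not taken
from your code; the file name and the small check() helper are just an
illustration:

#include <stdio.h>
#include <stdlib.h>     /* exit() */
#include <string.h>     /* strerror() */
#include <netcdf.h>

/* Report a status code returned by a netCDF call and stop on error. */
static void
check(const char *what, int status)
{
    if (status == NC_NOERR)
        return;
    if (status > 0)          /* errno value passed through from the OS */
        printf("%s: system error %d: %s\n", what, status, strerror(status));
    else                     /* netCDF's own code, e.g. NC_EINVAL is -36 */
        printf("%s: netCDF error %d: %s\n", what, status, nc_strerror(status));
    exit(1);
}

int
main(void)
{
    int ncid;
    /* 64-bit offset format is what allows files larger than 2 GiB */
    check("nc_create", nc_create("big.nc", NC_CLOBBER | NC_64BIT_OFFSET, &ncid));
    /* ... nc_def_dim() and nc_def_var() calls would go here ... */
    check("nc_enddef", nc_enddef(ncid));
    check("nc_close", nc_close(ncid));
    return 0;
}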

This may still be a netCDF library problem, but the cause may also lie in
your file system or operating system.  See
the answer to the FAQ "Why do I get an error message when I try to create a 
file larger than 2 GiB with the new library?":

  http://www.unidata.ucar.edu/netcdf/docs/faq.html#Large%20File%20Support12

Can you create large files with any other program on your system?  For example,
do you get an error from the operating system trying to write a 5GB file with a 
program that doesn't use the netCDF library, such as the following little C 
program?  If so, try compiling it with the -D_FILE_OFFSET_BITS=64 flag.  If that
still doesn't work, you'll need to configure your file system for large file
support.

--Russ

/* 
 * See if we can write a 5 GB file, to make sure the file system supports
 * large files.  Note: may need to compile this with the
 * -D_FILE_OFFSET_BITS=64 flag on 32-bit systems to get an 8-byte
 * off_t type.
 */
#include <stdlib.h>
#include <stdio.h>
#include <string.h>
#include <errno.h>

#define MILLION 1000000

int
main(void) {
    int i;
    float data[MILLION];        /* 4 MB buffer */
    size_t written;
    FILE *f;

    memset(data, 0, sizeof(data));
    f = fopen("bigfile", "wb");  /* binary mode matters on Windows */
    if(!f) {
        printf("error opening file: %s\n", strerror(errno));
        return errno;
    }
    /* 1250 writes of 4 MB each is 5 GB, past the 2 GiB and 4 GiB limits */
    for(i = 0; i < 1250; i++) {
        written = fwrite(data, sizeof(float), MILLION, f);
        if (written < MILLION) {
            printf("error writing, errno=%d, %s\n", errno, strerror(errno));
            return errno;
        }
    }
    if (fclose(f) != 0) {
        printf("error closing file: %s\n", strerror(errno));
        return errno;
    }
    return 0;
}
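
(On a Unix-like system the test program above might be compiled and run with
something like "cc -D_FILE_OFFSET_BITS=64 bigfile.c -o bigfile" and then
"./bigfile", in a directory with at least 5 GB free; the exact compiler
invocation here is just an illustration.)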


> -----Original Message-----
> From: Unidata netCDF Support [mailto:address@hidden]
> Sent: Friday, July 23, 2010 6:26 PM
> To: address@hidden
> Cc: address@hidden; address@hidden;
> address@hidden; address@hidden
> Subject: [netCDF #ZCE-849683]: clarification on large file support
> 
> > I did as you suggested me.
> >
> > But I get the following outcome:
> >
> > Size of Size_t = 8 ok
> > Size of Off_t = 4  !unbelievable
> >
> > How can that be?
> > We expect both quantities to be 8 bytes on a 64-bit platform, don't we?
> 
> I'm surprised that the default size of off_t is only 4, but maybe you need
> to set a C macro to get an 8-byte off_t.  Try compiling and running this C
> program:
> 
> #define _FILE_OFFSET_BITS 64
> #include <sys/types.h>
> #include <stdio.h>
> int main(void) {
>     printf("Size of off_t is %d bytes.\n", (int) sizeof(off_t));
>     printf("Size of size_t is %d bytes.\n", (int) sizeof(size_t));
>     return 0;
> }
> 
> If that shows sizeof(off_t) is now 8 bytes, then you can rebuild netCDF
> setting the CFLAGS environment variable, for example
> 
> CFLAGS='-D_FILE_OFFSET_BITS=64'
> 
> > Maybe some particular preprocessor symbol has to be defined before
> > compiling the netcdf lib?
> > I can find the following preprocessors in my solution:
> > _WINDOWS;_USRDLL;NETCDF_EXPORTS;VERSION=3.6.1-beta1;DLL_EXPORT;VISUAL_CPLUSPLUS;_FILE_OFFSET_BITS=64
> >
> > I removed the NETCDF_DLL preprocessor since I need to create a static
> > library rather than a dynamic one.
> >
> > Can you give me some more suggestions to get the library to work properly
> > with large file support?
> 
> I wish I knew more about the Windows compilation environment for 64-bit
> support.  We may get these problems solved in the next six months with our
> planned port to Windows that was recently announced on the netcdfgroup
> mailing list:
> 
> 
> http://www.unidata.ucar.edu/mailing_lists/archives/netcdfgroup/2010/msg00257.html
> 
> --Russ
> 
> > -----Original Message-----
> > From: Unidata netCDF Support [mailto:address@hidden]
> > Sent: Monday, July 19, 2010 6:15 PM
> > To: address@hidden
> > Cc: address@hidden; address@hidden;
> > address@hidden; address@hidden
> > Subject: [netCDF #ZCE-849683]: clarification on large file support
> >
> > Fabio,
> >
> > > Please, I need just one more clarification.
> > > I would like to use the netcdf library on windows 64 bit platform.
> > > In order to activate very large file support, is it sufficient to
> > > compile the library for a 64-bit environment and to activate the
> > > 64-bit file offset?  Doing so, does the size_t type automatically
> > > switch to a 64-bit unsigned integer, so that memory and thus data
> > > set allocation are not limited?  I am using your code with
> > > Visual Studio 2008, and I'm specifying a 64-bit target environment.
> >
> > With a 64-bit development environment, there is no need to "activate
> > the 64-bit file offset", and this depends on the off_t type, not size_t.
> >
> > All versions of netCDF since 3.6.0 include support for reading and
> > writing 64-bit offset netCDF files, whether compiled with a 32-bit or
> > 64-bit development environment, if the size of the C off_t type is at
> > least 8 bytes. The size of the off_t type for your 64-bit Windows
> > development environment is surely 8 bytes, but you can test this by
> > printing "sizeof(off_t)" in a small C program, and you might as well
> > also print the size of size_t to answer your other question:
> >
> > #include <sys/types.h>
> > #include <stdio.h>
> > int main(void) {
> >     printf("Size of off_t is %d bytes.\n", (int) sizeof(off_t));
> >     printf("Size of size_t is %d bytes.\n", (int) sizeof(size_t));
> >     return 0;
> > }
> >
> > If either of these is printed as a value less than 8, make sure you
> > use whatever C compiler flags are needed for your compiler to specify
> > a 64-bit development environment.
> >
> > Note that there are still some limits to dataset size using the netCDF
> > 64-bit offset format, as described in the Users Guide and in the FAQ
> > on large file support, in particular the answers to the three questions:
> >
> > Have all netCDF size limits been eliminated?
> > Why are variables still limited in size?
> > How can I write variables larger than 4 GiB?
> >
> > http://www.unidata.ucar.edu/netcdf/docs/faq.html#Large%20File%20Support10
> >
> > As indicated in the answer to the second question above, the reason
> > variable sizes are still limited has to do with a desire to make sure
> > that netCDF 64-bit offset files are still portable, even to 32-bit
> > platforms.
> >
> > --Russ
> >
> > >
> > >
> > > -----Original Message-----
> > > From: Unidata netCDF Support
> > > [mailto:address@hidden]
> > > Sent: Monday, July 12, 2010 7:28 PM
> > > To: address@hidden
> > > Cc: address@hidden; address@hidden;
> > > address@hidden
> > > Subject: [netCDF #ZCE-849683]: clarification on large file support
> > >
> > > Fabio,
> > >
> > > A little more clarification is needed to my last reply.  I said:
> > >
> > > Each variable in the file cannot exceed 4GB (not 2GB), in netCDF
> > > versions after 3.6.1, including the current netCDF 4.1.1. The actual
> > > maximum size of a variable on a 32-bit platform is (2^32 - 4) bytes.
> > > Part of the confusion is a documentation error here:
> > >
> > >
> > > http://www.unidata.ucar.edu/software/netcdf/docs/netcdf.html#Classic-Limitations
> > >
> > > which I just discovered hasn't been updated since the size limit on
> > > a single variable was changed from 2GB (2^31 - 4) to 4GB (2^32 - 4)
> > > in versions since netCDF 3.6.1.
> > >
> > > I was confused and the original documentation is correct.  The
> > > netCDF classic format limits all but the last variable to 2GB in
> > > size.  It is the 64-bit offset file format that permits all
> > > variables to be 4GB in size, and the last variable to be even
> > > larger.  The netCDF-4/HDF5 format variant has no 4GB limits on the
> > > size of any variable.  Sorry for the confusion!
> > >
> > > --Russ
> > >
> > >
> > >
> > > Russ Rew                                         UCAR Unidata Program
> > > address@hidden                      http://www.unidata.ucar.edu
> > >
> > >
> > >
> > > Ticket Details
> > > ===================
> > > Ticket ID: ZCE-849683
> > > Department: Support netCDF
> > > Priority: Normal
> > > Status: Closed
> > >
> > >
> > >
> > >
> >
> > Russ Rew                                         UCAR Unidata Program
> > address@hidden                      http://www.unidata.ucar.edu
> >
> >
> >
> > Ticket Details
> > ===================
> > Ticket ID: ZCE-849683
> > Department: Support netCDF
> > Priority: Normal
> > Status: Closed
> >
> >
> 
> Russ Rew                                         UCAR Unidata Program
> address@hidden                      http://www.unidata.ucar.edu
> 
> 
> 
> Ticket Details
> ===================
> Ticket ID: ZCE-849683
> Department: Support netCDF
> Priority: Normal
> Status: Closed
> 
> 

Russ Rew                                         UCAR Unidata Program
address@hidden                      http://www.unidata.ucar.edu



Ticket Details
===================
Ticket ID: ZCE-849683
Department: Support netCDF
Priority: Normal
Status: Closed