
Re: 20050120: A question about the large files output associated with netcdf



Unidata Support <address@hidden> writes:

> ------- Forwarded Message
>
>>To: address@hidden
>>From: <address@hidden>
>>Subject: A question about the large files output associated with netcdf
>>Organization: The Hong Kong Polytechnic University
>>Keywords: 200501201330.j0KDUvv2008285 netCDF 3.6.0 large file
>
>
>
> Dear Sir,
>
>      I am sorry to bother you with this mail. I am using 
> Models-3/CMAQ, which uses netCDF for its data files, in my 
> research. Since some of the model's output files are over 
> 2 GB, I downloaded the new version of netCDF (netcdf-3.6.0) 
> to build a new "libnetcdf.a" for the model. The UNIX system 
> I am using is SunOS 5.8 Generic_108528-21 sun4u sparc 
> SUNW,Sun-Fire.
>      Unfortunately, although "configure", "make test", 
> "make extra_test", and "make install" all ran normally, 
> after the new "libnetcdf.a" was linked into the model, the 
> same error as before still occurred as soon as a model 
> output file exceeded 2 GB:
>   
>      >>> WARNING in subroutine WRVARS <<<
>      Error writing variable NO2              to file 
> CTM_CONC_1
>      netCDF error number  -31
>
>
>  IOAPI file ID  15
>  netCDF ID      20
>  variable       2
>  dims array     1 1 1 41 0
>  delts array    112 72 21 1 0
>
>      Error writing NO2              to CTM_CONC_1       for  
> 2001066:160000
>
>      *** ERROR ABORT in subroutine WR_CONC
>      Could not write NO2              to CTM_CONC_1
>      Date and time 16:00:00 March 7, 2001   (2001066:160000)
> 46058.0u 171.0s 12:51:48 99% 0+0k 0+0io 0pf+0w
> exit
>    
>      Later, I tried adding "-xarch=v9" to CFLAGS, FFLAGS, 
> CXXFLAGS, and so on in the "configure" file, as described 
> for SunOS in the platform-specific notes at 
> http://my.unidata.ucar.edu/content/software/netcdf/docs/netcdf-install/Platform-Specific-Notes.html#Platform-Specific-Notes
> but it still didn't work. The output from "configure" 
> related to the large-file setup is as follows:
>
> checking for special C compiler options needed for large files... no
> checking for _FILE_OFFSET_BITS value needed for large files... 64
> checking for _LARGE_FILES value needed for large files... no
>
> Well, in the "config.log" file, there are some related cache 
> variables and definitions in "confdefs.h":
>
> ## ---------------- ##
> ## Cache variables. ##
> ## ---------------- ##
> ...................
> ac_cv_sys_file_offset_bits=64
> ac_cv_sys_large_files=no
> ac_cv_sys_largefile_CC=no
> ......................
>
> ## ----------- ##
> ## confdefs.h. ##
> ## ----------- ##
> ......................
> #define _FILE_OFFSET_BITS 64
> ....................
>     
>      It seems that the special C compiler options needed for 
> large files are not taking effect, even though I have added 
> "-xarch=v9" to all the flags in the "configure" file. I 
> wonder whether any further changes to the compiler options 
> are needed beyond those in the "configure" file. If so, 
> would you please give me some detailed suggestions on how 
> to solve the problem? I sincerely apologize if this mail 
> brings you any trouble. 
>  
> BTW: Attached is the output file from "make extra_test", 
> which is mentioned on your website. I hope it is useful for 
> solving the problem.
>
>
> Best regards,
>
> Cui Hong
>  
>
>
>
> -------------------- 
> Cui Hong 
> Department of Civil and Structural Engineering 
> The Hong Kong Polytechnic University 
> Hung Hom, Kowloon, Hong Kong 
> Fax: 852-2334 6389 
> Tel: 852-2766 4812 
> E-mail: address@hidden 
>
> [Attachment: extra_test.log -- base64 content omitted]

If "make extra_test" succeeded, then netCDF itself is successfully
writing files larger than 2 GiB.

In your code, are you using the NC_64BIT_OFFSET flag on nc_create?
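
Just to illustrate what I mean, here is a minimal sketch using the C
interface (the Fortran-77 equivalent is passing NF_64BIT_OFFSET to
NF_CREATE); the file name "big.nc" is only a placeholder:

    #include <stdio.h>
    #include <netcdf.h>

    int
    main(void)
    {
        int ncid, status;

        /* The default creation mode produces a classic-format file,
         * which is subject to the classic-format size limits.
         * Adding NC_64BIT_OFFSET (new in netCDF 3.6.0) requests the
         * 64-bit offset format, which lets files grow well past
         * 2 GiB. */
        status = nc_create("big.nc", NC_CLOBBER | NC_64BIT_OFFSET,
                           &ncid);
        if (status != NC_NOERR) {
            fprintf(stderr, "nc_create: %s\n", nc_strerror(status));
            return 1;
        }

        /* ... define dimensions and variables, write data ... */

        status = nc_close(ncid);
        if (status != NC_NOERR) {
            fprintf(stderr, "nc_close: %s\n", nc_strerror(status));
            return 1;
        }
        return 0;
    }

If the file is actually created inside the I/O API library rather than
in your own code, then that is where the flag would need to be passed.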

Ed