
Re: 20020218: netCDF-3.5.1-beta on NEC SX-5



>To: address@hidden
>cc: address@hidden
>From: Len Makin <address@hidden>
>Subject: netCDF-3.5.1-beta on NEC SX-5
>Organization: CSIRO
>Keywords: 200202180541.g1I5f6x01053 netCDF-3.5.1-beta NEC SX-5

Hi Len,

>       This is, I think, a request for advice and/or reassurance.
> Building from 3.5.1-beta source on NEC SX-5,
> everything compiles with lots of warnings, as usual for 64-bit
> machines. gmake check goes well until the following:
>
> ...................
> Making `test' in directory /stmp/cslkm/netcdf-3.5.1-beta/src/ncgen
>  
> gmake[2]: Entering directory `/stmp/cslkm/netcdf-3.5.1-beta/src/ncgen'
> cc -o ncgen -h2 -hmath vector  main.o load.o ncgentab.o escapes.o getfill.o init.o genlib.o ../libsrc/libnetcdf.a
> ./ncgen -b -o c0.nc c0.cdl
> ../ncdump/ncdump -n c1 c0.nc > c1.cdl
> *** ncgen -b test successful ***
> ./ncgen -c -o ctest0.nc c0.cdl > ctest.c && \
> cc -o ctest -h2 -hmath vector  -I../libsrc -I.   ctest.c ../libsrc/libnetcdf.a && \
> ./ctest && \
> ../ncdump/ncdump -n c1 ctest0.nc > ctest1.cdl
> "ctest.c", line 27: warning: improper integer precision : op "="
> 21c21
> <               i:d = -1.e+308, 0., 1.e+308 ;
> ---
> >               i:d = -9.99999999999999e+307, 0., 9.99999999999999e+307 ;
> 101c101
> <               :Gd = -1.e+308, 0., 1.e+308 ;
> ---
> >               :Gd = -9.99999999999999e+307, 0., 9.99999999999999e+307 ;
> 128c128
> <  dr = -1e+308, 1e+308 ;
> ---
> >  dr = -9.99999999999999e+307, 9.99999999999999e+307 ;
> 140c140
> <  d1 = -1e+308 ;
> ---
> >  d1 = -9.99999999999999e+307 ;
> 152c152
> <  d2 = -1e+308, 1e+308 ;
> ---
> >  d2 = -9.99999999999999e+307, 9.99999999999999e+307 ;
> 164c164
> <  d3 = -1e+308, 0, 1e+308 ;
> ---
> >  d3 = -9.99999999999999e+307, 0, 9.99999999999999e+307 ;
> *** ncgen -c test failed ***
> ......
> Question is "Is this difference important?".

I don't think so, though the tests were designed to use numbers that
should translate exactly and portably between external and internal
floating-point representations on all the platforms we knew about.
This looks like an error in the least significant bit of a
double-precision representation, so it's probably ignorable.  It may
also just be a minor problem in the way the NEC runtime I/O libraries
convert double-precision numbers to text strings for printing.
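
For what it's worth, here's a small stand-alone C sketch (not from the
netCDF test suite) that shows how tiny a one-ULP difference near
1e+308 really is.  It uses nextafter() from math.h to get the adjacent
representable double; compile with something like "cc ulp.c -lm":

    /* ulp.c - size of a one-ULP step near 1e+308 */
    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        double expected = 1.0e+308;
        /* nextafter() returns the adjacent representable double,
         * i.e. the value exactly one ULP below 'expected'. */
        double one_ulp_below = nextafter(expected, 0.0);

        printf("expected      : %.17g\n", expected);
        printf("one ULP below : %.17g\n", one_ulp_below);
        printf("relative diff : %g\n",
               (expected - one_ulp_below) / expected);
        return 0;
    }

On IEEE 754 hardware the relative difference comes out near 2e-16,
far smaller than anything the CDL test values are meant to
distinguish, so either explanation would leave the stored data
effectively unchanged.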

Here's another reply to a similar question that may be relevant:

  http://www.unidata.ucar.edu/glimpse/netcdf/3636

--Russ

_____________________________________________________________________

Russ Rew                                         UCAR Unidata Program
address@hidden                     http://www.unidata.ucar.edu