
Re: 980106: Support for 64bit longs in netCDF 3.x?



>To: address@hidden (Russ Rew)
>From: address@hidden (Ethan Alpert)
>Subject: Re: 980106: Support for 64bit longs in netCDF 3.x?
>Organization: SCD
>Keywords: 199801061814.LAA10344

Ethan,

> > No, sorry, netCDF has no support for external storage of 64-bit longs,
> 
> bummer. I'll just have to put an error message in when people try to
> write 64bit longs from NCL.

I think it would be better to allow them to write longs, but produce an
error message only if you get an error return from the
nc_put_var_long() call because it couldn't convert the provided long(s)
to 32-bit ints.  That way, things will work fine for longs just used to
hold 32-bit integers, which may be the case for programs intended to be
portable to multiple platforms.
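
For example, something like the sketch below should do it -- the
function name and error messages are just mine, and it assumes the
variable was defined with type NC_INT:

    #include <stdio.h>
    #include <netcdf.h>

    /* Write NCL long values and complain only if the library reports
     * that they wouldn't fit in the file's 32-bit ints (sketch only). */
    int write_longs(int ncid, int varid, const long *vals)
    {
        int status = nc_put_var_long(ncid, varid, vals);
        if (status == NC_ERANGE)
            fprintf(stderr, "value(s) too big for a 32-bit netCDF int\n");
        else if (status != NC_NOERR)
            fprintf(stderr, "netCDF error: %s\n", nc_strerror(status));
        return status;
    }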

> > Since some C/C++ compilers don't yet support 64-bit ints, writing 64-bit
> > int values into a file would be a portability problem, because the value
> > couldn't be read into a scalar variable on such a platform.
> 
> Well we've got an SGI challenge whose compilers support 64-bit ints,
> and this is the reason I'm asking. If you declare a long on this
> machine you get 64 bits.

Yes, we have some SGI machines with 64-bit ints too.  I meant to say
that netCDF has to use a lowest common denominator approach to data
types, since we want netCDF files written from an SGI to be readable on
other platforms that don't have 64-bit ints.  Until the majority of
platforms on which netCDF data is used can deal with 64-bit ints, it
would be better for users if netCDF didn't support them.

> > For most purposes, a double is a better type than a 64-bit long for
> > numeric values, since it can represent a much wider range with a very
> > high precision.  I'm curious what application you have for 64-bit ints
> > as data values in a netCDF file (digital signatures? bit sets?).
> 
> I've got users who want to use 64 bit integers simply because the machine 
> architecture and NCL (NCAR Command Language) handle it. ...

Great, in that case using the available nc_put_var_long() calls will
probably work fine, since they're only using the type because it's
there, not because they need more than 52 bits of precision.

>                                                         64 bit integers are 
> probably best for representing time as an offset from a date. You can have a 
> resolution of nano-seconds for over 10000 years.

Yes, that's the example that has come up most often when I look back at
the time convention discussions at
<http://www.unidata.ucar.edu/packages/netcdf/time>.  Currently, people
who need that much precision typically use one integer for the Julian
day and another for the nanosecond within that day, or some such
convention.  Double precision gives less than a year's worth of
nanoseconds.
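
Something like this sketch would set up that convention -- the
dimension and variable names here are only illustrative, not part of
any standard:

    #include <netcdf.h>

    /* Define the two-integer time convention: one NC_INT variable for
     * the Julian day and one for the nanosecond within that day. */
    int define_time_vars(int ncid, int time_dimid,
                         int *day_varid, int *nsec_varid)
    {
        int status;

        status = nc_def_var(ncid, "julian_day", NC_INT, 1, &time_dimid,
                            day_varid);
        if (status != NC_NOERR)
            return status;
        return nc_def_var(ncid, "nanosecond_of_day", NC_INT, 1,
                          &time_dimid, nsec_varid);
    }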

> I wouldn't say doubles are as good as 64 bit longs though.
> There are many integers that are representable with 64 bits which
> become truncated with the 52 bit mantissa of double precision numbers.

I actually didn't say doubles were as good as 64-bit ints, just that
they "can represent a much wider range with a very high precision".  But
more precision than the 15 or 16 significant decimal digits of an IEEE
double seems to me to be something scientists would rarely need.  I
guess time is the only thing we can measure with that much accuracy ...
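
For the record, here's a quick way to see that truncation -- just a
sketch, and it assumes a compiler whose long long is 64 bits, which not
every C compiler provides yet:

    #include <stdio.h>

    int main(void)
    {
        /* 2^53 + 1 is the first integer an IEEE double can't hold exactly. */
        long long big = (1LL << 53) + 1;     /* 9007199254740993 */
        double d = (double) big;             /* rounds to 9007199254740992 */

        printf("%lld -> %.0f (%s)\n", big, d,
               (long long) d == big ? "exact" : "not exact");
        return 0;
    }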

--Russ