I think I understand your proposal.
It seems to require that
1. All generic, value-manipulating netCDF programs be coded so
as to handle what amounts to a new, primitive datatype; and
2. A convention be adopted to indicate when a "scalar" actually
   comprises multiple components.
Though I can see the advantages of this scheme, it is not clear to me
that they outweigh the cost of requirement 1 above, or that the scheme
is even necessary.
To me, having to code all value-manipulating netCDF programs so that
they can handle multi-component scalars is undesirable for the following
reasons:
1. In general, such programs would be slower than their simple-scalar
   counterparts; and
2. The coding would be more difficult, as neither Fortran nor C
   supports such values natively (Fortran does support complex
   numbers, but their arithmetic differs from that of the proposed
   variables).
The proposal also seems to run counter to the apparent industrial trend
of getting away from multi-component scalars and increasing precision
instead. In this vein, I note that most chip manufacturers are
adopting the IEEE floating-point standard.
I also believe (at least currently) that such multi-component scalars
are not necessary. Getting back to a concrete example, what would be
wrong (in the sense of being disadvantageous or inconvenient) with the
following definition for time in the dataset you gave:
    dimensions:
        time = UNLIMITED;
    variables:
        double time(time);
        time:units = "milliseconds @ (1992-2-12 07:58:27 -700)";
The 53 bits of precision that one is guaranteed in a netCDF double is
sufficient for approximately 300,000 years of such observations.
Furthermore, the above allows generic, value-manipulating netCDF
programs to be coded under the assumption of simple scalar values.
Because such values are directly supported by the programming
languages, the programming process is correspondingly simpler.
Steve Emmerson <firstname.lastname@example.org>