
[netCDF #CFH-512758]: Rough Order of Magnitude Numbers



Barry,

> Since the original question was about a 100 x 100 16-bit integer
> array, can I linearly extrapolate the 0.02 seconds that you provided
> to get 0.13 seconds for a 256 x 256 array and 2.1 seconds for a
> 1024 x 1024 array? Or is the extrapolation non-linear, and what would
> the correct numbers be?

The time goes up linearly with the number of values to be written.  The
amount of data you're asking about is too small to extrapolate from
very well, but I did a few more quick timings.
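
For reference, the purely linear scaling you describe, assuming the
time is proportional only to the number of values written, would be:

    (256 x 256) / (100 x 100) x 0.02 sec   =  about 0.13 sec
    (1024 x 1024) / (100 x 100) x 0.02 sec =  about 2.1 sec

As the timings below show, the measured numbers come in well under
those extrapolations, because much of the 0.02 sec is fixed overhead
rather than time spent writing values.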

You originally asked for only order-of-magnitude estimates, and that's
what I provided, based on timing a simple shell script that uses the
nccopy utility on a netCDF file containing a 100 x 100 16-bit integer
array and 20 small attributes.  The nccopy utility just reads the
netCDF file and writes a new copy, using only the netCDF library for
all I/O.  So the 0.02 sec turned out to be within an order of
magnitude, but it was an overestimate, since it includes some overhead
for starting the shell script, invoking the nccopy program many times,
and reading the input netCDF file before copying it.  The 0.02 seconds
came from dividing the total time for the shell script by the number
of times nccopy was invoked.
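
For concreteness, a minimal version of that kind of timing loop looks
roughly like the following; the file names and repeat count are just
placeholders, not the exact script I ran:

    #!/bin/sh
    # Time N copies of a small netCDF file and report the average per copy.
    # small.nc and copy.nc are placeholder names for the test files.
    N=100
    start=$(date +%s.%N)              # %N (nanoseconds) needs GNU date
    i=0
    while [ $i -lt $N ]; do
        nccopy small.nc copy.nc       # read the file, write a new copy
        i=$((i + 1))
    done
    end=$(date +%s.%N)
    echo "average per copy: $(echo "($end - $start) / $N" | bc -l) sec"

Dividing the total elapsed time by N amortizes the one-time cost of
starting the script, but each iteration still pays for process startup
and for reading the input file, which is why the result overestimates
the cost of the write alone.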

Doing the same thing for a 256 x 256 array shows that each copy takes
only about 0.03 sec, which is probably more accurate than the 0.02 sec
figure for the 100 x 100 case.  For a 1024 x 1024 array of shorts,
each write takes about 0.11 sec.

That's still a bit of an overestimate, but should give you a more
accurate idea of the real cost of writing such data to files.

--Russ

Russ Rew                                         UCAR Unidata Program
address@hidden                      http://www.unidata.ucar.edu



Ticket Details
===================
Ticket ID: CFH-512758
Department: Support netCDF
Priority: Normal
Status: Closed