
Re: netCDF stuff



> Organization: National Energy Research Supercomputer Center (NERSC)
> Keywords: 199402081848.AA23753

Hi Chris,

> Hello, I have been working with NetCDF 2.3.2 on Crays, HP-9000's, Suns &
> SGI's.  I haven't pulled over the tar file from your server since August,
> but I don't think it's been updated recently.  There are some things that I
> would like to mention:
> 
> 1.  When re-entering define mode, a scratch file is created of the form
> 'aaaasomething'.  This file is created in the current working directory.
> If you do not have write access to the current working directory, the
> scratch file is not created and the routine returns an error.  I would
> like to suggest that any scratch file be created in the same directory
> as the re-defined file, by grabbing some of the pathname to the file.

The scratch file is supposed to be created in the same directory as the
re-defined file, and that's what I think the function NCtempname in
netcdf/libsrc/file.c does.  It uses SEP (defined as "/" for UNIX systems) to
search backwards in the filename until it gets the directory and uses that
for the temporary file name.  This seems to work fine on our Sun system; I
just tested it.  The temporary file "aaaa2134" was created in the directory
of the open netCDF file, not the current working directory.  If you could
provide us with an example that demonstrates the behavior you are reporting,
where a call to ncredef() tries to create a scratch file in the current
working directory rather than the directory of the open netCDF file, we
would be able to diagnose and fix the bug.  Perhaps the "SEP" macro is not
getting defined on one of the systems for which you built netCDF (though I
can't see how this could happen from looking at the code in libsrc/file.c).
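
For reference, the intended logic is roughly the following (a simplified
sketch in the spirit of NCtempname, not the actual libsrc/file.c code; the
function name, buffer handling, and "aaaa" prefix are illustrative):

    /* Sketch only -- derive the scratch-file directory from the netCDF
     * file's own pathname rather than the current working directory.
     * SEP is assumed to be '/' on UNIX systems. */
    #include <stdio.h>
    #include <string.h>

    #define SEP '/'

    static void
    scratch_name(const char *ncpath, char *out, size_t outlen)
    {
        const char *slash = strrchr(ncpath, SEP);   /* last separator, if any */
        size_t dirlen = (slash != NULL) ? (size_t)(slash - ncpath) + 1 : 0;

        if (dirlen + sizeof("aaaaXXXX") > outlen) {
            out[0] = '\0';          /* name too long: return a null string */
            return;
        }
        memcpy(out, ncpath, dirlen);          /* copy the "dir/" part, if any */
        strcpy(out + dirlen, "aaaaXXXX");     /* unique suffix filled in later */
    }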

> 2.  When re-entering define mode, the routine used to generate the
> scratch file name is 'NCtempname'.  In this routine, there is a test
> to see if the length of the generated name is less than FILENAME_MAX.
> On HPUX A.09.01 on our HP9000-755, FILENAME_MAX is set to 14, which seems
> to me to be way too small (we've sent a message to HP regarding this).
> The net result is that NCtempname cannot create the scratch file name if
> your current working directory pathname is > 14 characters.  If this is
> the case, NCtempname returns a null string, the temporary file is not
> created, and 'ncredef' returns an error.
> 
> Perhaps NCtempname should only apply the test using FILENAME_MAX to the
> scratch file name, not the whole pathname to the scratch file (which is
> what I think it's doing).

This is apparently a bug in HPUX that we didn't encounter in our testing of
netCDF on an HP-9000/7xx system under HPUX 9.0.  Was it introduced in a
later upgrade of the operating system?  The "nctest" program, which tests
ncredef() fairly extensively, was run successfully on the HP using pathnames
longer than 14 characters (e.g.,
"/u1/unidata/netcdf-2.3.2/src/nctest/test.nc"), so I'm pretty sure things
worked OK with that version, though I see that /usr/include/stdio.h on the
HPUX system we test on still has FILENAME_MAX defined as 14.  We can't
change the behavior of the netCDF library on other systems to work around an
HPUX bug, but perhaps we should provide a special case in libsrc/local_nc.h
that redefines FILENAME_MAX as 255 for HPUX.  I'll look at that when we test
the next release on HPUX.
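
Something along these lines is what I have in mind for local_nc.h (untested,
and the exact HPUX predefined macro to key on would need to be checked):

    /* Hypothetical workaround, not in the current distribution: if the
     * system <stdio.h> defines FILENAME_MAX as small as 14 (as HPUX 9
     * does), override it with a more reasonable limit when building
     * netCDF. */
    #include <stdio.h>

    #if defined(__hpux) && defined(FILENAME_MAX) && FILENAME_MAX < 255
    #  undef  FILENAME_MAX
    #  define FILENAME_MAX  255
    #endif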

> 3.  On the Cray YMP-C90 here at NERSC, one of our NetCDF users did some
> profiling of his code.  He found that the time spent in 'xdr_stdio_create',
> 'xdr_double', and 'fwrite' exceeded the time spent in the computational
> portion of his code.  After looking at the NetCDF source, it seems to me
> that fwrite must be called from the xdr routines.  In the file 'array.c'
> it seems that things like 'xdr_double' are being called in a loop,
> translating one element at a time.  So, if each element of an array is
> being translated then written one at a time, a lot of overhead is incurred
> in function calls.
> 
> I have no documentation on the xdr routines other than the man page, which
> doesn't describe the routines or their parameters.  I do see a routine
> named 'xdr_array'.  Would it be faster? 

No, xdr_array just calls xdr_double once for each element in a loop, just as
we do.  We have requested that Cray provide a better
optimized version of their XDR library, but so far with no result.  There
has been discussion on the netcdfgroup mailing list of how to vectorize the
Cray XDR library, but as far as I know no one has tackled the problem.  We
can't work on it ourselves, because Unidata's mission does not include getting
our software to run fast on Crays.  If you're interested, you can look through
the earlier mailing-list postings on this subject.
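
To illustrate what the profile is measuring, the translation loop looks
essentially like this (a simplified sketch, not the actual array.c code; the
function and variable names are illustrative):

    /* Sketch: one xdr_double call per array element, so every element
     * costs at least one function call and, on output, eventually a
     * buffered fwrite through the stdio XDR stream. */
    #include <rpc/types.h>
    #include <rpc/xdr.h>

    static bool_t
    put_doubles(XDR *xdrs, double *values, unsigned count)
    {
        unsigned i;

        for (i = 0; i < count; i++) {
            if (!xdr_double(xdrs, &values[i]))
                return FALSE;           /* translation or write failed */
        }
        return TRUE;
    }

A vectorized XDR implementation would convert a whole block of values per
call instead of one at a time, which is what the mailing-list discussion was
about.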

> 4.  Otherwise, I have had a great deal of success with NetCDF, and am
> trying to evangelize its use here at LLNL.  The atmospheric & climate
> modeling people are already using it, I am trying to convince people in
> Magnetic Fusion and Laser research to use it as well.
> 
> I am working with some people here at LLNL on a utility that reads DRS,
> NetCDF, GrADS, and HDF files.  It is able to pull out hypercubes of
> variables and send them over the network to packages such as AVS, IRIS
> Explorer, and IDL.  From looking at your gopher server, I see there is info
> on public-domain packages that use NetCDF (for example, Envision, which has
> some similarity to what we're doing).  We would like to include a
> reference to our utility when it is made available.  Who would I contact
> about adding to your gopher info?

Please send me information on it and I will include the description in our
list.  Thanks for the feedback.  (And say hello to Kirby Fong from me.)

__________________________________________________________________________
                      
Russ Rew                                              UCAR Unidata Program
address@hidden                                        P.O. Box 3000
(303)497-8645                                 Boulder, Colorado 80307-3000