
Re: which netCDF on T3E?



> I followed with interest Olaf Heudecker's question May 27 to netcdfgroup
>
>    - who has experience with netCDF 3.3 on cray t3e ?
>
> and I noted you are working on a Cray T3E to solve this problem.
>
> I am wondering if you are working with the parallel version of netCDF
> (the one built on the Cray FFIO "Global I/O" library, provided by CRI's
> Steve Luzmoor), or are you working with the single-threaded version?
>
> Thanks.
>
> --Dick Valent

On Cray MPP systems (the t3d, like antero, and the t3e),
there is a problem with the Fortran-calls-C bridge we
use to build the Fortran interface in netcdf-3.
In netcdf-3, we changed the sort of bridge we use
from a little home-grown m4 system to a more general
and widely used system called cfortran.h.
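
To give a flavor of what that bridge involves, here is a sketch of
how a cfortran.h wrapper looks. The Fortran name below is made up
for illustration; it is not the actual jacket in our Fortran library:

/* Sketch only: exposing a C function to Fortran with cfortran.h.
 * cfortran.h picks the calling convention per compiler, and that
 * selection is where our trouble on the Cray MPPs lies. */
#include "cfortran.h"
#include "netcdf.h"

/* Make the C function nc_close(int ncid) callable from Fortran as
 * NCCLOS(ncid).  Macro args: return type, C function, Fortran name
 * in upper and lower case, then the argument types. */
FCALLSCFUN1(INT, nc_close, NCCLOS, ncclos, INT)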

The C interface is working fine on these systems.
We are working on a fix for the Fortran-to-C problem, and
it should be out soon (like early next week).

On non-Cray systems, i/o goes through the
POSIX open(), read(), write() ... calls.
On Cray systems, the netcdf-3 library makes its i/o calls via
an FFIO interface. (Of course, it is possible to set things up
to use POSIX i/o on the Crays as well.)
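
Just to be concrete about the non-Cray path, something along these
lines happens under the hood (a bare-bones sketch, not the actual
posixio.c code):

/* Bare-bones sketch of reading a file region through the POSIX
 * calls; illustrative only, not the library's posixio.c. */
#include <sys/types.h>
#include <fcntl.h>
#include <unistd.h>

static ssize_t
read_region(const char *path, off_t offset, void *buf, size_t extent)
{
        ssize_t nread = -1;
        int fd = open(path, O_RDONLY);

        if (fd == -1)
                return -1;
        if (lseek(fd, offset, SEEK_SET) == offset)
                nread = read(fd, buf, extent);
        (void) close(fd);
        return nread;
}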

My understanding is that, given use of the FFIO interface,
the choice between Luzmoor's parallel i/o system and the
"usual" Cray FFIO is made at link time. We have _not_ linked
or run any of the netcdf tests using anything but the default
libraries on the t3d or t3e.

Unlike netcdf-2, netcdf-3 has a distinct internal i/o interface.
You can see what the interface looks like in
~davis/share/netcdf-3.3/src/libsrc/ncio.h on meeker.
This isolates the i/o and makes it much easier to use
different i/o implementations. (posixio.c and ffio.c are
two implementations provided.) The ncio interface is designed
with parallelism in mind. A 'get' operation translates
a file 'region' (an extent at a given file offset) into a
memory (void *) region which is read or modified. The region must
then be 'released' back to the system upon completion. This allows
for thread or process locking behind the interface as appropriate.
It also allows for asynchronous operations behind the interface.
We've had success using a similar interface and an mmap() implementation
in another product.
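
Roughly, the get/release pair looks like this (a from-memory sketch,
not the actual declarations; see ncio.h for the real thing):

/* Sketch of the ncio-style get/release idea. */
#include <sys/types.h>   /* off_t */
#include <stddef.h>      /* size_t */

typedef struct ncio ncio;

struct ncio {
        /* Map the file region [offset, offset + extent) into memory
         * and hand back a pointer through *vpp.  An implementation
         * may read() into an internal buffer, mmap() the file, or
         * take a thread or process lock on the region. */
        int (*get)(ncio *nciop, off_t offset, size_t extent,
                   int rflags, void **vpp);

        /* Release the region when done.  If it was modified, the
         * implementation writes it out (possibly asynchronously)
         * and drops any lock taken by get(). */
        int (*rel)(ncio *nciop, off_t offset, int rflags);

        void *pvt;       /* per-implementation state: fd, buffers, ... */
};

A caller does a get(), touches the bytes, then a rel(); posixio.c and
ffio.c just provide different implementations behind the same pair.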

I hope this answers your question.
This is probably more information than you really wanted, so
I'll stop here :-).

-glenn