
Re: test case for netCDF/NFS write



>To: address@hidden
>From: Steve Carson <address@hidden>
>Subject: Re: 20020128: test case for netCDF/NFS write
>Organization: RAP
>Keywords: NFS performance test

Steve,

> I think I got it there. It's called "netCDF_NFS_example1.tar".
> There is also a file called "netCDF_NFS_example.tar", of zero
> size, created by my first attempt from a machine that did not
> have 'ncftp'. Regular ftp has a problem getting stuff out through
> the NCAR firewall.
> 
> The tar file is about 22MB. That's because it includes an
> example MM5 output file. When you run 'mm5_to_nc', you'll
> need about another 13MB of space for the output file.
> 
> Check the README file for details, and let me know if I forgot
> anything. And let me know if you'd like me to send an e-mail
> about this episode to 'support@unidata' for tracking purposes.

I ran your test in a local directory (/buddy/russ on buddy) and
NFS-mounted directories on two other machines in our machine room
(/scratch/russ on zero, /data/tmp on shemp).

  $ df -k /buddy/russ /scratch/russ /data/tmp
  Filesystem            kbytes    used     avail capacity  Mounted on
  /dev/dsk/c1t1d0s7    13838253  1905818 11794053   14%    /buddy
  zero:/export/scratch  5546862  5414219   77175    99%    /scratch
  shemp:/export/data   17658298 13777076 3704640    79%    /data

The machines buddy, zero, and shemp are all Solaris platforms;
running "uname -a" on each machine gives:

  SunOS buddy.unidata.ucar.edu 5.8 Generic_108528-12 sun4u sparc SUNW,Sun-Blade-1000
  SunOS zero.unidata.ucar.edu 5.6 Generic_105181-30 sun4u sparc SUNW,Ultra-2
  SunOS shemp.unidata.ucar.edu 5.7 Generic_106542-18 i86pc i386 i86pc

Here are the results of running your test script 3 times for each
test configuration:

  nfstest$ ./test.mm5_to_nc.sh
  Reading from :'./'
  Writing to   :'./'
  2.15u 1.32s 0:10.56 32.8%
  nfstest$ ./test.mm5_to_nc.sh 
  Reading from :'./'
  Writing to   :'./'
  1.97u 0.81s 0:09.68 28.7%
  nfstest$ ./test.mm5_to_nc.sh 
  Reading from :'./'
  Writing to   :'./'
  2.09u 0.75s 0:10.24 27.7%
  nfstest$ ./test.mm5_to_nc.sh /scratch/russ
  Reading from :'./'
  Writing to   :'/scratch/russ'
  2.12u 0.66s 0:13.39 20.7%
  nfstest$ ./test.mm5_to_nc.sh /scratch/russ
  Reading from :'./'
  Writing to   :'/scratch/russ'
  2.09u 0.53s 0:12.04 21.7%
  nfstest$ ./test.mm5_to_nc.sh /scratch/russ
  Reading from :'./'
  Writing to   :'/scratch/russ'
  2.18u 0.53s 0:12.04 22.5%
  nfstest$ cp MMOUT_DOMAIN_4km.20020105.060000 /scratch/russ
  nfstest$ ./test.mm5_to_nc.sh . /scratch/russ
  Reading from :'/scratch/russ'
  Writing to   :'.'
  2.13u 0.79s 0:07.68 38.0%
  nfstest$ ./test.mm5_to_nc.sh . /scratch/russ
  Reading from :'/scratch/russ'
  Writing to   :'.'
  2.06u 0.72s 0:10.91 25.4%
  nfstest$ ./test.mm5_to_nc.sh . /scratch/russ
  Reading from :'/scratch/russ'
  Writing to   :'.'
  1.96u 0.68s 0:09.14 28.8%
  nfstest$ ./test.mm5_to_nc.sh /data/tmp /scratch/russ
  Reading from :'/scratch/russ'
  Writing to   :'/data/tmp'
  ERROR - umalloc
  Cannot perform malloc, size = 1905120
  Program will now segv for debugging.
  1.16u 0.19s 0:02.78 48.5%
  nfstest$ ./test.mm5_to_nc.sh /data/tmp /scratch/russ
  Reading from :'/scratch/russ'
  Writing to   :'/data/tmp'
  2.14u 0.70s 0:20.03 14.1%
  nfstest$ ./test.mm5_to_nc.sh /data/tmp /scratch/russ
  Reading from :'/scratch/russ'
  Writing to   :'/data/tmp'
  2.09u 0.60s 0:14.70 18.2%
  nfstest$ ./test.mm5_to_nc.sh /data/tmp /scratch/russ
  Reading from :'/scratch/russ'
  Writing to   :'/data/tmp'
  ERROR - umalloc
  Cannot perform malloc, size = 1905120
  Program will now segv for debugging.
  0.82u 0.46s 0:07.76 16.4%
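
In case you want to repeat these runs elsewhere without retyping each
combination, here's a rough sh sketch of how they could be automated.
I'm assuming the script takes the write directory as the first
argument and the read directory as the second (that's how I invoked
it above), and that passing '.' explicitly is equivalent to the
defaults; the directory names are of course specific to our machines:

  #!/bin/sh
  # Run test.mm5_to_nc.sh three times for each write-dir/read-dir
  # combination used above; the script prints its own timings.
  for dirs in ". ." "/scratch/russ ." ". /scratch/russ" \
              "/data/tmp /scratch/russ"
  do
      set -- $dirs          # split "write read" pair into $1 and $2
      writedir=$1
      readdir=$2
      for run in 1 2 3
      do
          echo "== write=$writedir read=$readdir run=$run =="
          ./test.mm5_to_nc.sh "$writedir" "$readdir"
      done
  done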

The first time I got the "ERROR - umalloc" message I was a little
surprised, since buddy has a Gbyte of memory.  I deleted some big
files in /tmp, which is mounted in swap space, to see if that would
clear up the problem, and maybe that fixed it, but then the same
message occurred again a couple of runs later.  I tend to have a lot
of applications running in a lot of windows (emacs, netscape,
FrameMaker, Java apps, exmh, ...), but I don't know whether that
explains it.
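
Next time the umalloc error shows up, a quick thing I can check is
how much swap is actually free at that moment, since /tmp on these
Solaris machines is swap-backed, so big files there draw on the same
virtual memory pool that malloc uses.  Nothing here is specific to
your test, just the standard Solaris commands:

  swap -s      # summary of allocated/reserved/available swap
  df -k /tmp   # tmpfs usage; /tmp shares space with swap
  vmstat 1 2   # second sample shows current free swap and memory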

Also, I should tell you that the machines zero and shemp are pretty
busy.  Zero, especially, is our main file server and has lots of disk
activity.  The shemp system is an experimental data server on which
we ingest lots of real-time data from radars, models, and bulletins.

Given how busy zero and shemp are, the above timings look OK to me
and don't seem to indicate any great slowdown from using NFS.  I
asked Mike Schmidt whether any special tuning had been done for our
NFS setup, and he said nothing much, just keeping patches up to date.
But feel free to contact him with specific configuration questions.
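
If you want to look at the client side yourself, the NFS mount
parameters and RPC statistics on these machines are visible with the
standard Solaris nfsstat command; for example:

  nfsstat -m   # per-mount options (rsize/wsize, protocol, timeouts)
  nfsstat -c   # client-side RPC/NFS call counts and retransmissions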

--Russ