
19990912: question about dcgrib



>From: weather <address@hidden>
>Organization: .
>Keywords: 199909121623.KAA00234

>I was timing some commands on our Ultrasparc (333MHz, 2MB cache) and
>our Intel machine (400MHz PII), both with Solaris 7, trying to
>determine which is faster overall (we are putting in for one
>more machine).  I came across an interesting fact when I timed (with the
>time command) a script that decodes an MRF grib file that I downloaded
>from the NCEP FTP server.  The file is 23MB in size.  The Intel machine
>only took 26-28 seconds to perform the task (cat file | dcgrib PACK...)
>but the Ultrasparc took 66-69 seconds.  I know that the Intel
>machine is faster with integer/non-floating point tasks, but I would
>not have expected such a huge difference.  Is there any
>chance that the recent problems with the Sun SC5.0 compilers
>under McIDAS also show up somehow when compiling GEMPAK?
>The Intel machine has SC4.2 compilers and the Sparc has 5.0.
>The Intel machine's OS and apps are on an IDE disk, with /var/data
>residing on a 7200rpm UW-SCSI disk; the Sparc has two 7200rpm
>UW-SCSI disks.  I know this is sort of a trivial thing, but
>I have to make sure I get the most bang for our buck.  I bought
>the Ultrasparc because I was unsure about the future of x86.
>(I don't want to go to Linux)
>That seems to be a little more secure now, but to be
>on the safe side I would rather stick with the Sparc,
>though not with that kind of performance sacrifice. We have been fortunate
>to be able to obtain good hardware recently, but that could quickly
>change and we could easily be back to the way we were 3 years ago
>where no upgrades are possible unless the thing catches on fire.
>I know the McIDAS compile is about 40-50% faster on the Intel
>machine, but that is to be expected; a >100% difference in the
>decoding is way out of line.
>
>Thanks,
>Robert Mullenax
>
>NSBF Meteorology
>


Robert....

I'm glad that dcgrib is less than a complete pig on anything with
those huge 1x1 degree global grids. 

I may have left the -g flag in COPT and FOPT in the $NAWIPS/config/Makeinc.xxx
files that get distributed, since I usually compile everything with debugging
on for testing, so you might check that. Solaris is pretty safe to compile
with -O, although really aggressive optimization like -O4 will probably break,
since some of the Fortran routines use the same variable names for input and
output, and aggressive optimization will collapse those to a single variable,
which will stomp on itself. Also, the incremental loading feature of the
Sparcworks compilers is a real pain, so I generally leave the -xildoff flag
defined, so you can compare the Makeinc files on that point too.
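
If you want a quick way to check, grepping the Makeinc files for those
flags will show what each build is actually using; something like this
(the exact Makeinc suffixes depend on your platforms):

    cd $NAWIPS/config
    # Look for a leftover -g in the compile options, for -O, and for -xildoff:
    egrep -n 'COPT|FOPT|xildoff' Makeinc.*

If -g is still sitting in COPT or FOPT on the Sparc, rebuilding with plain
-O should make the comparison with the Intel box fairer.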

The biggest hit with dcgrib is generally for the thinned and MRF IDD grids,
which are transmitted in pieces (not the case with the one you refer to).
When grids are transmitted in quadrants or octants, the first piece gets
decoded and written to disk. When the next piece is decoded, the entire
grid has to be read back in so that the new tile can be added to the
grid and written back out. Thus, there is a lot of additional disk I/O in
those cases.
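
If you ever want to see that extra I/O directly, one rough way on Solaris
is to count the read/write system calls dcgrib makes for a product that
arrives whole versus one that arrives in pieces (the file names below are
just placeholders, and the remaining dcgrib arguments are elided as in
your command):

    # Full-grid product, e.g. the file you pulled from the NCEP FTP server:
    cat full_grid.grib | truss -c -t read,write dcgrib PACK ...

    # Product transmitted in octants over the IDD; the counts (and the
    # elapsed time) should be noticeably higher:
    cat octant_product.grib | truss -c -t read,write dcgrib PACK ...

The difference in the read and write counts is basically the re-reading
and re-writing of the partially assembled grids described above.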

Steve Chiswell