[McIDAS #KDN-271049]: /home/data/mcidas/images isn't scouring, filling disk

Hi Gilbert,

> > Here is an update on weather:
> >
> > 1) turned on processing of the NEXRAD Level III images in
> >   ~ldm/etc/pqact.conf_mcidasB
> >
> > 2) changed the crontab entry for scouring the Level III products
> >   to match where they are being written by the entry in
> >   ~ldm/etc/pqact.conf_mcidasB

re: NEXRAD Level III filing on weather

> I had to shut this off, as the load average got too high on weather.

Hmm...  This is interesting given that the processing of NEXRAD Level III
products is to FILE them.  I could imagine that your I/O wait could go up,
but I wouldn't expect the CPU load to go up.
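For reference, a scouring crontab entry of the kind described in the quoted update might look like the following. This is a sketch only: the path and one-day retention are illustrative and must match whatever the FILE actions in ~ldm/etc/pqact.conf_mcidasB actually write.

```
# Hypothetical crontab sketch: hourly, delete Level III files more than
# one day old.  Adjust the path to match the pqact.conf FILE actions.
0 * * * * find /home/data/mcidas/images/nexrad -type f -mtime +1 -delete
```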

> So, I propose we change the official McIDAS servers here to weather2 and
> weather3.

That is what has been listed for quite some time in:

Publicly Accessible McIDAS ADDE servers

Turning off NNEXRAD processing on weather is simple, AND I see that you have
already done it!  To finish off the configuration, I just logged into weather
as 'mcidas' and:

- pointed at weather2 for RTNEXRAD data:

  <as 'mcidas'>
  cd $MCDATA
  dataloc.k ADD RTNEXRAD weather2.admin.niu.edu

- removed the ADDE definitions for the RTNEXRAD dataset from the server
  mapping table, $MCDATA/RESOLV.SRV
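As a sketch of that second step: RESOLV.SRV is a plain text file in which each line maps an ADDE group (N1=) and descriptor (N2=) to a data source, so removing a dataset just means deleting its lines. The entries below are illustrative, not the actual ones on weather.

```shell
# Sketch only: these RESOLV.SRV entries are hypothetical examples.
cat > RESOLV.SRV <<'EOF'
N1=RTNEXRAD,N2=N0R,TYPE=IMAGE,K=NEXR,C=Base reflectivity
N1=RTIMAGES,N2=GE-IR,TYPE=IMAGE,K=AREA,C=GOES-East IR
EOF

# Drop every RTNEXRAD definition; keep everything else
grep -v '^N1=RTNEXRAD' RESOLV.SRV > RESOLV.SRV.new && mv RESOLV.SRV.new RESOLV.SRV
```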

> You should also know that, pending state budget resolution,
> I'll be able to replace both machines in a few months with Pentium quad
> cores and a Q6700 motherboard with 4 GB of RAM and 750 GB SATA hard drive.
> That should make things slightly faster. ;-) Then we can turn on
> everything with no problem...I hope.

I, like you, am a hardware junkie :-)  At the same time, I have to say that
your machines are no slouches.  weather3, which you indicated does not
have the processing power needed, is a dual 3 GHz Xeon box (or, at least,
a single hyperthreaded 3 GHz box).  weather, on the other hand, has a single
3 GHz processor.  Given the hardware I see, I would think that weather would
struggle more than the other two machines, BUT I would not expect it to
struggle much.  I guess the question to be answered is whether you are using
all three machines as workstations (meaning you log in and run X applications).
One of the biggest loads on any machine is the X Window System -- it is a HUGE
memory user.  If you don't need to run display applications on weather, you
could save lots of memory AND CPU by not running X.
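On Red Hat-style systems of this era, keeping X from starting at boot is a one-line change (sketch below; on systems already booting to runlevel 5, this drops them to the text console, and 'startx' remains available whenever a display is actually needed):

```
# /etc/inittab sketch (pre-systemd Red Hat-style systems):
# boot to runlevel 3 so X never starts
id:3:initdefault:
```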

re: adjusted ADDE dataset definitions in $MCDATA/RESOLV.SRV

> Good. Thanks!

No worries.  As I said above, I just modified it again to remove the definitions
for RTNEXRAD since you will not be making that data available through ADDE.

re: removed scouring of the /home/ldm/logs directory
> OK.

This will actually lessen your load a tiny bit.

re: you are processing NEXRAD Level II data on weather; do you mean to
keep doing this?

> I only have ~7 sites currently being fed to it.

Yes, but unpacking the data from those sites still uses disk and CPU.

re: questions for weather3; do you have any special comments:

> No, I want to keep GEMPAK processing going.

OK.  I will adjust/remove McIDAS processing so that there is no duplication
of disk or CPU use.  I will also adjust the ADDE serving of the data to
use the GEMPAK directory structure.  McIDAS is very flexible in how it can
serve data.
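As an illustration of that flexibility, an ADDE dataset can be pointed directly at an existing GEMPAK directory tree, so no second copy of the data is needed. The RESOLV.SRV entry below is a sketch with hypothetical names and paths; the exact keywords should be checked against the McIDAS documentation.

```
# Sketch of a RESOLV.SRV entry (names and path hypothetical) serving
# GEMPAK-filed GINI satellite images through ADDE: K= identifies the file
# format, and MASK= points at the existing GEMPAK directory tree.
N1=GINIEAST,N2=GE1KVIS,TYPE=IMAGE,K=GINI,MASK=/data/gempak/images/sat/GOES-EAST/1km/VIS/VIS_*,C=GOES-East 1km VIS
```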

> But, I can't have NIMAGE go
> to it, as it also bogs down the server and I lose data as the load shoots
> to 10.

I must tell you that I am mystified by this.  I remember you commenting
on this situation back during one of the LDM upgrades, but I must say that
I have never been able to imagine a reason for the high loads.

I propose that we investigate this situation.  In order to do this, I need
you to install the RPM that includes the 'iostat' application.  I will then
install a Tcl-based script we run out of our crontabs on all machines running
the LDM.  This produces log entries that can be used to profile your system
use on a minute-by-minute basis.
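(On Red Hat-style systems, 'iostat' ships in the sysstat RPM.)  I can't reproduce our Tcl script here, but a minimal stand-in would be a crontab entry that logs extended iostat output once a minute, so load spikes can be correlated with disk activity after the fact -- the log path below is hypothetical:

```
# Hypothetical crontab sketch: log extended I/O statistics every minute
* * * * * /usr/bin/iostat -x 1 1 >> /home/ldm/logs/iostat.log 2>&1
```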

re: processing on weather3

> I think for now, I have to keep things as is.

OK.  Like I said above, I will adjust things on the McIDAS side to remove
duplicated decoding and redundant disk use.  This should cut down on CPU
and disk usage.

> I wanted new machines last
> year, but as it turns out, I wouldn't have been able to get the quad
> cores. All I can say for now is hang in there. I will let you know when I
> get the new machines, and I *think* they'll be able to handle it with 2.8
> GHz servers

I would say that weather3 should easily be able to handle the processing
load you have on it AND file the NIMAGE products.  The fact that it can't
leads me to suspect that something is wrong somewhere.  The thing to do is
find the problem(s) and fix them.

> (I plan to overclock to 3 GHz since Intel says they'll do that
> without overheating or causing problems). What do you think?

I can't see why what you have right now in weather3 is not able to keep up
with what you are trying to do.  We decode literally everything in all
IDD feeds on a dual Athlon 2400+ machine with 2 GB of RAM.  The bottlenecks
we see are mostly related to disk I/O, not CPU.  By the way, the biggest
hog of system resources is the processing of the NEXRAD Level II data:
uncompressing the files is a HUGE resource drain.  Since GEMPAK doesn't need
the files to be uncompressed for use, we stopped uncompressing them and
switched to simply filing them.
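For filing the compressed volumes, a pqact.conf entry along these lines is all that is needed. This is a sketch: the feedtype, pattern, and destination path are illustrative, not our actual entry, and the fields must be tab-separated.

```
# ~ldm/etc/pqact.conf sketch -- pattern and path hypothetical.  The action
# is FILE, not PIPE to a decoder: the BZIP2-compressed Level II volume is
# written as-is, since GEMPAK can read the compressed files directly.
NEXRAD2	^L2-BZIP2/(K[A-Z][A-Z][A-Z])/([0-9]{8})([0-9]{4})
	FILE	-close	data/craft/\1/\1_\2_\3
```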

As far as overclocking goes...  I would be very careful about doing that, as
it could lead to some hard-to-diagnose problems down the road AND lessen the
usable life of your machine.


Unidata User Support                                    UCAR Unidata Program
(303) 497-8642                                                 P.O. Box 3000
address@hidden                                   Boulder, CO 80307
Unidata HomePage                       http://www.unidata.ucar.edu

Ticket Details
Ticket ID: KDN-271049
Department: Support McIDAS
Priority: Normal
Status: Closed

NOTE: All email exchanges with Unidata User Support are recorded in the Unidata inquiry tracking system and then made publicly available through the web. If you do not want to have your interactions made available in this way, you must let us know in each email you send to us.