Arthur,
> Daryl et al. --
>
> I've had similar thoughts. When I brought up our new LDM server (P3 866,
> 320 MB, SCSI, RH 7.1, LDM 5.1.3), I tried to make a large queue (I don't
> remember how big) but found that it wouldn't allow queues larger than
> 2 GB. I don't know what I was doing wrong, since you've been able to get
> one up to 4 GB. The other problem I had was that when I made a queue
> close to 2 GB, the system ran for a day or so and then went into an I/O
> thrashing condition which essentially jammed the system. I haven't
> figured that one out yet; perhaps I will try installing the latest kernel
> patch sometime and see if that makes a difference. I currently have a
> 1 GB queue (although 1.5 GB or so works okay, I think), which holds about
> 3 hours of data.
The Red Hat errata page has lots of bug fixes, which have remarkably
improved my system performance...especially the new kernel.
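
(An aside for anyone hitting the same 2 GB wall: that's the classic 32-bit
file-offset limit, so the queue file can't grow past 2^31 bytes unless the
kernel, filesystem, and LDM build all have large-file support. A rough
sketch of how one might check and rebuild the queue follows; the path and
queue size are placeholders, and whether pqcreate takes size suffixes may
depend on the LDM version:

    # 64 means the filesystem holding the queue can hold >2 GB files;
    # 32 means a 2 GB ceiling no matter what LDM does.
    getconf FILESIZEBITS /usr/local/ldm/data

    # Stop the LDM, remove the old queue, and recreate it at the new size
    # (ldmadmin's mkqueue/delqueue would also work if your version has them).
    ldmadmin stop
    rm /usr/local/ldm/data/ldm.pq
    pqcreate -q /usr/local/ldm/data/ldm.pq -s 1800M
    ldmadmin start

)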
> Since disk/storage is so cheap, having a large queue seems to make sense
> these days. However, I would agree with Unidata's concerns that if there
> were a larger-scale outage such that many sites started trying to catch up
> on many hours' worth of data, this could present a network load issue.
> Hopefully this kind of situation wouldn't occur very often, though.
>
> Art.
Maybe we can just restrict that to sites feeding others? Or, if you're in
paranoid mode, just to top nodes (like UIUC, SSEC, etc.)? In either case,
a note to Daryl...thanks for the great public service. If I need it, I'll
use it, but only in an emergency.
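
(If we did go the restriction route, my understanding is it would just be
the allow lines in ldmd.conf on the serving host -- a rough sketch, with
made-up hostname patterns standing in for the real relay sites:

    # Only let the named relays request data; everyone else is refused.
    # The feed type and patterns below are illustrative, not real entries.
    allow   ANY     ^ldm\.relay-site1\.edu$
    allow   ANY     ^ldm\.relay-site2\.edu$

then "ldmadmin restart" so the change takes effect.)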
*******************************************************************************
Gilbert Sebenste ********
Internet: gilbert@xxxxxxx (My opinions only!) ******
Staff Meteorologist, Northern Illinois University ****
E-mail: sebenste@xxxxxxxxxxxxxxxxxxxxx ***
web: http://weather.admin.niu.edu **
Work phone: 815-753-5492 *
*******************************************************************************