
[CONDUIT #QWD-621691]: GFS upgrade



Hi David, Art, and Pete,

David wrote:
> Sorry, missed the original message as I've been out of the office this
> week as well. Checking...

Thanks!

Art wrote:
> Sorry I didn't get back to you sooner on this.  I think we were at the point
> of being ready to flip the switch on this, and as far as I can tell, things
> seem to be working okay.  Latencies look good, volume is up.  I have to take a
> look at our queue sizes as 32GB may still be too small.

We upped the LDM queue size on all of the real server backends of
idd.unidata.ucar.edu to 40 GB.  These machines have enough RAM to allow
us to further increase the queues to 50 GB if needed.
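
For what it's worth, the back-of-the-envelope sizing is just queue size
divided by peak ingest rate.  Here is a minimal Python sketch of that
arithmetic; the 40 GB figure is the queue size above, but the peak
ingest rate is a hypothetical placeholder you would replace with your
own site's measured hourly volume:

    # Rough LDM queue residence-time estimate (sizing sketch, not LDM code).
    # queue_size_gb: the 40 GB queue mentioned above.
    # peak_ingest_gb_per_hour: HYPOTHETICAL placeholder -- substitute the
    # peak hourly volume your own site actually ingests.
    queue_size_gb = 40.0
    peak_ingest_gb_per_hour = 35.0  # placeholder, not a measured figure

    residence_hours = queue_size_gb / peak_ingest_gb_per_hour
    print(f"Oldest product survives roughly {residence_hours:.2f} hours "
          f"at peak ingest")

    # To guarantee a 60-minute minimum product life, the queue must be
    # at least as large as the peak hourly volume:
    min_queue_gb = peak_ingest_gb_per_hour * 1.0
    print(f"Minimum queue for a 1-hour buffer: {min_queue_gb:.0f} GB")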

re:
> When the gfs 0.25 degree comes blasting in, it's going to chew up queue
> pretty fast for the hour-or-so that it's coming in.

Yes, it does so in a BIG way. :-)

re:
> Do you still recommend a minimum product life of 60 minutes in the
> queue, or can that shrink a bit in light of the short-term demand of
> data like the 0.25 degree gfs?

We like top-level relays to provide an hour's buffer for the feeds being
relayed.  This gives downstreams the flexibility to take machines offline
for maintenance/reboots/etc., as long as they are not offline for too
long.

David and Art:

FYI:

Each of our real server backends of idd.unidata.ucar.edu has the
potential for feeding more than 1 Gbps to the downstream connections
it services.  Because of this, we were forced to bond two 1 Gbps
connections on each machine so that more than 1 Gbps could be moved.
Depending on how many downstream connections your machine(s) are
servicing, you may be forced to do the same kind of Ethernet bonding.
Mike Schmidt, our system administrator, can advise you on what is
needed to bond two Gbps Ethernet ports and what is needed on the
switch side when the bonding approach is taken.  The need to do
channel bonding would disappear, of course, if our machines had
10 Gbps Ethernet interfaces.
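
As a rough illustration of why bonding (or a 10 Gbps interface) becomes
necessary, this Python sketch computes how many 1 Gbps ports a backend
would need for a given peak output; the peak output figure is a
hypothetical placeholder, not a measurement from our cluster:

    import math

    # Sketch: number of bonded 1 Gbps links needed for a given peak output.
    # peak_output_gbps is HYPOTHETICAL -- replace it with the peak feed
    # rate your own machine actually serves to its downstreams.
    peak_output_gbps = 1.6          # placeholder value
    link_capacity_gbps = 1.0        # one 1 Gbps Ethernet port

    links_needed = math.ceil(peak_output_gbps / link_capacity_gbps)
    print(f"{links_needed} x 1 Gbps ports needed (bonded)")

    # A single 10 Gbps interface removes the need for bonding:
    print("10 GbE sufficient:", peak_output_gbps <= 10.0)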

Of interest: the aggregate output of all of our cluster backends
exceeds 4 Gbps, and the number of downstream REQUESTs being serviced
is now over 900!
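
Purely for a sense of scale, dividing the aggregate output by the
number of REQUESTs gives the average per-connection rate (individual
feeds will, of course, burst much higher than the average):

    # Average per-downstream rate from the figures above (4 Gbps
    # aggregate, 900+ REQUESTs).  Individual connections burst far
    # higher than this average.
    aggregate_gbps = 4.0
    downstream_requests = 900
    avg_mbps = aggregate_gbps * 1000 / downstream_requests
    print(f"Average per-REQUEST rate: {avg_mbps:.1f} Mbps")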

Cheers,

Tom
--
****************************************************************************
Unidata User Support                                    UCAR Unidata Program
(303) 497-8642                                                 P.O. Box 3000
address@hidden                                   Boulder, CO 80307
----------------------------------------------------------------------------
Unidata HomePage                       http://www.unidata.ucar.edu
****************************************************************************


Ticket Details
===================
Ticket ID: QWD-621691
Department: Support CONDUIT
Priority: Normal
Status: Closed