
[IDD #BKI-822215]: LDM feed sites



Hi Adam,

re:
> OK...
> 
> I have made my ldmd.conf like below.....

Very good.

> CONDUIT still seems to be having a
> problem.... However the latencies on all products seem to be spiking at the
> 0z,6z,12z,18z times (model releases)...

The rtstats plots show that CONDUIT is not the only datastream having
latency issues.  Even the IDS|DDPLUS feed is showing unexpected
latency spikes:

http://www.unidata.ucar.edu/cgi-bin/rtstats/iddstats_nc?IDS|DDPLUS+tornado.geos.ulm.edu

Question:

- did you try the 5-way split for the CONDUIT feed?

> There still seems to be a bandwidth bottleneck somewhere but I'm not sure
> where.

I agree that there is a bottleneck somewhere.

> Can we test the throughput between my ldm and a machine over there?

Yes.  I am not sure how informative the results would be, however,
since the route to our other machines is virtually identical to the
route to idd.unidata.ucar.edu.

Perhaps it would be better to try adding redundant ingest of
CONDUIT from an alternate upstream site.  Try replicating the
5-way split to idd.cise-nsf.gov.

NB: This _assumes_ that you are already doing a 5-way split of
your CONDUIT requests to idd.unidata.ucar.edu.  If you are not,
I would start there first.
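For reference, the 5-way split is done with five REQUEST lines in
ldmd.conf, each matching on the last digit of the CONDUIT product
sequence number.  The following is a sketch only -- adjust the
host names to your actual upstream sites:

```
# 5-way CONDUIT split: each REQUEST matches products whose sequence
# number ends in one of two digits, so each of the five connections
# carries roughly 20% of the feed volume.
REQUEST CONDUIT "[09]$" idd.unidata.ucar.edu
REQUEST CONDUIT "[18]$" idd.unidata.ucar.edu
REQUEST CONDUIT "[27]$" idd.unidata.ucar.edu
REQUEST CONDUIT "[36]$" idd.unidata.ucar.edu
REQUEST CONDUIT "[45]$" idd.unidata.ucar.edu

# Redundant ingest: replicate the same split to the alternate
# upstream.  The LDM rejects duplicate products, so whichever
# site delivers a given product first "wins".
REQUEST CONDUIT "[09]$" idd.cise-nsf.gov
REQUEST CONDUIT "[18]$" idd.cise-nsf.gov
REQUEST CONDUIT "[27]$" idd.cise-nsf.gov
REQUEST CONDUIT "[36]$" idd.cise-nsf.gov
REQUEST CONDUIT "[45]$" idd.cise-nsf.gov
```

Remember that the LDM has to be restarted (e.g., 'ldmadmin restart')
for ldmd.conf changes to take effect.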
  
> I just did a speed test of my own on the machine.  I issued the command
> 'nc -l 5000 > /dev/null' on the ldm server and 'pv /dev/zero |nc
> tornado.geos.ulm.edu 5000' on my machine in my office.  This gave me a
> maximum throughput of 95MB per second on the NIC doing TCP.  UDP was up
> around 110MB per second.
> 
> I then changed the command on the ldm server to 'nc -l 5000 >
> /data/tempdump' to test max file throughput.  It came out to ~45MB per
> second doing TCP.
> 
> Now...looking at my network usage while the LDM is running I never seem to
> go over 5MB per second.
> 
> We also don't do ANY packet shaping here on our campus...just a
> firewall...and it's rated for 10Gig throughput.
> 
> Any thoughts?

The bottleneck could be outside of your campus.  Adding the redundant
feed to idd.cise-nsf.gov might help to pinpoint where the bottleneck
is occurring.  Splitting the CONDUIT feed across multiple requests
will also help to determine whether the bottleneck is related to
TCP's backoff strategy on a single connection.
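If you want to watch what each upstream is offering in real time
without changing your configuration, you can point 'notifyme' at
each host in turn.  A sketch (run from your LDM account; the hosts
shown are the two upstreams discussed above):

```
# Watch CONDUIT product notifications from the current upstream;
# comparing the logged receipt times against the product creation
# times gives a rough per-host latency picture.
notifyme -v -l - -h idd.unidata.ucar.edu -f CONDUIT

# Same thing against the alternate upstream:
notifyme -v -l - -h idd.cise-nsf.gov -f CONDUIT
```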

Cheers,

Tom
--
****************************************************************************
Unidata User Support                                    UCAR Unidata Program
(303) 497-8642                                                 P.O. Box 3000
address@hidden                                   Boulder, CO 80307
----------------------------------------------------------------------------
Unidata HomePage                       http://www.unidata.ucar.edu
****************************************************************************


Ticket Details
===================
Ticket ID: BKI-822215
Department: Support IDD
Priority: Normal
Status: Closed