
[IDD #QRG-550150]: iperf tests again



Art,

If there are incremental or transient network changes, I may not be aware
of them or even in the loop.  One obvious change is that uni5 is currently
serving approximately 200 downstream IDD connections, whereas it had no
IDD load during our last test.  Please try an iperf test against uni2.

I've been making gradual changes to our IDD cluster nodes to reflect
what we've learned together from our tests; however, we're in a bit of a
precarious position with regard to power in our computer room.  A fair
number of our electrical circuits are close to their load limits, and we
don't have the UPS capacity to run all of the systems we need to.  As I
switch test nodes in and out of production, I have to carefully make
changes to the electrical layout so we can continue to power the
production IDD nodes in the event of a power hit.

We've had a work request in to our facilities department for over a year,
but we don't seem to register on their list.  I'm assured the work will
be done soon, so I won't have to tiptoe around to avoid a problem.
Thanks for your patience.  I'll run a few NPAD tests on our end and
report back.

mike

> Mike,
> 
> Our throughput from idd.unidata.ucar.edu to idd-ingest.meteo.psu.edu has
> taken a turn for the worse.  I'm estimating latencies in excess of 3600
> seconds since 12Z 10/14/06 (Saturday morning).  Something happened in the
> network, it seems, between the 6Z and 12Z runs Saturday.  Here's what I'm
> getting for iperfs to yakov and uni5 from ldm.meteo.psu.edu for 3
> successive runs:
> 
> ------------------------------------------------------------
> Client connecting to yakov.unidata.ucar.edu, TCP port 5001
> TCP window size: 8.58 MByte (default)
> ------------------------------------------------------------
> [  3] local 128.118.28.12 port 55567 connected with 128.117.156.86 port 5001
> [  3]  0.0-10.3 sec  73.7 MBytes  60.2 Mbits/sec
> 
> ------------------------------------------------------------
> Client connecting to yakov.unidata.ucar.edu, TCP port 5001
> TCP window size: 8.58 MByte (default)
> ------------------------------------------------------------
> [  3] local 128.118.28.12 port 55569 connected with 128.117.156.86 port 5001
> [  3]  0.0-10.0 sec  35.0 MBytes  29.3 Mbits/sec
> 
> ------------------------------------------------------------
> Client connecting to yakov.unidata.ucar.edu, TCP port 5001
> TCP window size: 8.58 MByte (default)
> ------------------------------------------------------------
> [  3] local 128.118.28.12 port 55570 connected with 128.117.156.86 port 5001
> [  3]  0.0-10.0 sec  39.8 MBytes  33.4 Mbits/sec
> 
> ------------------------------------------------------------
> Client connecting to uni5.unidata.ucar.edu, TCP port 5001
> TCP window size: 8.58 MByte (default)
> ------------------------------------------------------------
> [  3] local 128.118.28.12 port 55574 connected with 128.117.140.115 port 5001
> [  3]  0.0-10.0 sec  59.0 MBytes  49.3 Mbits/sec
> 
> ------------------------------------------------------------
> Client connecting to uni5.unidata.ucar.edu, TCP port 5001
> TCP window size: 8.58 MByte (default)
> ------------------------------------------------------------
> [  3] local 128.118.28.12 port 55575 connected with 128.117.140.115 port 5001
> [  3]  0.0-10.3 sec  52.4 MBytes  42.6 Mbits/sec
> 
> ------------------------------------------------------------
> Client connecting to uni5.unidata.ucar.edu, TCP port 5001
> TCP window size: 8.58 MByte (default)
> ------------------------------------------------------------
> [  3] local 128.118.28.12 port 55576 connected with 128.117.140.115 port 5001
> [  3]  0.0-10.0 sec  72.3 MBytes  60.5 Mbits/sec
> 
> The yakov results seem similar to what they were before, but the uni5
> results seem much worse.  I did an iperf test to hoover.ems.psu.edu and
> got 855 Mbps.  I also did an NPAD test to kirana.psu.edu and got 540 Mbps,
> so it would seem our path, at least as far as PSC, is performing reasonably
> well.
> 
> Have any further tests been performed on your cable plant or other
> infrastructure?  Can you take a look at things from your end again to see
> if any new problems are showing up?
> 
> 
> Thanks.
> 
> Art



Ticket Details
===================
Ticket ID: QRG-550150
Department: Support IDD
Priority: Emergency
Status: Closed