
[IDD #QRG-550150]: iperf tests again



Art,

> > Yes, I can finally get through to kirana.  It seems they've rotated off all
> > of the history we had from the last test, so I'll run a few more.  burn is
> > an untuned Windows server that I tried an IE client on, so disregard the
> > associated report.  Which idd downstream system and/or feedtype are we
> > talking about so I can look at the stats?
> 
> ldm.meteo.psu.edu CONDUIT... the tests I did to kirana and npad.ucar last
> week are still there as of this morning.

What I should have said was that all the testing we did against kirana in May 2006,
when we last discussed iperf/npad testing, has been rotated off.

> > Also, can you forward your
> > ldmd.conf request lines?  I'm looking at the stats for ldm.meteo and
> > wonder if you are splitting the CONDUIT feed into multiple TCP connections?
> 
> I don't believe so... here are the lines:
> 
> request PPS|DDS|IDS|FSL2|DIFAX ".*" thelma.ucar.edu
> request SPARE ".*" thelma.ucar.edu
> request HDS ".*" thelma.ucar.edu
> request NMC2 ".*"       idd.unidata.ucar.edu
> request NGRID ".*"      idd.unidata.ucar.edu
> request NOGAPS ".*"     usgodae3.fnmoc.navy.mil
> request NEXRAD|FNEXRAD ".*" unidata2.ssec.wisc.edu
> request MCIDAS ".*" unidata2.ssec.wisc.edu
> request NLDN ".*" striker2.atmos.albany.edu
> request NEXRD2 "L2-BZIP2/(KCCX|KPBZ|KDIX|KBGM|KLWX|KBUF|KCLE)" idd.cise-nsf.gov PRIMARY
> request GEM ".*" ldm.meteo.ec.gc.ca
> request NIMAGE ".*" idd.unidata.ucar.edu
>
> Plus, the feed works with near zero latencies when running from cise-nsf,
> as long as their microwave link doesn't go down or saturate, which seems
> to happen frequently.  BTW, when their link saturates (or whatever's
> happening), it seems things don't just slow down, but data is actually
> lost as we end up with a lot of missing data.

I'll have to look at the bandwidth-delay product calculations again to refresh
my memory of what the theoretical throughput should be.  In the meantime, the more
important objective is to get you timely data.  Please replace your CONDUIT
request line with the following and restart your LDM:

request NMC2 "ST.opnl.*[09]$"        idd.unidata.ucar.edu
request NMC2 "ST.opnl.*[18]$"        idd.unidata.ucar.edu
request NMC2 "ST.opnl.*[27]$"        idd.unidata.ucar.edu
request NMC2 "ST.opnl.*[36]$"        idd.unidata.ucar.edu
request NMC2 "ST.opnl.*[45]$"        idd.unidata.ucar.edu

We frequently use this five-way split to work around a number of problems,
including high-latency connections, packet shaping, and the like.
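
Roughly speaking, the patterns key off the trailing sequence digit of each
product ID, so every product matches exactly one request line and each request
line gets its own TCP connection carrying about a fifth of the feed.  A minimal
sketch of the partition, using made-up product IDs:

    import re

    # The five ERE patterns from the request lines above; each matches product
    # IDs whose trailing sequence digit falls in one of five disjoint pairs.
    patterns = [r"ST.opnl.*[09]$", r"ST.opnl.*[18]$", r"ST.opnl.*[27]$",
                r"ST.opnl.*[36]$", r"ST.opnl.*[45]$"]

    # Hypothetical product IDs (illustration only), one per trailing digit 0-9.
    sample_ids = ["ST.opnl/example/GFS_%d" % d for d in range(10)]

    for pid in sample_ids:
        hits = [i for i, p in enumerate(patterns) if re.search(p, pid)]
        assert len(hits) == 1      # each product matches exactly one request
        print("%s -> request line %d" % (pid, hits[0] + 1))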

With regard to the NSF wide-area connections, the laser link was due to be
swapped out for fiber over a month ago.  Qwest is dragging their feet
for reasons unknown (to me).  Of course, the upgraded laser they installed 
isn't really an upgrade and fails much more frequently than the old one,
causing the problems you've noted above.  We keep hanging in there because 
the end result will be very good, it's just taking longer to get there than 
we expected.

> I'm not convinced it's a tuning issue, although we shouldn't ignore that.
> It still strikes me as a network constriction somewhere for whatever
> reason.  As I mentioned above, our latencies to cise-nsf are very low when
> it's working well, but our latencies to idd.unidata are currently running
> +/- 1500 seconds peak for CONDUIT.  Given the bandwidths which should be
> available on NLR (or even I2), I think we should be seeing near-zero
> latencies similar to cise-nsf.

A ping from atm shows 15-20 ms latency to ldm.meteo, whereas a ping to idd shows a
much more variable latency of 50-130 ms that seems to come and go in spurts.
I'll look at the BDP and get back to you with more information.
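
For a back-of-the-envelope check, the bandwidth-delay product is just bandwidth
times round-trip time; a quick sketch, assuming a nominal 1 Gbps path:

    # Bandwidth-delay product: the data that must be "in flight" to keep a path
    # full, i.e. the minimum TCP window size for full throughput.
    def bdp_bytes(bandwidth_bps, rtt_seconds):
        return bandwidth_bps * rtt_seconds / 8.0

    # Illustrative numbers only: a nominal 1 Gbps path at the RTTs noted above.
    for rtt_ms in (20, 50, 130):
        print("RTT %3d ms -> BDP %.1f MB"
              % (rtt_ms, bdp_bytes(1e9, rtt_ms / 1000.0) / 1e6))

Since TCP throughput is capped at roughly window/RTT, a window much smaller
than those BDPs would hurt the 50-130 ms path far more than the 15-20 ms one.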

mike



Ticket Details
===================
Ticket ID: QRG-550150
Department: Support IDD
Priority: Normal
Status: Closed