RE: NNEXRAD Latencies?

Jeff Weber:
> Hello Ldm'ers,
>
> It does appear that both frost and ingestor at wunderground are on time.
>
> The delay seems to be occurring at flood.atmos.uiuc.edu
>
> Doing a notifyme to flood indicates that it is being overwhelmed by the
> NMC2 (CONDUIT) feed, causing delay for other products.
>
> Those of you feeding NNEXRAD  from flood, or feeding other products and
> experiencing latencies, please let me know and we will attempt other
> sites if you desire.
>
> We are working on this issue and will keep the list aware of any changes.
>
> Thank you,


I'd suggest it may be a network congestion problem between here and
wunderground: we have been feeding both F/NEXRAD and NMC2 through flood at
the same time for many months, and latency has not been a problem
previously.
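
For anyone who wants to repeat the notifyme check Jeff mentions from their
own site, something along these lines should work (option syntax as I
recall it from LDM 5.x; adjust the host, feedtype, and offset as needed):

  notifyme -v -l- -h flood.atmos.uiuc.edu -f NNEXRAD -o 3600

The -l- sends the logging to the terminal and -o 3600 starts with products
from the last hour; comparing the times notifyme logs against the product
times gives a rough per-product latency.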

Here's the traceroute from our end.

flood 517: /usr/sbin/traceroute frost.wunderground.com
traceroute to frost.wunderground.com (216.34.4.97), 30 hops max, 38 byte packets
 1  uiuc-atmsci-net.gw.uiuc.edu (128.174.80.8)  0.494 ms  0.452 ms  0.587 ms
 2  t-core1-1.gw.uiuc.edu (128.174.1.170)  1.001 ms  0.918 ms  1.936 ms
 3  t-node1-1.gw.uiuc.edu (128.174.1.130)  0.638 ms  0.397 ms  0.369 ms
 4  dmz.gw.uiuc.edu (128.174.0.193)  1.769 ms  1.049 ms  1.176 ms
 5  aads.exodus.net (206.220.243.63)  4.958 ms  4.871 ms  5.394 ms
 6  bbr02-g1-0.okbr01.exodus.net (216.34.183.66)  5.472 ms  5.252 ms  5.529 ms
 7  bbr01-p0-0.snva03.exodus.net (206.79.9.85)  55.491 ms  55.338 ms  54.911 ms
 8  bbr02-p5-0.sntc08.exodus.net (216.32.173.6)  54.975 ms  60.074 ms  55.278 ms
 9  bbr01-p8-0.sntc04.exodus.net (206.79.9.186)  55.012 ms  55.239 ms  55.646 ms
10  dcr01-g2-1.sntc04.exodus.net (216.34.2.17)  55.291 ms  55.125 ms  55.256 ms
11  csr01-ve240.sntc04.exodus.net (216.34.2.218)  56.104 ms  55.685 ms  55.380 ms
12  frost.wunderground.com (216.34.4.97)  55.464 ms  55.979 ms  55.504 ms

Not horrible, but ping/traceroute is not really an accurate measure of the
bandwidth actually available, just the round-trip time for an individual
packet.  There is a big jump in delay between hops 6 and 7, which
suggests heavy congestion at that point.  This would seem to confirm what
Tom McDermott suggests...

Tom McDermott:
> So it looks like
> the problem may lie with the wunderground.com host.  Based on past
> experience, I don't think their network bandwidth is all that it could be.
> Either that, or the path to wunderground is really congested.

Isn't Exodus up for Chapter 11?
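
As for telling congestion apart from plain distance-related delay: packet
loss and round-trip variance are more telling than a single probe. With a
Linux/BSD-style ping, for example (flag syntax differs on other systems):

  ping -c 50 frost.wunderground.com

Noticeable loss or a wide min/max spread in the summary would point to
congestion along the path; a clean run with a steady ~55 ms would point
more toward the wunderground end itself.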

Steve's message concerning this is below.  Apparently others are
experiencing delays from wunderground as well.  There doesn't seem to be a
chart for F/NNEXRAD like the ones for the other services, e.g.
http://www.unidata.ucar.edu/projects/idd/status/idd/fosTopo.html



-----------------------

Steve Chiswell:

David,

Presumably you are seeing the radar mosaic at this time,
although it looks like you are running 1 hour behind on the NEXRAD and
FNEXRAD feeds you are getting from frost.wunderground.com.
The CONDUIT feed looks good.

LDMBINSTATS
ldm-5.1.3 flood.atmos.uiuc.edu 2001112921 NMC2 2961 244680342 + 20011129213259 tgsv32 123.97 262@3018
ldm-5.1.3 flood.atmos.uiuc.edu 2001112920 NMC2 15446 354611525 + 20011129204057 tgsv32 329.49 531@2120
ldm-5.1.3 flood.atmos.uiuc.edu 2001112920 FNEXRAD 7 678320 + 20011129201612 motherlode.ucar.edu 3526.60 3643@1211
ldm-5.1.3 flood.atmos.uiuc.edu 2001112920 FNEXRAD 182 1242446 + 20011129203151 noaaport.unidata.ucar.edu 3570.05 3659@1929
ldm-5.1.3 flood.atmos.uiuc.edu 2001112920 NNEXRAD 8481 50246353 + 20011129203155 ingestor.wunderground.com 3562.84 3660@3044
LDMEND

I checked the other sites getting FNEXRAD via frost.wunderground.com
(e.g. U. Oklahoma), and the same problem exists... and they aren't getting
CONDUIT. So it may be that the NEXRAD|FNEXRAD feed out of wunderground.com
is slowing things down.

I'll pass this on to Jeff Weber and see what he thinks about the
NEXRAD topology.

Steve Chiswell
Unidata User Support



