Re: [conduit] Large CONDUIT latencies to UW-Madison starting the last day or two.

  • To: "'Arthur A Person'" <aap1@xxxxxxx>, "'Pete Pokrandt'" <poker@xxxxxxxxxxxx>
  • Subject: Re: [conduit] Large CONDUIT latencies to UW-Madison starting the last day or two.
  • From: "Patrick L. Francis" <wxprofessor@xxxxxxxxx>
  • Date: Fri, 19 Feb 2016 14:28:46 -0500

Art / Pete et al. :)


There seems to be consistent potential packet loss no matter which 
route is taken into NCEP… so whoever you are communicating with, you might have 
them investigate… reference the previous graphic shown and this 
new one here:


if you are unfamiliar with Amazon EC2 routing, the first twenty-something 
hops are just internal to Amazon, and they don’t jump outside until you hit 
the Internet2 hops, which then jump to the gigapop, and from there to NOAA 
internal.. so since this Amazon box is in Ashburn, it’s physically close, and 
has limited interruptions until that point.. 


the same hop causes more severe problems from my colo boxes, which are 
Hurricane Electric direct; in those cases, jumping from 
Hurricane Electric onward has “severe” problems (including packet 
loss), while jumping from Amazon to I2 to the gigapop onward also 
encounters issues, but not as severe..


hopefully this may help :)  Happy Friday :)
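For anyone repeating this kind of comparison, here is a small sketch (a hypothetical helper, not part of LDM or any tool mentioned in this thread) that pulls the fully-silent hops out of saved traceroute output — the rows of `* * *` like the ones quoted further down this thread:

```python
def flag_silent_hops(traceroute_text):
    """Return hop numbers whose probes all timed out (lines like ' 8  * * *').

    Hops that answered at least one probe are not flagged, since partial
    responses usually mean ICMP rate-limiting rather than real loss.
    """
    silent = []
    for line in traceroute_text.splitlines():
        parts = line.split()
        # A fully silent hop is a hop number followed only by asterisks.
        if len(parts) >= 2 and parts[0].isdigit() and set(parts[1:]) == {"*"}:
            silent.append(int(parts[0]))
    return silent


# Sample mirroring the (redacted) traceroute output in this thread.
sample = """ 1  gw (x.x.x.x)  0.760 ms  0.954 ms  0.991 ms
 7  core (x.x.x.x)  41.764 ms  40.343 ms  40.500 ms
 8  * * *
 9  * * *
10  * * *"""

print(flag_silent_hops(sample))  # [8, 9, 10]
```

Hostnames and addresses in the sample are placeholders; the point is only that silent hops can be tallied mechanically across runs to see whether the same hop is failing on every path.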

Patrick L. Francis

Vice President of Research & Development


Aeris Weather

From: conduit-bounces@xxxxxxxxxxxxxxxx 
[mailto:conduit-bounces@xxxxxxxxxxxxxxxx] On Behalf Of Arthur A Person
Sent: Friday, February 19, 2016 1:57 PM
To: Pete Pokrandt <poker@xxxxxxxxxxxx>
Cc: Bentley, Alicia M <ambentley@xxxxxxxxxx>; Michael Schmidt 
<mschmidt@xxxxxxxx>; support-conduit@xxxxxxxxxxxxxxxx 
<conduit@xxxxxxxxxxxxxxxx>; _NCEP.List.pmb-dataflow 
<ncep.list.pmb-dataflow@xxxxxxxx>; Daes Support <daessupport@xxxxxxxxxx>
Subject: Re: [conduit] Large CONDUIT latencies to UW-Madison 
starting the last day or two.

We've been struggling with latencies for months, to the point where I've been 
feeding gfs 0p25 from NCEP and the rest from Unidata... that is, up until Feb 
10th.  The afternoon of the 10th, our latencies to NCEP dropped to what I 
consider "normal", an average maximum latency of about 30 seconds.  Our 
networking folks and NCEP have been trying to identify what this problem was, 
but as far as I know, no problem has been identified or action taken.  So it 
appears it's all buried in the mysteries of the internet.  I've switched data 
collection back to NCEP at this point, but I'm on the edge of my seat waiting 
to see if it reverts to the old behavior...
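For reference, the split-feed arrangement described above is typically expressed in LDM's ldmd.conf with separate REQUEST lines. The hostnames below are placeholders and the 0.25-degree GFS pattern is only an illustrative guess, not the exact pattern used here:

```
# Request only 0.25-degree GFS products from the NCEP upstream
# (pattern is illustrative -- match against actual CONDUIT product IDs)
REQUEST CONDUIT "gfs.*0p25"  conduit.upstream.example.edu

# Request everything else on the CONDUIT feed from a second upstream
REQUEST CONDUIT ".*"         idd.upstream.example.edu
```

Each REQUEST entry names a feedtype, an extended-regex pattern over product identifiers, and an upstream host, so the same feed can be sourced from different upstreams by pattern.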


From: "Pete Pokrandt" <poker@xxxxxxxxxxxx <mailto:poker@xxxxxxxxxxxx> >
To: "Carissa Klemmer - NOAA Federal" <carissa.l.klemmer@xxxxxxxx 
<mailto:carissa.l.klemmer@xxxxxxxx> >, "Arthur A Person" <aap1@xxxxxxx 
<mailto:aap1@xxxxxxx> >, "_NCEP.List.pmb-dataflow" 
<ncep.list.pmb-dataflow@xxxxxxxx <mailto:ncep.list.pmb-dataflow@xxxxxxxx> >
Cc: "support-conduit@xxxxxxxxxxxxxxxx <mailto:support-conduit@xxxxxxxxxxxxxxxx> 
" <conduit@xxxxxxxxxxxxxxxx <mailto:conduit@xxxxxxxxxxxxxxxx> >, "Michael 
Schmidt" <mschmidt@xxxxxxxx <mailto:mschmidt@xxxxxxxx> >, "Bentley, Alicia M" 
<ambentley@xxxxxxxxxx <mailto:ambentley@xxxxxxxxxx> >, "Daes Support" 
<daessupport@xxxxxxxxxx <mailto:daessupport@xxxxxxxxxx> >
Sent: Friday, February 19, 2016 12:20:20 PM
Subject: Large CONDUIT latencies to UW-Madison starting the 
last day or two.



Not sure if this is on my end or somewhere upstream, but the last several runs 
my CONDUIT latencies have been getting huge, to the point where we are losing 
data.


I did stop my ldm the other day to add in an alternate feed for Gilbert; not 
sure if that pushed me over a bandwidth limit, or if by 
reconnecting we got hooked up to a different remote ldm, or took a different 
path, and that shot the latencies up.
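For anyone new to the thread, "latency" here follows the LDM convention: the time between a product's creation at the origin and its receipt downstream. A minimal sketch (field names and timestamps are made up, not from any log in this thread):

```python
from datetime import datetime, timedelta


def feed_latency(origin_time, arrival_time):
    """Latency in seconds between product creation upstream and
    receipt at the downstream LDM."""
    return (arrival_time - origin_time).total_seconds()


# Hypothetical product: created upstream, received 30 s later --
# roughly the "normal" average maximum latency cited in this thread.
origin = datetime(2016, 2, 19, 17, 16, 8)
arrival = origin + timedelta(seconds=30)

print(feed_latency(origin, arrival))  # 30.0
```

When latencies "get huge," it is this creation-to-receipt gap that grows, which is why a rerouted or congested path shows up directly in the feed statistics.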


Seems to be really only CONDUIT; none of our other feeds show this kind of 
latency.

Still looking into things locally, but wanted to make people aware. I just 
rebooted; will see if that helps at all.


Here's an ldmping and traceroute from to


[ldm@idd ~]$ ldmping

Feb 19 17:16:08 INFO:      State    Elapsed Port   Remote_Host           

Feb 19 17:16:08 INFO: Resolving to took 
0.00486 seconds

Feb 19 17:16:08 INFO: RESPONDING   0.115499  388 



traceroute to (, 30 hops max, 60 byte packets

 1 (  0.760 ms  
0.954 ms  0.991 ms

 2 (  18.119 ms  18.123 ms 
 18.107 ms

 3 (  27.836 ms  27.852 
ms  27.838 ms

 4 (  37.363 ms  37.363 
ms  37.345 ms

 5 (  38.051 ms  38.254 ms  
38.401 ms

 6 (  118.042 ms  118.412 ms  118.529 ms

 7 (  41.764 ms  40.343 ms  40.500 ms

 8  * * *

 9  * * *

10  * * *


Similarly to ncepldm 


[ldm@idd ~]$ ldmping

Feb 19 17:18:40 INFO:      State    Elapsed Port   Remote_Host           

Feb 19 17:18:40 INFO: Resolving to took 
0.001599 seconds

Feb 19 17:18:40 INFO: RESPONDING   0.088901  388  



[ldm@idd ~]$ traceroute

traceroute to (, 30 hops max, 60 byte packets

 1 (  0.730 ms  
0.831 ms  0.876 ms

 2 (  18.092 ms  18.092 ms 
 18.080 ms

 3 (  40.196 ms  40.226 ms  40.256 ms

 4 (  40.970 ms  41.012 ms  40.996 ms

 5 (  42.780 ms  42.778 ms  
42.764 ms

 6 (  40.869 ms  40.922 ms  40.946 ms

 7  * * *

 8  * * *



Pete Pokrandt - Systems Programmer
UW-Madison Dept of Atmospheric and Oceanic Sciences
608-262-3086  - poker@xxxxxxxxxxxx <mailto:poker@xxxxxxxxxxxx> 




Arthur A. Person
Research Assistant, System Administrator
Penn State Department of Meteorology
email:  aap1@xxxxxxx <mailto:aap1@xxxxxxx> , phone:  814-863-1563
