Re: [conduit] [Ncep.list.pmb-dataflow] Large CONDUIT lags starting with 18 UTC July 1 2019 cycle

  • To: "Tyle, Kevin R" <ktyle@xxxxxxxxxx>, Anne Myckow - NOAA Affiliate <anne.myckow@xxxxxxxx>
  • Subject: Re: [conduit] [Ncep.list.pmb-dataflow] Large CONDUIT lags starting with 18 UTC July 1 2019 cycle
  • From: Pete Pokrandt <poker@xxxxxxxxxxxx>
  • Date: Thu, 18 Jul 2019 18:50:19 +0000
One more update. Shortly after I sent this email, CONDUIT data started flowing 
again. The 15 UTC SREF and some NDFD products are coming in now, and lags seem 
to be dropping quickly to acceptable values.

Pete



--
Pete Pokrandt - Systems Programmer
UW-Madison Dept of Atmospheric and Oceanic Sciences
608-262-3086  - poker@xxxxxxxxxxxx

________________________________
From: conduit <conduit-bounces@xxxxxxxxxxxxxxxx> on behalf of Pete Pokrandt 
<poker@xxxxxxxxxxxx>
Sent: Thursday, July 18, 2019 1:28 PM
To: Tyle, Kevin R; Anne Myckow - NOAA Affiliate
Cc: Kevin Goebbert; conduit@xxxxxxxxxxxxxxxx; _NCEP.List.pmb-dataflow; Mike 
Zuranski; Dustin Sheffler - NOAA Federal; support-conduit@xxxxxxxxxxxxxxxx
Subject: Re: [conduit] [Ncep.list.pmb-dataflow] Large CONDUIT lags starting 
with 18 UTC July 1 2019 cycle

Our CONDUIT feed hasn't seen any data at all for the last hour or so. The last 
product that came in was from the 252h forecast of the GFS. At that time the 
lag was up around 3600s.

20190718T173422.935745Z pqutil[18536]               pqutil.c:display_watch:1189 
        INFO       24479 20190718163423.193677 CONDUIT 531  
data/nccf/com/gfs/prod/gfs.20190718/12/gfs.t12z.pgrb2.1p00.f252 
!grib2/ncep/GFS/#000/201907181200F252/TMPK/100 m HGHT! 000531
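
For reference, the lag in that WATCH line is just the local log timestamp minus 
the product's upstream timestamp. A minimal sketch of the arithmetic, using the 
two timestamps copied from the line above (the field meanings are my reading of 
the pqutil output, not authoritative; both times are UTC):

```python
from datetime import datetime

# Timestamps copied from the pqutil WATCH line above:
#   20190718T173422.935745Z  -- local log time (when the product was seen here)
#   20190718163423.193677    -- the product's timestamp set upstream
logged  = datetime.strptime("20190718T173422.935745", "%Y%m%dT%H%M%S.%f")
created = datetime.strptime("20190718163423.193677", "%Y%m%d%H%M%S.%f")

lag_seconds = (logged - created).total_seconds()
print(round(lag_seconds))  # -> 3600, matching the "around 3600s" lag reported
```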

FYI.
Pete


--
Pete Pokrandt - Systems Programmer
UW-Madison Dept of Atmospheric and Oceanic Sciences
608-262-3086  - poker@xxxxxxxxxxxx

________________________________
From: Tyle, Kevin R <ktyle@xxxxxxxxxx>
Sent: Thursday, July 18, 2019 1:24 PM
To: Pete Pokrandt; Anne Myckow - NOAA Affiliate
Cc: Kevin Goebbert; conduit@xxxxxxxxxxxxxxxx; _NCEP.List.pmb-dataflow; Mike 
Zuranski; Dustin Sheffler - NOAA Federal; support-conduit@xxxxxxxxxxxxxxxx
Subject: RE: [conduit] [Ncep.list.pmb-dataflow] Large CONDUIT lags starting 
with 18 UTC July 1 2019 cycle


I can confirm that our 12Z GFS receipt via CONDUIT was affected by the lags; we 
ended up with a lot of missing grids beginning with forecast hour 120. Since we 
feed from Pete at UWisc-MSN, that's not too surprising.



First time we’ve missed grids in a few weeks.



--Kevin



_____________________________________________
Kevin Tyle, M.S., Manager of Departmental Computing
Dept. of Atmospheric & Environmental Sciences
University at Albany
Earth Science 228, 1400 Washington Avenue
Albany, NY 12222
Email: ktyle@xxxxxxxxxx
Phone: 518-442-4578
_____________________________________________



From: conduit [mailto:conduit-bounces@xxxxxxxxxxxxxxxx] On Behalf Of Pete 
Pokrandt
Sent: Thursday, July 18, 2019 12:29 PM
To: Anne Myckow - NOAA Affiliate <anne.myckow@xxxxxxxx>
Cc: Kevin Goebbert <Kevin.Goebbert@xxxxxxxxx>; conduit@xxxxxxxxxxxxxxxx; 
_NCEP.List.pmb-dataflow <ncep.list.pmb-dataflow@xxxxxxxx>; Mike Zuranski 
<zuranski@xxxxxxxxxxxxxxx>; Dustin Sheffler - NOAA Federal 
<dustin.sheffler@xxxxxxxx>; support-conduit@xxxxxxxxxxxxxxxx
Subject: Re: [conduit] [Ncep.list.pmb-dataflow] Large CONDUIT lags starting 
with 18 UTC July 1 2019 cycle



Just an update -



CONDUIT latencies decreased starting with the 12 UTC run yesterday, 
corresponding to the move of many other services to Boulder.



However, the 12 UTC run today (7/18) is showing much larger CONDUIT latencies 
(1800 s at present).



[inline image: CONDUIT latency graph for idd.aos.wisc.edu]





http://rtstats.unidata.ucar.edu/cgi-bin/rtstats/iddstats_nc?CONDUIT+idd.aos.wisc.edu





Pete





--
Pete Pokrandt - Systems Programmer
UW-Madison Dept of Atmospheric and Oceanic Sciences
608-262-3086  - poker@xxxxxxxxxxxx



________________________________

From: Anne Myckow - NOAA Affiliate <anne.myckow@xxxxxxxx>
Sent: Tuesday, July 9, 2019 8:00 AM
To: Pete Pokrandt
Cc: Dustin Sheffler - NOAA Federal; Kevin Goebbert; conduit@xxxxxxxxxxxxxxxx; 
_NCEP.List.pmb-dataflow; Mike Zuranski; Gilbert Sebenste; Person, Arthur A.; 
support-conduit@xxxxxxxxxxxxxxxx
Subject: Re: [Ncep.list.pmb-dataflow] Large CONDUIT lags starting with 18 UTC 
July 1 2019 cycle



Hi everyone,



During our troubleshooting yesterday we found another network issue that I 
believe is causing the CONDUIT problems when most things are hosted out of 
College Park. Our networking team is pushing to have it remedied this week, and 
I'm hopeful it will fix the CONDUIT latency permanently. If it does not, we 
will re-engage our networking group to look into it actively again.



Thanks for your patience with this, more to come.



Anne



On Mon, Jul 8, 2019 at 11:02 PM Pete Pokrandt <poker@xxxxxxxxxxxx> wrote:

Thanks, Dustin. It was definitely better for the 18 UTC suite of runs.



Pete





--
Pete Pokrandt - Systems Programmer
UW-Madison Dept of Atmospheric and Oceanic Sciences
608-262-3086  - poker@xxxxxxxxxxxx



________________________________

From: Dustin Sheffler - NOAA Federal <dustin.sheffler@xxxxxxxx>
Sent: Monday, July 8, 2019 12:16 PM
To: Pete Pokrandt
Cc: Anne Myckow - NOAA Affiliate; Kevin Goebbert; conduit@xxxxxxxxxxxxxxxx; 
_NCEP.List.pmb-dataflow; Mike Zuranski; Gilbert Sebenste; Person, Arthur A.; 
support-conduit@xxxxxxxxxxxxxxxx
Subject: Re: [Ncep.list.pmb-dataflow] Large CONDUIT lags starting with 18 UTC 
July 1 2019 cycle



Hi Pete,



We had to shift NOMADS temporarily to our College Park data center for work in 
our Boulder data center, and while doing so our network team was collecting 
data to remedy the slowness issue we've seen with HTTPS, FTP, and LDM when all 
our applications are in the College Park data center. You should start seeing 
relief, as we've switched NOMADS back to the other data center now.



-Dustin



On Mon, Jul 8, 2019 at 4:55 PM 'Pete Pokrandt' via _NCEP list.pmb-dataflow 
<ncep.list.pmb-dataflow@xxxxxxxx> wrote:

FYI, latencies are much larger with today's 12 UTC model suite. They had been 
peaking around 500-800 s for the past week; today they are up over 2000 s with 
the 12 UTC suite.



http://rtstats.unidata.ucar.edu/cgi-bin/rtstats/iddstats_nc?CONDUIT+idd.aos.wisc.edu



Pete



--
Pete Pokrandt - Systems Programmer
UW-Madison Dept of Atmospheric and Oceanic Sciences
608-262-3086  - poker@xxxxxxxxxxxx



________________________________

From: Anne Myckow - NOAA Affiliate <anne.myckow@xxxxxxxx>
Sent: Wednesday, July 3, 2019 8:17 AM
To: Pete Pokrandt
Cc: Derek VanPelt - NOAA Affiliate; Kevin Goebbert; conduit@xxxxxxxxxxxxxxxx; 
_NCEP.List.pmb-dataflow; Mike Zuranski; Gilbert Sebenste; Person, Arthur A.; 
support-conduit@xxxxxxxxxxxxxxxx
Subject: Re: [Ncep.list.pmb-dataflow] Large CONDUIT lags starting with 18 UTC 
July 1 2019 cycle



Pete et al,



Can you tell us how the latency looks this morning and overnight?



Thanks,

Anne



On Tue, Jul 2, 2019 at 9:23 PM Anne Myckow - NOAA Affiliate 
<anne.myckow@xxxxxxxx> wrote:

Hi Pete,



We've been able to re-create the CONDUIT LDM issues with other LDMs now in NCO. 
We do not know root cause but we are failing some services out of College Park 
now to alleviate the traffic. You may experience slowness again tomorrow while 
we troubleshoot with the whole team in office but overnight (Eastern Time 
anyway) should be better.



I'm adding you and the other people with actual email addresses (rather than 
the lists) to the email chain where we are keeping everyone apprised, so don't 
be surprised to get another email that says OPEN: TID <lots of other text> in 
the subject line - that's about this slowness.



Thanks,

Anne



On Tue, Jul 2, 2019 at 11:49 AM Pete Pokrandt <poker@xxxxxxxxxxxx> wrote:

Thanks, Anne.



Lag is still there on the current 12 UTC cycle, FYI.



http://rtstats.unidata.ucar.edu/cgi-bin/rtstats/iddstats_nc?CONDUIT+idd.aos.wisc.edu



Pete

Sent from my iPhone

On Jul 2, 2019, at 10:18 AM, Anne Myckow - NOAA Affiliate 
<anne.myckow@xxxxxxxx> wrote:

Hi Pete,



We (NCO) have fully loaded our College Park site again, where conduit lives. 
I'll see if I can get the attention of our networking folks today about this 
since they just installed new hardware that we believe should have increased 
our network capacity.



Thanks,

Anne



On Tue, Jul 2, 2019 at 1:25 AM 'Pete Pokrandt' via _NCEP list.pmb-dataflow 
<ncep.list.pmb-dataflow@xxxxxxxx> wrote:

All,

Something happened in the past day or two that has resulted in large lags (and 
data loss) between conduit.ncep.noaa.gov and idd.aos.wisc.edu (and Unidata too).



Based on these IDD stats, there was a bit of a lag increase with the 06 UTC 
July 1 runs, a little larger with the 12 UTC runs, and then much bigger for the 
18 UTC July 1 and 00 UTC July 2 runs. Any idea what might have happened or 
changed? The fact that Unidata's and UW-AOS's graphs look so similar suggests 
that it's something upstream of us.



http://rtstats.unidata.ucar.edu/cgi-bin/rtstats/iddstats_nc?CONDUIT+idd.aos.wisc.edu



<iddstats_conduit_idd_aos_wisc_edu_20190702.gif>



Here's Unidata's graph:



http://rtstats.unidata.ucar.edu/cgi-bin/rtstats/iddstats_nc?CONDUIT+lead.unidata.ucar.edu



<iddstats_conduit_lead_unidata_ucar_edu_20190702.gif>



Thanks,

Pete





--
Pete Pokrandt - Systems Programmer
UW-Madison Dept of Atmospheric and Oceanic Sciences
608-262-3086  - poker@xxxxxxxxxxxx



________________________________

From: Derek VanPelt - NOAA Affiliate <derek.vanpelt@xxxxxxxx>
Sent: Tuesday, April 23, 2019 3:40 PM
To: Pete Pokrandt
Cc: Person, Arthur A.; Gilbert Sebenste; Kevin Goebbert; 
conduit@xxxxxxxxxxxxxxxx; _NCEP.List.pmb-dataflow; Mike Zuranski; Dustin 
Sheffler - NOAA Federal; support-conduit@xxxxxxxxxxxxxxxx
Subject: Re: [conduit] [Ncep.list.pmb-dataflow] Large lags on CONDUIT feed - 
started a week or so ago



Hi All,



There are few things going on here.



The strongest driver of your download speeds is the presence or absence of 
NOMADS in College Park.   When NOMADS is in CPRK, dissemination from the entire 
datacenter (including our Conduit servers, which only exist in College Park) 
can be affected at peak model download times.  Adding to this are new rules put 
in place that require all NOMADS users to follow the top-level VIP.  
Previously some of our users would pull from Boulder even when the VIP pointed 
to College Park.  That is no longer regularly possible, as the backup server is 
intentionally being blocked to traffic.



I have been asked to go back and, using internal metrics and the download 
speeds provided in this thread (thanks!), firmly establish the timeline. I hope 
to do so in the next few days, but I believe the answer will be as stated above.



As for splitting the request into many smaller requests: it is clearly having a 
positive effect.   As long as you don't (and we don't) hit an upper 
connection-count limit, this appears to be the best way to minimize the latency 
during peak download times.
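
For readers unfamiliar with the mechanics: the N-way splits discussed in this 
thread are made in the downstream LDM's ldmd.conf by issuing several REQUEST 
lines whose patterns partition CONDUIT products on the sequence number at the 
end of each product ID. A sketch of a five-way split against the host from this 
thread (patterns follow Unidata's published CONDUIT splitting examples; verify 
against your LDM version's documentation):

```
# Each REQUEST matches products whose trailing sequence number ends in the
# listed digits; the five lines together cover 0-9, i.e. a five-way split.
# A ten-way split would use one REQUEST per digit, "[0]$" through "[9]$".
REQUEST CONDUIT "[09]$" conduit.ncep.noaa.gov
REQUEST CONDUIT "[18]$" conduit.ncep.noaa.gov
REQUEST CONDUIT "[27]$" conduit.ncep.noaa.gov
REQUEST CONDUIT "[36]$" conduit.ncep.noaa.gov
REQUEST CONDUIT "[45]$" conduit.ncep.noaa.gov
```

Each REQUEST line becomes its own TCP connection to the upstream, which is why 
the connection-count limit mentioned above matters.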



More to come.  Thanks for keeping this discussion alive; it has shed light on 
the Conduit download speeds and also provides context for some of our 
wide-ranging issues.



Thank you,



Derek



On Tue, Apr 23, 2019 at 3:07 PM Pete Pokrandt <poker@xxxxxxxxxxxx> wrote:

I'm still on the 10 way split that I've been on for quite some time, and 
without my changing anything, our lags got much much better starting on Friday, 
4/19 starting with the 12 UTC model sequence. I don't know if this correlated 
to Unidata switching to a 20 way split or not, but that happened around the 
same time.



Here are my lag plots, the first ends 04 UTC 4/20, and the second just now at 
19 UTC 4/23. Note the Y axis on the first plot goes to ~3600 seconds, but on 
the second plot, only to ~100 seconds.





<iddstats_CONDUIT_idd_aos_wisc_edu_ending_20190423_1900UTC.gif>



Pete







--
Pete Pokrandt - Systems Programmer
UW-Madison Dept of Atmospheric and Oceanic Sciences
608-262-3086  - poker@xxxxxxxxxxxx



________________________________

From: Person, Arthur A. <aap1@xxxxxxx>
Sent: Tuesday, April 23, 2019 1:49 PM
To: Pete Pokrandt; Gilbert Sebenste
Cc: Kevin Goebbert; conduit@xxxxxxxxxxxxxxxx; _NCEP.List.pmb-dataflow; Mike 
Zuranski; Derek VanPelt - NOAA Affiliate; Dustin Sheffler - NOAA Federal; 
support-conduit@xxxxxxxxxxxxxxxx
Subject: Re: [conduit] [Ncep.list.pmb-dataflow] Large lags on CONDUIT feed - 
started a week or so ago



I switched our test system iddrs2a feeding from conduit.ncep.noaa.gov back to 
a 2-way split (from a 20-way split) yesterday to see how it would hold up:



<pastedImage.png>



While not as good as prior to February, it wasn't terrible, at least until this 
morning.  Looks like the 20-way split may be the solution going forward if this 
is the "new normal" for network performance.



                        Art





Arthur A. Person
Assistant Research Professor, System Administrator
Penn State Department of Meteorology and Atmospheric Science
email:  aap1@xxxxxxx, phone:  814-863-1563



________________________________

From: Pete Pokrandt <poker@xxxxxxxxxxxx>
Sent: Saturday, April 20, 2019 12:29 AM
To: Person, Arthur A.; Gilbert Sebenste
Cc: Kevin Goebbert; conduit@xxxxxxxxxxxxxxxx; _NCEP.List.pmb-dataflow; Mike 
Zuranski; Derek VanPelt - NOAA Affiliate; Dustin Sheffler - NOAA Federal; 
support-conduit@xxxxxxxxxxxxxxxx
Subject: Re: [conduit] [Ncep.list.pmb-dataflow] Large lags on CONDUIT feed - 
started a week or so ago



Well, I haven't changed anything in the past few days, but my lags dropped back 
to pretty much pre-February 10 levels starting with today's (20190419) 12 UTC 
run. I know Unidata switched to a 20 way split feed around that same time... I 
am still running a 10-way split. I didn't change anything between today's 06 
UTC run and the 12 UTC run, but the lags dropped considerably, and look like 
they used to.

I wonder if some bad piece of hardware got swapped out somewhere, or if some 
change was made internally at NCEP that fixed whatever was going on. Or, 
perhaps the Unidata switch to a 20 way feed somehow reduced a load on a router 
somewhere and data is getting through more easily?



Strange..



Pete



<conduit_lag_idd.aos.wisc.edu_20180420_0409UTC.gif>



--
Pete Pokrandt - Systems Programmer
UW-Madison Dept of Atmospheric and Oceanic Sciences
608-262-3086  - poker@xxxxxxxxxxxx



________________________________

From: Person, Arthur A. <aap1@xxxxxxx>
Sent: Thursday, April 18, 2019 2:20 PM
To: Gilbert Sebenste; Pete Pokrandt
Cc: Kevin Goebbert; conduit@xxxxxxxxxxxxxxxx; _NCEP.List.pmb-dataflow; Mike 
Zuranski; Derek VanPelt - NOAA Affiliate; Dustin Sheffler - NOAA Federal; 
support-conduit@xxxxxxxxxxxxxxxx
Subject: Re: [conduit] [Ncep.list.pmb-dataflow] Large lags on CONDUIT feed - 
started a week or so ago



All --



I switched our test system, iddrs2a, feeding from conduit.ncep.noaa.gov from a 
2-way split to a 20-way split yesterday, and the results are dramatic:



<pastedImage.png>

Although conduit feed performance at other sites improved a little last night 
with the MRMS feed failure, it doesn't explain this improvement entirely.  This 
leads me to ponder the causes of such an improvement:



1) The network path does not appear to be bandwidth constrained, otherwise 
there would be no improvement no matter how many pipes were used;



2) The problem, therefore, would appear to be packet oriented, either with path 
packet saturation, or packet shaping.



I'm not a networking expert, so maybe I'm missing another possibility here, but 
I'm curious whether packet shaping could account for some of the throughput 
issues.  I've also been having trouble getting timely delivery of our Unidata 
IDD satellite feed, and discovered that switching that to a 10-way split feed 
(from a 2-way split) has reduced the latencies from 2000-3000 seconds down to 
less than 300 seconds.  Interestingly, the peak satellite feed latencies (see 
below) occur at the same time as the peak conduit latencies, but this path is 
unrelated to NCEP (as far as I know).  Is it possible that Internet2 could be 
packet-shaping their traffic, and that this could be part of the cause of the 
packet latencies we're seeing?
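
The reasoning in points 1) and 2) above can be sketched numerically. In this 
toy model (my assumption for illustration, not a measurement), shaping caps 
each TCP flow at r_flow MB/s while the path can carry r_path MB/s in aggregate; 
the rates and run size are made-up round numbers:

```python
# Toy model of per-flow shaping: splitting one LDM request into n parallel
# requests cuts delivery time until the aggregate path capacity binds,
# after which extra connections stop helping. A pure bandwidth constraint
# (point 1 above) would show no improvement from splitting at all.
def delivery_seconds(size_mb: float, n_flows: int,
                     r_flow: float = 1.0, r_path: float = 12.0) -> float:
    return size_mb / min(n_flows * r_flow, r_path)

for n in (2, 10, 20):
    print(n, delivery_seconds(600.0, n))  # 2 -> 300.0, 10 -> 60.0, 20 -> 50.0
```

The observed pattern in this thread (2-way slow, 10-way and 20-way much faster, 
with diminishing returns) is consistent with this shape.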



                             Art



<pastedImage.png>





Arthur A. Person
Assistant Research Professor, System Administrator
Penn State Department of Meteorology and Atmospheric Science
email:  aap1@xxxxxxx, phone:  814-863-1563



________________________________

From: conduit-bounces@xxxxxxxxxxxxxxxx on behalf of Gilbert Sebenste 
<gilbert@xxxxxxxxxxxxxxxx>
Sent: Thursday, April 18, 2019 2:29 AM
To: Pete Pokrandt
Cc: Kevin Goebbert; conduit@xxxxxxxxxxxxxxxx; _NCEP.List.pmb-dataflow; Mike 
Zuranski; Derek VanPelt - NOAA Affiliate; Dustin Sheffler - NOAA Federal; 
support-conduit@xxxxxxxxxxxxxxxx
Subject: Re: [conduit] [Ncep.list.pmb-dataflow] Large lags on CONDUIT feed - 
started a week or so ago



FYI: all evening and into the overnight, MRMS data has been missing, and QC BR 
has been down for the last 40 minutes, but smaller products are coming through 
somewhat more reliably as of 6Z. CONDUIT was still substantially delayed around 
4Z with the GFS.



Gilbert

On Apr 16, 2019, at 5:43 PM, Pete Pokrandt <poker@xxxxxxxxxxxx> wrote:

Here are a few traceroutes from just now, from idd-agg.aos.wisc.edu to 
conduit.ncep.noaa.gov. The lags are currently running around 600-800 seconds. 
I'm not including all of the * * * lines after 140.90.76.65, which is 
presumably behind a firewall.





2209 UTC Tuesday Apr 16

traceroute -p 388 conduit.ncep.noaa.gov
traceroute to conduit.ncep.noaa.gov (140.90.101.42), 30 hops max, 60 byte packets
 1  vlan-510-cssc-gw.net.wisc.edu (144.92.130.1)  0.906 ms  0.701 ms  0.981 ms
 2  128.104.4.129 (128.104.4.129)  1.700 ms  1.737 ms  1.772 ms
 3  rx-cssc-b380-1-core-bundle-ether2-1521.net.wisc.edu (146.151.168.4)  1.740 ms  3.343 ms  3.336 ms
 4  rx-animal-226-2-core-bundle-ether1-1928.net.wisc.edu (146.151.166.122)  2.043 ms  2.034 ms  1.796 ms
 5  144.92.254.229 (144.92.254.229)  11.530 ms  11.472 ms  11.535 ms
 6  et-1-1-5.4079.rtsw.ashb.net.internet2.edu (162.252.70.60)  22.813 ms  22.899 ms  22.886 ms
 7  et-11-3-0-1275.clpk-core.maxgigapop.net (206.196.177.2)  24.248 ms  24.195 ms  24.172 ms
 8  nwave-clpk-re.demarc.maxgigapop.net (206.196.177.189)  24.244 ms  24.196 ms  24.183 ms
 9  ae-2.666.rtr.clpk.nwave.noaa.gov (137.75.68.4)  24.937 ms  24.884 ms  24.878 ms
10  140.208.63.30 (140.208.63.30)  134.030 ms  126.195 ms  126.305 ms
11  140.90.76.65 (140.90.76.65)  106.810 ms  104.553 ms  104.603 ms



2230 UTC Tuesday Apr 16

traceroute -p 388 conduit.ncep.noaa.gov
traceroute to conduit.ncep.noaa.gov (140.90.101.42), 30 hops max, 60 byte packets
 1  vlan-510-cssc-gw.net.wisc.edu (144.92.130.1)  1.391 ms  1.154 ms  5.902 ms
 2  128.104.4.129 (128.104.4.129)  6.917 ms  6.895 ms  2.004 ms
 3  rx-cssc-b380-1-core-bundle-ether2-1521.net.wisc.edu (146.151.168.4)  3.158 ms  3.293 ms  3.251 ms
 4  rx-animal-226-2-core-bundle-ether1-1928.net.wisc.edu (146.151.166.122)  6.185 ms  2.278 ms  2.425 ms
 5  144.92.254.229 (144.92.254.229)  6.909 ms  13.255 ms  6.863 ms
 6  et-1-1-5.4079.rtsw.ashb.net.internet2.edu (162.252.70.60)  23.328 ms  23.244 ms  28.845 ms
 7  et-11-3-0-1275.clpk-core.maxgigapop.net (206.196.177.2)  30.308 ms  24.575 ms  24.536 ms
 8  nwave-clpk-re.demarc.maxgigapop.net (206.196.177.189)  29.594 ms  24.624 ms  24.618 ms
 9  ae-2.666.rtr.clpk.nwave.noaa.gov (137.75.68.4)  24.581 ms  30.164 ms  24.627 ms
10  140.208.63.30 (140.208.63.30)  25.677 ms  25.767 ms  29.543 ms
11  140.90.76.65 (140.90.76.65)  105.812 ms  105.345 ms  108.857 ms



2232 UTC Tuesday Apr 16

traceroute -p 388 conduit.ncep.noaa.gov
traceroute to conduit.ncep.noaa.gov (140.90.101.42), 30 hops max, 60 byte packets
 1  vlan-510-cssc-gw.net.wisc.edu (144.92.130.1)  1.266 ms  1.070 ms  1.226 ms
 2  128.104.4.129 (128.104.4.129)  1.915 ms  2.652 ms  2.775 ms
 3  rx-cssc-b380-1-core-bundle-ether2-1521.net.wisc.edu (146.151.168.4)  2.353 ms  2.129 ms  2.314 ms
 4  rx-animal-226-2-core-bundle-ether1-1928.net.wisc.edu (146.151.166.122)  2.114 ms  2.111 ms  2.163 ms
 5  144.92.254.229 (144.92.254.229)  6.891 ms  6.838 ms  6.840 ms
 6  et-1-1-5.4079.rtsw.ashb.net.internet2.edu (162.252.70.60)  23.336 ms  23.283 ms  23.364 ms
 7  et-11-3-0-1275.clpk-core.maxgigapop.net (206.196.177.2)  24.493 ms  24.136 ms  24.152 ms
 8  nwave-clpk-re.demarc.maxgigapop.net (206.196.177.189)  24.161 ms  24.173 ms  24.176 ms
 9  ae-2.666.rtr.clpk.nwave.noaa.gov (137.75.68.4)  24.165 ms  24.331 ms  24.201 ms
10  140.208.63.30 (140.208.63.30)  25.361 ms  25.427 ms  25.240 ms
11  140.90.76.65 (140.90.76.65)  113.194 ms  115.553 ms  115.543 ms





2234 UTC Tuesday Apr 16



traceroute -p 388 conduit.ncep.noaa.gov
traceroute to conduit.ncep.noaa.gov (140.90.101.42), 30 hops max, 60 byte packets
 1  vlan-510-cssc-gw.net.wisc.edu (144.92.130.1)  0.901 ms  0.663 ms  0.826 ms
 2  128.104.4.129 (128.104.4.129)  1.645 ms  1.948 ms  1.729 ms
 3  rx-cssc-b380-1-core-bundle-ether2-1521.net.wisc.edu (146.151.168.4)  1.804 ms  1.788 ms  1.849 ms
 4  rx-animal-226-2-core-bundle-ether1-1928.net.wisc.edu (146.151.166.122)  2.011 ms  2.004 ms  1.982 ms
 5  144.92.254.229 (144.92.254.229)  6.241 ms  6.240 ms  6.220 ms
 6  et-1-1-5.4079.rtsw.ashb.net.internet2.edu (162.252.70.60)  23.042 ms  23.072 ms  23.033 ms
 7  et-11-3-0-1275.clpk-core.maxgigapop.net (206.196.177.2)  24.094 ms  24.398 ms  24.370 ms
 8  nwave-clpk-re.demarc.maxgigapop.net (206.196.177.189)  24.166 ms  24.166 ms  24.108 ms
 9  ae-2.666.rtr.clpk.nwave.noaa.gov (137.75.68.4)  24.056 ms  24.306 ms  24.215 ms
10  140.208.63.30 (140.208.63.30)  25.199 ms  25.284 ms  25.351 ms
11  140.90.76.65 (140.90.76.65)  118.314 ms  118.707 ms  118.768 ms



2236 UTC Tuesday Apr 16



traceroute -p 388 conduit.ncep.noaa.gov
traceroute to conduit.ncep.noaa.gov (140.90.101.42), 30 hops max, 60 byte packets
 1  vlan-510-cssc-gw.net.wisc.edu (144.92.130.1)  0.918 ms  0.736 ms  0.864 ms
 2  128.104.4.129 (128.104.4.129)  1.517 ms  1.630 ms  1.734 ms
 3  rx-cssc-b380-1-core-bundle-ether2-1521.net.wisc.edu (146.151.168.4)  1.998 ms  3.437 ms  3.437 ms
 4  rx-animal-226-2-core-bundle-ether1-1928.net.wisc.edu (146.151.166.122)  1.899 ms  1.896 ms  1.867 ms
 5  144.92.254.229 (144.92.254.229)  6.384 ms  6.317 ms  6.314 ms
 6  et-1-1-5.4079.rtsw.ashb.net.internet2.edu (162.252.70.60)  22.980 ms  23.167 ms  23.078 ms
 7  et-11-3-0-1275.clpk-core.maxgigapop.net (206.196.177.2)  24.181 ms  24.152 ms  24.121 ms
 8  nwave-clpk-re.demarc.maxgigapop.net (206.196.177.189)  48.556 ms  47.824 ms  47.799 ms
 9  ae-2.666.rtr.clpk.nwave.noaa.gov (137.75.68.4)  24.166 ms  24.154 ms  24.214 ms
10  140.208.63.30 (140.208.63.30)  25.310 ms  25.268 ms  25.401 ms
11  140.90.76.65 (140.90.76.65)  118.299 ms  123.763 ms  122.207 ms



2242 UTC



traceroute -p 388 conduit.ncep.noaa.gov
traceroute to conduit.ncep.noaa.gov (140.90.101.42), 30 hops max, 60 byte packets
 1  vlan-510-cssc-gw.net.wisc.edu (144.92.130.1)  1.337 ms  1.106 ms  1.285 ms
 2  128.104.4.129 (128.104.4.129)  6.039 ms  5.778 ms  1.813 ms
 3  rx-cssc-b380-1-core-bundle-ether2-1521.net.wisc.edu (146.151.168.4)  2.275 ms  2.464 ms  2.517 ms
 4  rx-animal-226-2-core-bundle-ether1-1928.net.wisc.edu (146.151.166.122)  2.288 ms  6.978 ms  3.506 ms
 5  144.92.254.229 (144.92.254.229)  10.369 ms  6.626 ms  10.281 ms
 6  et-1-1-5.4079.rtsw.ashb.net.internet2.edu (162.252.70.60)  23.513 ms  23.297 ms  23.295 ms
 7  et-11-3-0-1275.clpk-core.maxgigapop.net (206.196.177.2)  27.938 ms  24.589 ms  28.783 ms
 8  nwave-clpk-re.demarc.maxgigapop.net (206.196.177.189)  28.796 ms  24.630 ms  28.793 ms
 9  ae-2.666.rtr.clpk.nwave.noaa.gov (137.75.68.4)  24.576 ms  24.545 ms  24.587 ms
10  140.208.63.30 (140.208.63.30)  85.763 ms  85.768 ms  83.623 ms
11  140.90.76.65 (140.90.76.65)  131.912 ms  132.662 ms  132.340 ms



Pete



--
Pete Pokrandt - Systems Programmer
UW-Madison Dept of Atmospheric and Oceanic Sciences
608-262-3086 - poker@xxxxxxxxxxxx



________________________________

From: conduit-bounces@xxxxxxxxxxxxxxxx <conduit-bounces@xxxxxxxxxxxxxxxx> on behalf of Pete Pokrandt <poker@xxxxxxxxxxxx>
Sent: Tuesday, April 16, 2019 3:04 PM
To: Gilbert Sebenste; Tyle, Kevin R
Cc: Kevin Goebbert; conduit@xxxxxxxxxxxxxxxx; _NCEP.List.pmb-dataflow; Derek VanPelt - NOAA Affiliate; Mike Zuranski; Dustin Sheffler - NOAA Federal; support-conduit@xxxxxxxxxxxxxxxx
Subject: Re: [conduit] [Ncep.list.pmb-dataflow] Large lags on CONDUIT feed - started a week or so ago



At UW-Madison, we had incomplete 12 UTC GFS data starting with the 177h 
forecast. Lags exceeded 3600s.





Pete





--
Pete Pokrandt - Systems Programmer
UW-Madison Dept of Atmospheric and Oceanic Sciences
608-262-3086 - poker@xxxxxxxxxxxx



________________________________

From: Gilbert Sebenste <gilbert@xxxxxxxxxxxxxxxx>
Sent: Tuesday, April 16, 2019 2:44 PM
To: Tyle, Kevin R
Cc: Pete Pokrandt; Dustin Sheffler - NOAA Federal; Mike Zuranski; Derek VanPelt - NOAA Affiliate; Kevin Goebbert; conduit@xxxxxxxxxxxxxxxx; _NCEP.List.pmb-dataflow; support-conduit@xxxxxxxxxxxxxxxx
Subject: Re: [conduit] [Ncep.list.pmb-dataflow] Large lags on CONDUIT feed - started a week or so ago



Yes, here at AllisonHouse too. We can feed from a number of sites, and all of them were dropping GFS and were delayed by an hour.



Gilbert

On Apr 16, 2019, at 2:39 PM, Tyle, Kevin R <ktyle@xxxxxxxxxx> wrote:

For what it's worth, our 12Z GFS data ingest was quite bad today ... many lost products beyond F168 (we feed from UWisc-MSN primary and PSU secondary).



_____________________________________________
Kevin Tyle, M.S.; Manager of Departmental Computing
Dept. of Atmospheric & Environmental Sciences
University at Albany
Earth Science 235, 1400 Washington Avenue
Albany, NY 12222
Email: ktyle@xxxxxxxxxx
Phone: 518-442-4578
_____________________________________________

________________________________

From: conduit-bounces@xxxxxxxxxxxxxxxx <conduit-bounces@xxxxxxxxxxxxxxxx> on behalf of Pete Pokrandt <poker@xxxxxxxxxxxx>
Sent: Tuesday, April 16, 2019 12:00 PM
To: Dustin Sheffler - NOAA Federal; Mike Zuranski
Cc: Kevin Goebbert; conduit@xxxxxxxxxxxxxxxx; Derek VanPelt - NOAA Affiliate; _NCEP.List.pmb-dataflow; support-conduit@xxxxxxxxxxxxxxxx
Subject: Re: [conduit] [Ncep.list.pmb-dataflow] Large lags on CONDUIT feed - started a week or so ago



All,

Just keeping this in the foreground.



CONDUIT lags continue to be very large compared to what they were before whatever changed back in February. Prior to that, we rarely saw lags of more than ~300 seconds. Now they are routinely 1500-2000 seconds at UW-Madison and Penn State, and over 3000 seconds at Unidata, where they appear to be on the edge of losing data. This does not bode well with all of the IDP applications failing back over to CP today.



Could we send you some traceroutes, and you send some back to us, to try to isolate where in the network this is happening? It feels like congestion or a bad route somewhere; the lags seem to be worse on weekdays than on weekends, if that helps at all.



Here are the current CONDUIT lags to UW-Madison, Penn State and Unidata.



<iddstats_CONDUIT_idd_aos_wisc_edu_ending_20190416_1600UTC.gif>



<iddstats_CONDUIT_idd_meteo_psu_edu_ending_20190416_1600UTC.gif>



<iddstats_CONDUIT_conduit_unidata_ucar_edu_ending_20190416_1600UTC.gif>









Pete





--
Pete Pokrandt - Systems Programmer
UW-Madison Dept of Atmospheric and Oceanic Sciences
608-262-3086 - poker@xxxxxxxxxxxx



________________________________

From: conduit-bounces@xxxxxxxxxxxxxxxx <conduit-bounces@xxxxxxxxxxxxxxxx> on behalf of Dustin Sheffler - NOAA Federal <dustin.sheffler@xxxxxxxx>
Sent: Tuesday, April 9, 2019 12:52 PM
To: Mike Zuranski
Cc: conduit@xxxxxxxxxxxxxxxx; _NCEP.List.pmb-dataflow; Derek VanPelt - NOAA Affiliate; support-conduit@xxxxxxxxxxxxxxxx
Subject: Re: [conduit] [Ncep.list.pmb-dataflow] Large lags on CONDUIT feed - started a week or so ago



Hi Mike,



Thanks for the feedback on NOMADS. We recently found a slowness issue when NOMADS runs out of our Boulder data center; our teams are working on it now that NOMADS is live out of the College Park data center. When slowness is reported by only a handful of users, it is sometimes hard to quantify whether it is caused by something wrong in our data center, a bad network path between a customer (possibly just from a particular region of the country) and our data center, a local issue on the customer's end, or some other cause.



Conduit is only ever run from our College Park data center. Its slowness is not tied to the Boulder NOMADS issue, but it does seem to be at least a little tied to which of our data centers NOMADS is running from. When NOMADS is in Boulder along with the majority of our other NCEP applications, the strain on the College Park data center is minimal and Conduit appears to run better as a result. When NOMADS runs in College Park (as it has since late yesterday) there is more strain on the data center, and Conduit appears (based on provided user graphs) to run a bit worse around peak model times. These are just my observations; we are still investigating what may have changed to cause the Conduit latencies in the first place so that we can resolve this potential constraint.



-Dustin



On Tue, Apr 9, 2019 at 4:28 PM Mike Zuranski <zuranski@xxxxxxxxxxxxxxx> wrote:

Hi everyone,



I've avoided jumping into this conversation since I don't deal much with 
Conduit these days, but Derek just mentioned something that I do have some 
applicable feedback on...

> Two items happened last night.  1. NOMADS was moved back to College Park...

We get nearly all of our model data via NOMADS. When it switched to Boulder last week we saw a significant drop in download speeds, down to a couple hundred KB/s or slower. Starting last night, we're back to speeds on the order of MB/s or tens of MB/s. Switching back to College Park seems to confirm that something about routing from Boulder was responsible. But again, this was all on NOMADS; I'm not sure if it's related to what's happening on Conduit.



When I noticed this last week I sent an email to sdm@xxxxxxxx including a traceroute taken at the time; let me know if you'd like me to find it and pass it along here or somewhere else.



-Mike



======================

Mike Zuranski

Meteorology Support Analyst

College of DuPage - Nexlab

Weather.cod.edu

======================





On Tue, Apr 9, 2019 at 10:51 AM Person, Arthur A. <aap1@xxxxxxx> wrote:

Derek,



Do we know what change might have been made around February 10th when the 
CONDUIT problems first started happening?  Prior to that time, the CONDUIT feed 
had been very crisp for a long period of time.



Thanks...            Art





Arthur A. Person
Assistant Research Professor, System Administrator
Penn State Department of Meteorology and Atmospheric Science
email: aap1@xxxxxxx, phone: 814-863-1563



________________________________

From: Derek VanPelt - NOAA Affiliate <derek.vanpelt@xxxxxxxx>
Sent: Tuesday, April 9, 2019 11:34 AM
To: Holly Uhlenhake - NOAA Federal
Cc: Carissa Klemmer - NOAA Federal; Person, Arthur A.; Pete Pokrandt; _NCEP.List.pmb-dataflow; conduit@xxxxxxxxxxxxxxxx; support-conduit@xxxxxxxxxxxxxxxx
Subject: Re: [conduit] Large lags on CONDUIT feed - started a week or so ago



Hi all,



Two items happened last night.



1.   NOMADS was moved back to College Park, which means there is a lot more traffic going out, which will have an effect on the Conduit latencies. We do not have a full load on the College Park servers, as many of the other applications are still running from Boulder, but NOMADS will certainly increase overall load.



2.   As Holly said, there were further issues delaying and changing the timing of the model output yesterday afternoon/evening. I will be watching from our end and monitoring the Unidata 48-hour graph (thank you for the link) throughout the day.



Please let us know if you have questions or more information to help us analyze what you are seeing.



Thank you,



Derek





On Tue, Apr 9, 2019 at 6:50 AM Holly Uhlenhake - NOAA Federal <holly.uhlenhake@xxxxxxxx> wrote:

Hi Pete,



We also had an issue on the supercomputer yesterday where several models going to Conduit would have been stacked on top of each other instead of coming out in a more spread-out fashion. It's not inconceivable that Conduit could have backed up working through the abnormally large glut of GRIB messages. Are things any better this morning?



Thanks,

Holly



On Tue, Apr 9, 2019 at 12:37 AM Pete Pokrandt <poker@xxxxxxxxxxxx> wrote:

Something changed starting with today's 18 UTC model cycle, and our lags shot 
up to over 3600 seconds, where we started losing data. They are growing again 
now with the 00 UTC cycle as well. PSU and Unidata CONDUIT stats show similar 
abnormally large lags.



FYI.
Pete





--
Pete Pokrandt - Systems Programmer
UW-Madison Dept of Atmospheric and Oceanic Sciences
608-262-3086 - poker@xxxxxxxxxxxx



________________________________

From: Person, Arthur A. <aap1@xxxxxxx>
Sent: Friday, April 5, 2019 2:10 PM
To: Carissa Klemmer - NOAA Federal
Cc: Pete Pokrandt; Derek VanPelt - NOAA Affiliate; Gilbert Sebenste; conduit@xxxxxxxxxxxxxxxx; _NCEP.List.pmb-dataflow; support-conduit@xxxxxxxxxxxxxxxx
Subject: Re: Large lags on CONDUIT feed - started a week or so ago



Carissa,



The Boulder connection is definitely performing very well for CONDUIT.  
Although there have been a couple of little blips (~ 120 seconds) since 
yesterday, overall the performance is superb.  I don't think it's quite as 
clean as prior to the ~February 10th date when the D.C. connection went bad, 
but it's still excellent performance.  Here's our graph now with a single 
connection (no splits):

<pastedImage.png>

My next question is:  Will CONDUIT stay pointing at Boulder until D.C. is 
fixed, or might you be required to switch back to D.C. at some point before 
that?



Thanks...               Art



Arthur A. Person
Assistant Research Professor, System Administrator
Penn State Department of Meteorology and Atmospheric Science
email: aap1@xxxxxxx, phone: 814-863-1563



________________________________

From: Carissa Klemmer - NOAA Federal <carissa.l.klemmer@xxxxxxxx>
Sent: Thursday, April 4, 2019 6:22 PM
To: Person, Arthur A.
Cc: Pete Pokrandt; Derek VanPelt - NOAA Affiliate; Gilbert Sebenste; conduit@xxxxxxxxxxxxxxxx; _NCEP.List.pmb-dataflow; support-conduit@xxxxxxxxxxxxxxxx
Subject: Re: Large lags on CONDUIT feed - started a week or so ago



Catching up here.



Derek,

Do we have traceroutes from all users? Does anything in VCenter show any system 
resource constraints?

On Thursday, April 4, 2019, Person, Arthur A. <aap1@xxxxxxx> wrote:

Yeh, definitely looks "blipier" starting around 7Z this morning, but nothing 
like it was before.  And all last night was clean.  Here's our graph with a 
2-way split, a huge improvement over what it was before the switch to Boulder:



Agree with Pete that this morning's data probably isn't a good test since there 
were other factors.  Since this seems so much better, I'm going to try 
switching to no split as an experiment and see how it holds up.



                        Art



Arthur A. Person
Assistant Research Professor, System Administrator
Penn State Department of Meteorology and Atmospheric Science
email: aap1@xxxxxxx, phone: 814-863-1563



________________________________

From: Pete Pokrandt <poker@xxxxxxxxxxxx>
Sent: Thursday, April 4, 2019 1:51 PM
To: Derek VanPelt - NOAA Affiliate
Cc: Person, Arthur A.; Gilbert Sebenste; Anne Myckow - NOAA Affiliate; conduit@xxxxxxxxxxxxxxxx; _NCEP.List.pmb-dataflow; support-conduit@xxxxxxxxxxxxxxxx
Subject: Re: [Ncep.list.pmb-dataflow] [conduit] Large lags on CONDUIT feed - started a week or so ago



Ah, so perhaps not a good test. I'll set it back to a 5-way split and see how it looks tomorrow.



Thanks for the info,

Pete





--
Pete Pokrandt - Systems Programmer
UW-Madison Dept of Atmospheric and Oceanic Sciences
608-262-3086 - poker@xxxxxxxxxxxx



________________________________

From: Derek VanPelt - NOAA Affiliate <derek.vanpelt@xxxxxxxx>
Sent: Thursday, April 4, 2019 12:38 PM
To: Pete Pokrandt
Cc: Person, Arthur A.; Gilbert Sebenste; Anne Myckow - NOAA Affiliate; conduit@xxxxxxxxxxxxxxxx; _NCEP.List.pmb-dataflow; support-conduit@xxxxxxxxxxxxxxxx
Subject: Re: [Ncep.list.pmb-dataflow] [conduit] Large lags on CONDUIT feed - started a week or so ago



Hi Pete -- we did have a separate issue hit the CONDUIT feed today. We should be recovering now, but the backlog was sizeable. If these numbers are not back to the baseline in the next hour or so, please let us know. We are also watching our queues; they are decreasing, but not as quickly as we had hoped.



Thank you,



Derek



On Thu, Apr 4, 2019 at 1:26 PM 'Pete Pokrandt' via _NCEP list.pmb-dataflow <ncep.list.pmb-dataflow@xxxxxxxx> wrote:

FYI - there is still a much larger lag for the 12 UTC run with a 5-way split 
compared to a 10-way split. It's better since everything else failed over to 
Boulder, but I'd venture to guess that's not the root of the problem.





Prior to whatever is going on to cause this, I don't recall ever seeing lags this large with a 5-way split. It looked much more like the left-hand side of this graph, with small increases in lag with each 6-hourly model run cycle, but more like 100 seconds vs the ~900 that I got this morning.

FYI, I am going to change back to a 10-way split for now.
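For anyone on the list unfamiliar with what an N-way split means here: it is a set of parallel REQUEST lines in LDM's ldmd.conf whose regular expressions partition the feed across connections. The sketch below is a hypothetical illustration of a 5-way split following the commonly documented convention of matching the trailing digit of the CONDUIT product sequence number; the exact patterns and upstream host in our actual configuration may differ.

```
# Hypothetical 5-way CONDUIT split in ldmd.conf (illustration only).
# Each REQUEST matches products whose sequence number ends in one of
# two digits, so each of the five connections carries roughly 1/5 of
# the feed volume. A 10-way split would use one REQUEST per digit.
REQUEST CONDUIT "[09]$" conduit.ncep.noaa.gov
REQUEST CONDUIT "[18]$" conduit.ncep.noaa.gov
REQUEST CONDUIT "[27]$" conduit.ncep.noaa.gov
REQUEST CONDUIT "[36]$" conduit.ncep.noaa.gov
REQUEST CONDUIT "[45]$" conduit.ncep.noaa.gov
```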



Pete







--
Pete Pokrandt - Systems Programmer
UW-Madison Dept of Atmospheric and Oceanic Sciences
608-262-3086 - poker@xxxxxxxxxxxx



________________________________

From: conduit-bounces@xxxxxxxxxxxxxxxx <conduit-bounces@xxxxxxxxxxxxxxxx> on behalf of Pete Pokrandt <poker@xxxxxxxxxxxx>
Sent: Wednesday, April 3, 2019 4:57 PM
To: Person, Arthur A.; Gilbert Sebenste; Anne Myckow - NOAA Affiliate
Cc: conduit@xxxxxxxxxxxxxxxx; _NCEP.List.pmb-dataflow; support-conduit@xxxxxxxxxxxxxxxx
Subject: Re: [conduit] [Ncep.list.pmb-dataflow] Large lags on CONDUIT feed - started a week or so ago



Sorry, I was out this morning and just had a chance to look into this. I concur with Art and Gilbert that things appear to have gotten better starting with the failover of everything else to Boulder yesterday. I will also reconfigure to go back to a 5-way split (as opposed to the 10-way split that I've been using since this issue began) and keep an eye on tomorrow's 12 UTC model run cycle - if the lags go up, it usually happens worst during that cycle, shortly before 18 UTC each day.



I'll report back tomorrow how it looks, or you can see at



http://rtstats.unidata.ucar.edu/cgi-bin/rtstats/iddstats_nc?CONDUIT+idd.aos.wisc.edu



Thanks,

Pete





--
Pete Pokrandt - Systems Programmer
UW-Madison Dept of Atmospheric and Oceanic Sciences
608-262-3086 - poker@xxxxxxxxxxxx



________________________________

From: conduit-bounces@xxxxxxxxxxxxxxxx <conduit-bounces@xxxxxxxxxxxxxxxx> on behalf of Person, Arthur A. <aap1@xxxxxxx>
Sent: Wednesday, April 3, 2019 4:04 PM
To: Gilbert Sebenste; Anne Myckow - NOAA Affiliate
Cc: conduit@xxxxxxxxxxxxxxxx; _NCEP.List.pmb-dataflow; support-conduit@xxxxxxxxxxxxxxxx
Subject: Re: [conduit] [Ncep.list.pmb-dataflow] Large lags on CONDUIT feed - started a week or so ago



Anne,



I'll hop back in the loop here... for some reason these replies started going into my junk folder (bleh). Anyway, I agree with Gilbert's assessment. Things turned real clean around 12Z yesterday, looking at the graphs. I usually look at flood.atmos.uiuc.edu when there are problems, as their connection always seems to be the cleanest. If there are even small blips or ups and downs in their latencies, that usually means there's a network aberration somewhere that amplifies into hundreds or thousands of seconds at our site and elsewhere. Looking at their graph now, you can see the blipiness up until 12Z yesterday, and then it's flat (except for the one spike around 16Z today, which I would ignore):



<pastedImage.png>

Our direct-connected site, which is using a 10-way split right now, also shows a return to calmness in the latencies:

Prior to the recent latency jump, I did not use split requests and reception had been stellar for quite some time. My suspicion is that this is a network congestion issue somewhere close to the source, since it seems to affect all downstream sites. For that reason, I don't think solving this problem should necessarily involve upgrading your server software, but rather identifying what's jamming up the network near D.C.; testing this by switching to Boulder was an excellent idea. I will now try switching our system to a two-way split to see if this performance holds up with fewer pipes. Thanks for your help, and I'll let you know what I find out.



                                 Art



Arthur A. Person
Assistant Research Professor, System Administrator
Penn State Department of Meteorology and Atmospheric Science
email: aap1@xxxxxxx, phone: 814-863-1563



________________________________

From: conduit-bounces@xxxxxxxxxxxxxxxx <conduit-bounces@xxxxxxxxxxxxxxxx> on 
behalf of Gilbert Sebenste <gilbert@xxxxxxxxxxxxxxxx>
Sent: Wednesday, April 3, 2019 4:07 PM
To: Anne Myckow - NOAA Affiliate
Cc: conduit@xxxxxxxxxxxxxxxx; 
_NCEP.List.pmb-dataflow; 
support-conduit@xxxxxxxxxxxxxxxx
Subject: Re: [conduit] [Ncep.list.pmb-dataflow] Large lags on CONDUIT feed - 
started a week or so ago



Hello Anne,

I'll jump in here as well. Consider the CONDUIT delays at UNIDATA:

http://rtstats.unidata.ucar.edu/cgi-bin/rtstats/iddstats_nc?CONDUIT+conduit.unidata.ucar.edu

And now, Wisconsin:

http://rtstats.unidata.ucar.edu/cgi-bin/rtstats/iddstats_nc?CONDUIT+idd.aos.wisc.edu

And finally, the University of Washington:

http://rtstats.unidata.ucar.edu/cgi-bin/rtstats/iddstats_nc?CONDUIT+freshair1.atmos.washington.edu

All three of these sites have direct feeds from you. Flipping over to Boulder 
definitely caused a major improvement. There was still a brief spike in delay, 
but it was much shorter and smaller compared to what it was before.

Gilbert



On Wed, Apr 3, 2019 at 10:03 AM Anne Myckow - NOAA Affiliate 
<anne.myckow@xxxxxxxx> wrote:

Hi Pete,



As of yesterday we failed almost all of our applications over to our site in 
Boulder (meaning away from CONDUIT). Have you noticed an improvement in your 
speeds since yesterday afternoon? If so, this will give us a clue that maybe 
there's something interfering on our side that isn't specifically CONDUIT, but 
another app that might be causing congestion. (And if it's the same, then 
that's a clue in the other direction.)



Thanks,

Anne



On Mon, Apr 1, 2019 at 3:24 PM Pete Pokrandt <poker@xxxxxxxxxxxx> wrote:

The lag here at UW-Madison was up to 1200 seconds today, and that's with a 
10-way split feed. Whatever is causing the issue has definitely not been 
resolved, and historically it is worse during the work week than on the 
weekends, if that helps at all.



Pete





--
Pete Pokrandt - Systems Programmer
UW-Madison Dept of Atmospheric and Oceanic Sciences
608-262-3086 - poker@xxxxxxxxxxxx



________________________________

From: Anne Myckow - NOAA Affiliate <anne.myckow@xxxxxxxx>
Sent: Thursday, March 28, 2019 4:28 PM
To: Person, Arthur A.
Cc: Carissa Klemmer - NOAA Federal; Pete Pokrandt; _NCEP.List.pmb-dataflow; 
conduit@xxxxxxxxxxxxxxxx; support-conduit@xxxxxxxxxxxxxxxx
Subject: Re: [Ncep.list.pmb-dataflow] Large lags on CONDUIT feed - started a 
week or so ago



Hello Art,



We will not be upgrading to version 6.13 on these systems as they are not 
robust enough to support the local logging inherent in the new version.



I will check with my team on whether there are any further actions we can take 
to troubleshoot this issue, but I fear we may be at the limit of our ability 
to make this better.



I’ll let you know tomorrow where we stand. Thanks.

Anne



On Mon, Mar 25, 2019 at 3:00 PM Person, Arthur A. <aap1@xxxxxxx> wrote:

Carissa,



Can you report any status on this inquiry?



Thanks...          Art



Arthur A. Person
Assistant Research Professor, System Administrator
Penn State Department of Meteorology and Atmospheric Science
email: aap1@xxxxxxx, phone: 814-863-1563



________________________________

From: Carissa Klemmer - NOAA Federal <carissa.l.klemmer@xxxxxxxx>
Sent: Tuesday, March 12, 2019 8:30 AM
To: Pete Pokrandt
Cc: Person, Arthur A.; conduit@xxxxxxxxxxxxxxxx; 
support-conduit@xxxxxxxxxxxxxxxx; _NCEP.List.pmb-dataflow
Subject: Re: Large lags on CONDUIT feed - started a week or so ago



Hi Everyone



I’ve added the Dataflow team email to the thread. I haven’t heard that any 
changes were made or that any issues were found, but the team can look today 
and see if we have any indications of overall slowness anywhere.



Dataflow, try taking a look at the new Citrix or VM troubleshooting tools to 
see if there are any abnormal signatures that may explain this.

On Monday, March 11, 2019, Pete Pokrandt <poker@xxxxxxxxxxxx> wrote:

Art,

I don't know if NCEP ever figured anything out, but I've been able to keep my 
latencies reasonable (300-600 s max, mostly during the 12 UTC model suite) by 
splitting my CONDUIT request 10 ways, instead of the 5-way split I had been 
using, or a single request. Maybe give that a try and see if it helps at all.
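
For anyone wanting to try the same thing: LDM request splitting is done in 
ldmd.conf by giving several REQUEST lines whose patterns partition the 
sequence number at the end of each CONDUIT product ID, so each connection 
carries only a fraction of the feed. A sketch of the conventional 5-way 
last-digit split (upstream host as named in this thread; adjust patterns to 
taste):

```
# ldmd.conf fragment: 5-way CONDUIT split on the trailing sequence digit.
# Each REQUEST line opens its own connection, so no single TCP stream
# has to carry the entire feed.
REQUEST CONDUIT "[05]$" conduit.ncep.noaa.gov
REQUEST CONDUIT "[16]$" conduit.ncep.noaa.gov
REQUEST CONDUIT "[27]$" conduit.ncep.noaa.gov
REQUEST CONDUIT "[38]$" conduit.ncep.noaa.gov
REQUEST CONDUIT "[49]$" conduit.ncep.noaa.gov
```

A 10-way split, as described above, uses one REQUEST per digit ("0$" through 
"9$"). The LDM must be restarted (e.g. `ldmadmin restart`) for new request 
lines to take effect.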



Pete





--
Pete Pokrandt - Systems Programmer
UW-Madison Dept of Atmospheric and Oceanic Sciences
608-262-3086 - poker@xxxxxxxxxxxx



________________________________

From: Person, Arthur A. <aap1@xxxxxxx>
Sent: Monday, March 11, 2019 3:45 PM
To: Holly Uhlenhake - NOAA Federal; Pete Pokrandt
Cc: conduit@xxxxxxxxxxxxxxxx; support-conduit@xxxxxxxxxxxxxxxx
Subject: Re: [conduit] Large lags on CONDUIT feed - started a week or so ago



Holly,



Was there any resolution to this on the NCEP end?  I'm still seeing terrible 
delays (1000-4000 seconds) receiving data from conduit.ncep.noaa.gov.  It 
would be helpful to know whether things are resolved at NCEP's end so I know 
whether to look further down the line.



Thanks...           Art



Arthur A. Person
Assistant Research Professor, System Administrator
Penn State Department of Meteorology and Atmospheric Science
email: aap1@xxxxxxx, phone: 814-863-1563



________________________________

From: conduit-bounces@xxxxxxxxxxxxxxxx <conduit-bounces@xxxxxxxxxxxxxxxx> on 
behalf of Holly Uhlenhake - NOAA Federal <holly.uhlenhake@xxxxxxxx>
Sent: Thursday, February 21, 2019 12:05 PM
To: Pete Pokrandt
Cc: conduit@xxxxxxxxxxxxxxxx; support-conduit@xxxxxxxxxxxxxxxx
Subject: Re: [conduit] Large lags on CONDUIT feed - started a week or so ago



Hi Pete,



We'll take a look and see if we can figure out what might be going on.  We 
haven't done anything to try and address this yet, but based on your analysis 
I suspect it might be tied to a resource constraint on the VM or the blade it 
resides on.



Thanks,

Holly Uhlenhake

Acting Dataflow Team Lead



On Thu, Feb 21, 2019 at 11:32 AM Pete Pokrandt <poker@xxxxxxxxxxxx> wrote:

Just FYI, data is flowing, but the large lags continue.



http://rtstats.unidata.ucar.edu/cgi-bin/rtstats/iddstats_nc?CONDUIT+idd.aos.wisc.edu

http://rtstats.unidata.ucar.edu/cgi-bin/rtstats/iddstats_nc?CONDUIT+conduit.unidata.ucar.edu



Pete





--
Pete Pokrandt - Systems Programmer
UW-Madison Dept of Atmospheric and Oceanic Sciences
608-262-3086 - poker@xxxxxxxxxxxx



________________________________

From: conduit-bounces@xxxxxxxxxxxxxxxx <conduit-bounces@xxxxxxxxxxxxxxxx> on 
behalf of Pete Pokrandt <poker@xxxxxxxxxxxx>
Sent: Wednesday, February 20, 2019 12:07 PM
To: Carissa Klemmer - NOAA Federal
Cc: conduit@xxxxxxxxxxxxxxxx; support-conduit@xxxxxxxxxxxxxxxx
Subject: Re: [conduit] Large lags on CONDUIT feed - started a week or so ago



Data is flowing again - it picked up somewhere in the GEFS. Maybe the CONDUIT 
server was restarted, or the LDM on it? Lags are large (3000 s+) but dropping 
slowly.

Pete





--
Pete Pokrandt - Systems Programmer
UW-Madison Dept of Atmospheric and Oceanic Sciences
608-262-3086 - poker@xxxxxxxxxxxx



________________________________

From: conduit-bounces@xxxxxxxxxxxxxxxx <conduit-bounces@xxxxxxxxxxxxxxxx> on 
behalf of Pete Pokrandt <poker@xxxxxxxxxxxx>
Sent: Wednesday, February 20, 2019 11:56 AM
To: Carissa Klemmer - NOAA Federal
Cc: conduit@xxxxxxxxxxxxxxxx; support-conduit@xxxxxxxxxxxxxxxx
Subject: Re: [conduit] Large lags on CONDUIT feed - started a week or so ago



Just a quick follow-up - we started falling far enough behind (3600+ sec) that 
we are losing data. We got short files starting at 174h into the GFS run, and 
only got (incomplete) data through 207h.
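
The arithmetic behind those short files: the upstream LDM queue holds products 
only for a limited residence time, so once the feed latency approaches that 
age, the oldest products are deleted before they can be relayed downstream. A 
toy sketch of that relationship (the 3600 s threshold is an assumption 
inferred from the figure quoted above, not NCEP's actual queue configuration, 
and the function name is illustrative):

```python
def products_lost(latency_s: float, queue_residence_s: float = 3600.0) -> bool:
    """Return True when feed latency has reached the upstream queue's
    residence time, i.e. products age out before they can be relayed."""
    return latency_s >= queue_residence_s

# At ~3700 s of lag, loss begins; at 1200 s (the UW-Madison case) it does not.
```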



We have now not received any data on CONDUIT since 11:27 AM CST (1727 UTC) 
today (Wed Feb 20).



Pete





--
Pete Pokrandt - Systems Programmer
UW-Madison Dept of Atmospheric and Oceanic Sciences
608-262-3086 - poker@xxxxxxxxxxxx



________________________________

From: conduit-bounces@xxxxxxxxxxxxxxxx <conduit-bounces@xxxxxxxxxxxxxxxx> on 
behalf of Pete Pokrandt <poker@xxxxxxxxxxxx>
Sent: Wednesday, February 20, 2019 11:28 AM
To: Carissa Klemmer - NOAA Federal
Cc: conduit@xxxxxxxxxxxxxxxx; support-conduit@xxxxxxxxxxxxxxxx
Subject: [conduit] Large lags on CONDUIT feed - started a week or so ago



Carissa,



We have been feeding CONDUIT using a 5-way split feed direct from 
conduit.ncep.noaa.gov, and it had been really good for some time, with lags of 
30-60 seconds or less.



However, for the past week or so, we've been seeing some very large lags 
during each 6-hour model suite. Unidata is also seeing these - they are also 
feeding direct from conduit.ncep.noaa.gov.



http://rtstats.unidata.ucar.edu/cgi-bin/rtstats/iddstats_nc?CONDUIT+idd.aos.wisc.edu



http://rtstats.unidata.ucar.edu/cgi-bin/rtstats/iddstats_nc?CONDUIT+conduit.unidata.ucar.edu





Any idea what's going on, or how we can find out?



Thanks!

Pete





--
Pete Pokrandt - Systems Programmer
UW-Madison Dept of Atmospheric and Oceanic Sciences
608-262-3086 - poker@xxxxxxxxxxxx

_______________________________________________
NOTE: All exchanges posted to Unidata maintained email lists are
recorded in the Unidata inquiry tracking system and made publicly
available through the web.  Users who post to any of the lists we
maintain are reminded to remove any personal information that they
do not want to be made public.


conduit mailing list
conduit@xxxxxxxxxxxxxxxx
For list information or to unsubscribe, visit: 
http://www.unidata.ucar.edu/mailing_lists/


--

Carissa Klemmer
NCEP Central Operations
IDSB Branch Chief

301-683-3835



_______________________________________________
Ncep.list.pmb-dataflow mailing list
Ncep.list.pmb-dataflow@xxxxxxxxxxxxxxxxxxxx
https://www.lstsrv.ncep.noaa.gov/mailman/listinfo/ncep.list.pmb-dataflow

--

Anne Myckow

Lead Dataflow Analyst

NOAA/NCEP/NCO
301-683-3825









--

----



Gilbert Sebenste

Consulting Meteorologist

AllisonHouse, LLC



--

Derek Van Pelt

DataFlow Analyst

NOAA/NCEP/NCO


--

Carissa Klemmer
NCEP Central Operations
IDSB Branch Chief

301-683-3835





--

Derek Van Pelt

DataFlow Analyst

NOAA/NCEP/NCO

--

Misspelled straight from Derek's phone.




--

Dustin Sheffler

NCEP Central Operations - Dataflow
5830 University Research Court, Rm 1030
College Park, Maryland 20740

Office: (301) 683-3827





--

Derek Van Pelt

DataFlow Analyst

NOAA/NCEP/NCO



--

Anne Myckow

Lead Dataflow Analyst

NOAA/NCEP/NCO
301-683-3825











--

Dustin Sheffler

NCEP Central Operations - Dataflow
5830 University Research Court, Rm 1030
College Park, Maryland 20740

Office: (301) 683-3827




--

Anne Myckow

Lead Dataflow Analyst

NOAA/NCEP/NCO
301-683-3825

GIF image
