
Re: [conduit] [Ncep.list.pmb-dataflow] Large lags on CONDUIT feed - started a week or so ago



Hi all,

I have been reviewing the latency of the three CONDUIT feeds this
morning (image attached for March 7 through 13; latency in minutes on
the y-axis).  The Unidata feed shows substantial latency (up to 50
minutes) at model output hours (~5/11/17/23), but the other two feeds
have shown maximum latencies of less than 6 or 7 minutes since the
9th.  The "agg" server had closer to 15 minutes of latency on the 8th.

I can also verify at this point that each server is splitting its
request into 9 or 10 well-balanced requests.
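For anyone setting this up, the split described above is usually expressed in the LDM's ldmd.conf by issuing several disjoint REQUEST lines, keyed on the last digit of the product sequence number so the pieces are roughly balanced.  A sketch of a 10-way split (the upstream host is the one named in this thread; adjust patterns to your own feed):

```
# ldmd.conf sketch -- 10-way CONDUIT split
# Each REQUEST matches only products whose sequence number ends in one
# digit, so the ten connections carry disjoint, roughly equal shares.
REQUEST CONDUIT "0$" conduit.ncep.noaa.gov
REQUEST CONDUIT "1$" conduit.ncep.noaa.gov
REQUEST CONDUIT "2$" conduit.ncep.noaa.gov
REQUEST CONDUIT "3$" conduit.ncep.noaa.gov
REQUEST CONDUIT "4$" conduit.ncep.noaa.gov
REQUEST CONDUIT "5$" conduit.ncep.noaa.gov
REQUEST CONDUIT "6$" conduit.ncep.noaa.gov
REQUEST CONDUIT "7$" conduit.ncep.noaa.gov
REQUEST CONDUIT "8$" conduit.ncep.noaa.gov
REQUEST CONDUIT "9$" conduit.ncep.noaa.gov
```

Restart the LDM after editing so the new requests take effect.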

Putting all of this together makes me think that the server here is
not the (main) issue, and I am encouraged that splitting the requests
appears to have helped.  Please let me know whether this matches your
experience, and how (or whether) we should proceed further.

It may also be worth discussing whether crossfeeding each other would help you all.
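If crossfeeding is pursued, the LDM makes it fairly painless: a site can add a redundant REQUEST for the same feed from a peer, and the LDM discards duplicate products on arrival, so whichever path delivers first wins.  A sketch, with a hypothetical peer hostname:

```
# ldmd.conf sketch at site A -- primary from NCEP plus a peer failover.
# Duplicate products from the two upstreams are discarded automatically.
REQUEST CONDUIT ".*" conduit.ncep.noaa.gov
REQUEST CONDUIT ".*" idd.peer-site.example.edu   # hypothetical peer host
```

The peer site would carry the mirror-image REQUEST (and an ALLOW entry for site A's host) in its own ldmd.conf.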

Thank you,

Derek







On Tue, Mar 12, 2019 at 4:18 PM Person, Arthur A. <address@hidden> wrote:
>
> With the conduit request split into 10 threads for this morning's 12Z data, I 
> think there was some improvement, though not conclusive. Our delay peaked at 
> about 2200 seconds, which was much better than yesterday but about the same as 
> on the 10th.  However, I think the delays at other sites today were worse than 
> their recent average, which is why I believe ours was an improvement.
>
>
> A traceroute shows our connection going through Chicago 
> (rtsw.chic.net.internet2.edu) with an RTT of about 16 ms and then through 
> Ashburn (rtsw.ashb.net.internet2.edu) with an RTT of about 32 ms, and finally 
> into the Max Gigapop (clpk-core.maxgigapop.net) with an RTT of about 34 ms.  
> Does anyone follow a similar route who could share some timings for comparison?
>
>
>                     Art
>
>
> Arthur A. Person
> Assistant Research Professor, System Administrator
> Penn State Department of Meteorology and Atmospheric Science
> email:  address@hidden, phone:  814-863-1563
>
>
>
> ________________________________
> From: address@hidden <address@hidden> on behalf of Person, Arthur A. 
> <address@hidden>
> Sent: Tuesday, March 12, 2019 10:22 AM
> To: Carissa Klemmer - NOAA Federal; Pete Pokrandt
> Cc: address@hidden; _NCEP.List.pmb-dataflow; address@hidden
> Subject: Re: [conduit] Large lags on CONDUIT feed - started a week or so ago
>
>
> Thanks Carissa.
>
>
> Pete, I was running a single request until recently, but split it into 5 a 
> couple weeks ago with the results I reported below.  I'll try 10 and see if 
> that helps further.  It's been my experience that when the latencies go bad, 
> they are usually worse getting to us than other sites, not sure why.  Perhaps 
> the route, or proximity to I2 backbone...?  Things had been great for months 
> until approximately February 10th, give or take a few days.  Since then, the 
> latencies have been terrible.
>
>
>                   Art
>
>
> Arthur A. Person
> Assistant Research Professor, System Administrator
> Penn State Department of Meteorology and Atmospheric Science
> email:  address@hidden, phone:  814-863-1563
>
>
>
> ________________________________
> From: Carissa Klemmer - NOAA Federal <address@hidden>
> Sent: Tuesday, March 12, 2019 8:30 AM
> To: Pete Pokrandt
> Cc: Person, Arthur A.; address@hidden; address@hidden; _NCEP.List.pmb-dataflow
> Subject: Re: Large lags on CONDUIT feed - started a week or so ago
>
> Hi Everyone
>
> I’ve added the Dataflow team email to the thread. I haven’t heard that any 
> changes were made or that any issues were found. But the team can look today 
> and see if we have any signifiers of overall slowness with anything.
>
> Dataflow, please take a look at the new Citrix or VM troubleshooting tools to 
> see whether there are any abnormal signatures that may explain this.
>
> On Monday, March 11, 2019, Pete Pokrandt <address@hidden> wrote:
>
> Art,
>
> I don't know if NCEP ever figured anything out, but I've been able to keep my 
> latencies reasonable (300-600 s max, mostly during the 12 UTC model suite) by 
> splitting my CONDUIT request 10 ways instead of the 5-way split (or single 
> request) I had been using. Maybe give that a try and see if it helps at all.
>
> Pete
>
>
> --
> Pete Pokrandt - Systems Programmer
> UW-Madison Dept of Atmospheric and Oceanic Sciences
> 608-262-3086  - address@hidden
>
>
> ________________________________
> From: Person, Arthur A. <address@hidden>
> Sent: Monday, March 11, 2019 3:45 PM
> To: Holly Uhlenhake - NOAA Federal; Pete Pokrandt
> Cc: address@hidden; address@hidden
> Subject: Re: [conduit] Large lags on CONDUIT feed - started a week or so ago
>
>
> Holly,
>
>
> Was there any resolution to this on the NCEP end?  I'm still seeing terrible 
> delays (1000-4000 seconds) receiving data from conduit.ncep.noaa.gov.  It 
> would be helpful to know if things are resolved at NCEP's end so I know 
> whether to look further down the line.
>
>
> Thanks...           Art
>
>
> Arthur A. Person
> Assistant Research Professor, System Administrator
> Penn State Department of Meteorology and Atmospheric Science
> email:  address@hidden, phone:  814-863-1563
>
>
>
> ________________________________
> From: address@hidden <address@hidden> on behalf of Holly Uhlenhake - NOAA 
> Federal <address@hidden>
> Sent: Thursday, February 21, 2019 12:05 PM
> To: Pete Pokrandt
> Cc: address@hidden; address@hidden
> Subject: Re: [conduit] Large lags on CONDUIT feed - started a week or so ago
>
> Hi Pete,
>
> We'll take a look and see if we can figure out what might be going on.  We 
> haven't done anything to try and address this yet, but based on your analysis 
> I'm suspicious that it might be tied to a resource constraint on the VM or 
> the blade it resides on.
>
> Thanks,
> Holly Uhlenhake
> Acting Dataflow Team Lead
>
> On Thu, Feb 21, 2019 at 11:32 AM Pete Pokrandt <address@hidden> wrote:
>
> Just FYI, data is flowing, but the large lags continue.
>
> http://rtstats.unidata.ucar.edu/cgi-bin/rtstats/iddstats_nc?CONDUIT+idd.aos.wisc.edu
> http://rtstats.unidata.ucar.edu/cgi-bin/rtstats/iddstats_nc?CONDUIT+conduit.unidata.ucar.edu
>
> Pete
>
>
> --
> Pete Pokrandt - Systems Programmer
> UW-Madison Dept of Atmospheric and Oceanic Sciences
> 608-262-3086  - address@hidden
>
>
> ________________________________
> From: address@hidden <address@hidden> on behalf of Pete Pokrandt 
> <address@hidden>
> Sent: Wednesday, February 20, 2019 12:07 PM
> To: Carissa Klemmer - NOAA Federal
> Cc: address@hidden; address@hidden
> Subject: Re: [conduit] Large lags on CONDUIT feed - started a week or so ago
>
> Data is flowing again - it picked up somewhere in the GEFS. Maybe the CONDUIT 
> server, or the LDM on it, was restarted? Lags are large (3000+ s) but dropping 
> slowly.
>
> Pete
>
>
> --
> Pete Pokrandt - Systems Programmer
> UW-Madison Dept of Atmospheric and Oceanic Sciences
> 608-262-3086  - address@hidden
>
>
> ________________________________
> From: address@hidden <address@hidden> on behalf of Pete Pokrandt 
> <address@hidden>
> Sent: Wednesday, February 20, 2019 11:56 AM
> To: Carissa Klemmer - NOAA Federal
> Cc: address@hidden; address@hidden
> Subject: Re: [conduit] Large lags on CONDUIT feed - started a week or so ago
>
> Just a quick follow-up - we started falling far enough behind (3600+ sec) 
> that we are losing data. We got short files starting at 174h into the GFS 
> run, and only got (incomplete) data through 207h.
>
> We have now not received any data on CONDUIT since 11:27 AM CST (1727 UTC) 
> today (Wed Feb 20)
>
> Pete
>
>
> --
> Pete Pokrandt - Systems Programmer
> UW-Madison Dept of Atmospheric and Oceanic Sciences
> 608-262-3086  - address@hidden
>
>
> ________________________________
> From: address@hidden <address@hidden> on behalf of Pete Pokrandt 
> <address@hidden>
> Sent: Wednesday, February 20, 2019 11:28 AM
> To: Carissa Klemmer - NOAA Federal
> Cc: address@hidden; address@hidden
> Subject: [conduit] Large lags on CONDUIT feed - started a week or so ago
>
> Carissa,
>
> We have been feeding CONDUIT using a 5-way split feed direct from 
> conduit.ncep.noaa.gov, and it had been really good for some time, with lags of 
> 30-60 seconds or less.
>
> However, for the past week or so we've been seeing some very large lags during 
> each 6-hour model suite. Unidata, which also feeds direct from 
> conduit.ncep.noaa.gov, is seeing these as well.
>
> http://rtstats.unidata.ucar.edu/cgi-bin/rtstats/iddstats_nc?CONDUIT+idd.aos.wisc.edu
>
> http://rtstats.unidata.ucar.edu/cgi-bin/rtstats/iddstats_nc?CONDUIT+conduit.unidata.ucar.edu
>
>
> Any idea what's going on, or how we can find out?
>
> Thanks!
> Pete
>
>
> --
> Pete Pokrandt - Systems Programmer
> UW-Madison Dept of Atmospheric and Oceanic Sciences
> 608-262-3086  - address@hidden
>
> _______________________________________________
> NOTE: All exchanges posted to Unidata maintained email lists are
> recorded in the Unidata inquiry tracking system and made publicly
> available through the web.  Users who post to any of the lists we
> maintain are reminded to remove any personal information that they
> do not want to be made public.
>
>
> conduit mailing list
> address@hidden
> For list information or to unsubscribe, visit: 
> http://www.unidata.ucar.edu/mailing_lists/
>
>
>
> --
> Carissa Klemmer
> NCEP Central Operations
> IDSB Branch Chief
> 301-683-3835
>
> _______________________________________________
> Ncep.list.pmb-dataflow mailing list
> address@hidden
> https://www.lstsrv.ncep.noaa.gov/mailman/listinfo/ncep.list.pmb-dataflow



--
Derek Van Pelt
DataFlow Analyst
NOAA/NCEP/NCO

Attachment: conduit_latency.png
Description: PNG image

_______________________________________________
NOTE: All exchanges posted to Unidata maintained email lists are
recorded in the Unidata inquiry tracking system and made publicly
available through the web.  Users who post to any of the lists we
maintain are reminded to remove any personal information that they
do not want to be made public.


conduit mailing list
address@hidden
For list information or to unsubscribe, visit: 
http://www.unidata.ucar.edu/mailing_lists/