
Re: [conduit] Recent CONDUIT latencies?



Daryl,

I am ingesting and relaying the entire CONDUIT data feed, using 5 split feed REQUESTs:

REQUEST         CONDUIT "[09]$" conduit.ncep.noaa.gov
REQUEST         CONDUIT "[18]$" conduit.ncep.noaa.gov
REQUEST         CONDUIT "[27]$" conduit.ncep.noaa.gov
REQUEST         CONDUIT "[36]$" conduit.ncep.noaa.gov
REQUEST         CONDUIT "[45]$" conduit.ncep.noaa.gov
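
The five extended regular expressions shard on the trailing digit of the CONDUIT product sequence number, so together they cover the feed with no overlap and no gaps. A quick Python check of that property (the patterns are copied straight from the requests above):

    import re

    patterns = ["[09]$", "[18]$", "[27]$", "[36]$", "[45]$"]
    for digit in "0123456789":
        # each possible trailing digit matches exactly one of the five patterns
        assert sum(bool(re.search(p, digit)) for p in patterns) == 1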

I don't know specifically what the bandwidth per connection is, but looking at our CONDUIT volumes, we see peaks of ~55000 Mbytes/hr for the 00 UTC runs.

My math:

  55000 Mbytes/hr
  x 8 bits/byte    = 440,000 Mbits/hr
  x 1 hr/3600 sec  = ~122 Mbits/sec avg for the entire CONDUIT feed

so splitting that up 5 ways ends up being ~25 Mbits/sec per connection.
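
Or, as a quick Python sanity check of the same arithmetic:

    peak_mbytes_hr = 55000                      # observed 00 UTC peak, Mbytes/hr
    total_mbps = peak_mbytes_hr * 8 / 3600.0    # ~122.2 Mbits/sec, whole feed
    per_conn_mbps = total_mbps / 5              # ~24.4 Mbits/sec per connection
    print(round(total_mbps, 1), round(per_conn_mbps, 1))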

Does this help? 

I have been watching the outbound bandwidth from my idd1 and idd2 backends (the ones that feed my downstreams) and I am seeing it peak at almost 940 Mbit/sec on my 1 Gbps connections - right at the practical TCP payload ceiling for GigE once framing overhead is counted - so I'm definitely saturating my outbounds during peak data flow. Of course, that is for all of the data streams, including NGRID, NIMAGE, SATELLITE, etc.

I don't know what kind of lag this is introducing to my various downstreams - many of the downstreams showing up in my logs are not reporting stats to Unidata. I just checked UW-Milwaukee, who feeds CONDUIT and other data exclusively from us, and their CONDUIT latencies generally peak around 600 seconds - higher than mine, but they shouldn't be losing any data. Their latencies from us for other feeds (NEXRAD3, NIMAGE, etc.) are lower than that.

I'm not sure what I can really do to alleviate this other than limit the number of feeds or get more backends, and I don't have the funds for option #2 right now. It would be nice if the IDD operated more like a tree, like it did in the olden days - many of the higher-level relays fed a couple of downstreams, who fed a few more, down to the leaves - rather than everyone feeding from Unidata and a couple of other top-level nodes like it has become.
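
To put rough numbers on that (purely hypothetical fanout and site counts, just to illustrate; the 122 Mbit/s figure is from the arithmetic above):

    stream_mbps = 122    # full CONDUIT feed, from the estimate above
    sites = 40           # hypothetical number of downstream sites
    fanout = 3           # hypothetical tree fanout
    # star topology: the top-level relay sends every site its own copy
    print(stream_mbps * sites)     # ~4880 Mbit/s out of one node
    # tree topology: no node ever sends more than `fanout` copies
    print(stream_mbps * fanout)    # ~366 Mbit/s out of any node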

At any rate...

Oh, while I've got you on the horn, I wanted to ask if you would be interested in GEMPAK files of surface/upper-air obs from the time of the 2020 derecho? Some of our students were doing that as a case study this spring, and of course, when I went to mtarchive, it was missing a bunch of data during the time you were without power. I was able to get a raw DDPLUS feed from an archive at NCAR, I think, and ran it through dcmetr and dcuair to create some surface and upper-air GEMPAK files. I can make those available to you if you'd like to fill in the archive. Let me know!
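
For reference, the decode step was along these lines - a sketch only, since the capture file name here is hypothetical; it assumes the stock GEMPAK decoders, which read bulletins on stdin and write GEMPAK files named by the template argument:

    import subprocess

    # decode a raw DDPLUS capture into GEMPAK surface and upper-air files;
    # "ddplus_raw.txt" is a hypothetical name for the NCAR archive pull
    for decoder, template in [("dcmetr", "YYYYMMDD_sao.gem"),
                              ("dcuair", "YYYYMMDD_upa.gem")]:
        with open("ddplus_raw.txt", "rb") as raw:
            subprocess.run([decoder, template], stdin=raw, check=True)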

Pete




-----
Pete Pokrandt - Systems Programmer
UW-Madison Dept of Atmospheric and Oceanic Sciences
608-262-3086  - address@hidden



From: Herzmann, Daryl E [AGRON] <address@hidden>
Sent: Monday, May 17, 2021 10:51 AM
To: address@hidden <address@hidden>; Anne Myckow - NOAA Federal <address@hidden>; Pete Pokrandt <address@hidden>
Subject: Re: [conduit] Recent CONDUIT latencies?
 
Howdy Pete,

How heavily sharded are your feeds - that is, how many LDM feed REQUESTs are you making to get all the CONDUIT data that you need? If you totaled up all your feed request volume and divided it by the number of LDM requests, what rough number do you get for bandwidth per LDM TCP connection? :)

daryl

--
/**
 * daryl herzmann
 * Systems Analyst III -- Iowa Environmental Mesonet
 * https://mesonet.agron.iastate.edu
 */

________________________________________
From: conduit <address@hidden> on behalf of Pete Pokrandt via conduit <address@hidden>
Sent: Saturday, May 15, 2021 10:52 AM
To: address@hidden; Anne Myckow - NOAA Federal
Subject: Re: [conduit] Recent CONDUIT latencies?

Anne,

Over the past few days, our latencies have been in line with what we have come to expect - 30-60 seconds as the bursts of model forecast hours come through.

Here's the graph of our latencies over the past ~3 days. The red blip around 15/06 looks like maybe a CONDUIT ingest server's LDM was restarted? Other than that, they are pretty consistently 30-60 seconds. This graph can be found at

https://rtstats.unidata.ucar.edu/cgi-bin/rtstats/iddstats_nc?CONDUIT+idd-agg.aos.wisc.edu


Pete

[inline image: CONDUIT latency graph for idd-agg.aos.wisc.edu, linked above]




-----
Pete Pokrandt - Systems Programmer
UW-Madison Dept of Atmospheric and Oceanic Sciences
608-262-3086  - address@hidden

________________________________
From: conduit <address@hidden> on behalf of Anne Myckow - NOAA Federal via conduit <address@hidden>
Sent: Tuesday, May 11, 2021 2:57 PM
To: address@hidden <address@hidden>
Subject: [conduit] Recent CONDUIT latencies?

CONDUIT users,

Can you let us know what your latencies have been recently? We have had customers complaining about transfer rates on some of our systems and I want to see if it might be affecting you as well.

Thanks,
Anne
--
Anne Myckow
Dataflow Team Lead
NWS/NCEP/NCO