
Re: 20001227: ldm/idd incredibly crappy (fwd)

Hi Bill, 

Sorry to hear of your persistent issues.

I checked the ..../Moscow URL for your network status. What size line do
you share? Or do you have a dedicated line?

http://unidata.ucar.edu/projects/idd/status/idd/fosTopo.html

shows

navier.meteo.psu.edu            0.6 Minutes
cyclone.msrc.sunysb.edu         0.4 Minutes
rossby.wcupa.edu                0.6 Minutes
windfall.evsc.virginia.edu      unknown
nora-f.gsfc.nasa.gov            unknown
nora.gsfc.nasa.gov              0.6 Minutes
orinoco.princeton.edu           59.5 Minutes
catena.essc.psu.edu             unknown
cirrus.smsu.edu                 0.9 Minutes

We do need to scour navier from that listing and change it to navierldm,
but that is not a real issue.

Your FOS products currently seem to be arriving in a timely fashion.

Your ldmping times are definitely in the right range [0.03-0.1 seconds].

By taking the reciprocal of this number you can estimate the number of
RPCs that can be made per second.
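As a quick illustration, the reciprocal calculation can be sketched in a
few lines of Python. The sample round-trip times are taken from the
ldmping output quoted below; the per-second figures are rough estimates,
not a guarantee of sustained RPC throughput.

```python
# Estimate RPC calls per second as the reciprocal of the ldmping
# round-trip time.  Sample times (seconds) are from the ldmping
# output quoted later in this message.
elapsed_times = [0.307533, 0.043275, 0.153390, 0.042435, 0.041683]

for t in elapsed_times:
    rpcs_per_second = 1.0 / t  # reciprocal of the round-trip time
    print(f"{t:.6f} s round trip -> ~{rpcs_per_second:.1f} RPCs/sec")

# A mean over several pings smooths out one-off spikes like the first
# 0.3-second response.
mean = sum(elapsed_times) / len(elapsed_times)
print(f"mean round trip: {mean:.3f} s -> ~{1.0 / mean:.1f} RPCs/sec")
```

Note how the single 0.3-second outlier drags the mean well above the
typical 0.04-second response; that is the kind of spikiness that points
at intermittent congestion rather than a uniformly slow link.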

I do notice occasional usage spikes in the middle of the day that may be
clogging your pipe. To make more sense of the graphs from ..../Moscow, I
would need to know what arrangements you have made with your ISP
regarding line size and whether that line is dedicated.

Given the connection to navierldm, I am still leaning toward bandwidth issues.

-Jeff
____________________________                  _____________________
Jeff Weber                                    address@hidden
Unidata Support                               PH:303-497-8676 
NWS-COMET Case Study Library                  FX:303-497-8690
University Corp for Atmospheric Research      3300 Mitchell Ln
http://www.unidata.ucar.edu/staff/jweber      Boulder, CO 80307-3000
________________________________________      ______________________

On Wed, 27 Dec 2000, Unidata Support wrote:

> 
> ------- Forwarded Message
> 
> >To: address@hidden
> >cc: address@hidden
> >From: address@hidden
> >Subject: ldm/idd sparse datae
> >Organization: UCAR/Unidata
> >Keywords: 200012271944.eBRJiwo26102
> 
> 
> Hi:
> 
> I'll start with apologies for the bother and putting this off.  Sorry,
> BUT our data feed continues to be incredibly crappy.  For example:
> 
> We have not received sufficient sounding data to make a 12Z map since
> November.  Typically, during daylight hours (including weekends, it
> seems), we get perhaps the 12Z, 16Z and 18Z surface observations.  We
> can go a week at a time without some of the longer-term gridded
> products (72, 120, 144 hours).  I have delayed contacting because I
> thought, perhaps, that end-of-semester utilization on campus here might
> have saturated our internet pipe...Our utilization can be seen at
> http://networking.smsu.edu/mrtg/html/Moscow.2.0.html...for a quick and
> dirty of the day's surface observations, the Mcidas meteorogram at
> http://cirrus.smsu.edu/home/mcidas/data/mc/METG.gif  provides a view.
> 
> From my end, traceroutes look good, our utilization looks fine.  I
> noted the discussion a month ago about latencies...although there were
> not hard and fast numbers mentioned in that discussion about what would
> be a "good" or "bad" response time from an ldmping, ours don't look
> awful:
> 
> cirrus:/home/ldm> ldmping navierldm.meteo.psu.edu
> Dec 27 19:37:20      State    Elapsed Port   Remote_Host           rpc_stat
> Dec 27 19:37:20 RESPONDING   0.307533  388   navierldm.meteo.psu.edu  
> Dec 27 19:37:45 RESPONDING   0.043275  388   navierldm.meteo.psu.edu  
> Dec 27 19:38:10 RESPONDING   0.153390  388   navierldm.meteo.psu.edu  
> Dec 27 19:38:35 RESPONDING   0.042435  388   navierldm.meteo.psu.edu  
> Dec 27 19:39:00 RESPONDING   0.041683  388   navierldm.meteo.psu.edu  
> 
> I did go looking at Unidata's latency page and noted first that for
> our feed site the listing was navier.meteo.psu.edu rather than
> navierldm...this changed some time back (to navierldm), but we
> (cirrus.smsu.edu) were not even listed in the feeds (of course we
> hadn't much data that day).
> 
> Now that we have semester break, I'd really like to get this data
> problem ironed out, but I'm afraid I'll have to ask you all to point me
> in the right direction.  I need a suggestion on where to start.
> 
> Thanks.
> 
> Bill Corcoran
> 
> 
> ------- End of Forwarded Message
> 
>