[LDM #JOF-421815]: Not receiving NEXRAD3 data...



Hi David,

re:
> First of all, I know that Natasha is on campus, but I am not sure what it is 
> pulling or
> where it is located!  It is a machine for an emeritus faculty member.

As to what it is REQUESTing, this can be seen in the real-time stats pages:

Unidata HomePage
http://www.unidata.ucar.edu

  Projects -> Internet Data Distribution
  http://www.unidata.ucar.edu/projects/index.html#idd

    IDD Current Operational Status
    http://www.unidata.ucar.edu/software/idd/rtstats/

      Statistics by Host
      http://www.unidata.ucar.edu/cgi-bin/rtstats/siteindex

        natasha.creighton.edu [6.10.1]
        http://www.unidata.ucar.edu/cgi-bin/rtstats/siteindex?natasha.creighton.edu
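
If you have a login on natasha itself, the same information is in the
LDM configuration file (this assumes the stock installation layout;
adjust the path if the LDM was installed elsewhere):

  grep -E '^[[:space:]]*REQUEST' ~ldm/etc/ldmd.conf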

As to where it is located, I would suggest Frostbite Falls ;-)

re:
> Second, I know
> not of this Boris of which you speak…although I have seen moose try to pull 
> rabbit from
> hat.  It never work.

Yes, but Bullwinkle _was_ able to pull Rocky out of a hat :O  Those were the 
days!

re:
> I used the watch command to create an output that was dumped to file so I 
> could see what
> was happening.  It appears that the latency issues started at 10:12AM.  By 
> 11:00AM, the
> feed was 4 minutes behind.  By 12:00, it was 13 minutes behind.  By 1:00PM, 
> 27 minutes
> behind. By 2:00PM, 40 minutes behind.  And at 2:55PM the feed was an hour 
> behind, and
> stopped.

Once the latencies exceed the queue residency time on the upstream host,
the data will be lost: products age out of the upstream queue before the
downstream asks for them.  This is what is happening for NEXRAD3 on
whistler.
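
(A quick way to see the residency time is pqmon(1), run as the LDM user
on the upstream host:

  pqmon

or, if the queue is in a non-default location:

  pqmon -q /path/to/ldm.pq

If I remember the output correctly, the "age" column is the age, in
seconds, of the oldest product still in the queue; a downstream's
latency has to stay under that number.)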

re:
> I will add these lines right now, and restart the ldm (actually I just did 
> this…so I
> will keep an eye out).

Excellent!  This is not necessarily the fix; rather, it should indicate
whether or not there is per-connection "packet shaping" going on
somewhere.  Since there are no artificially-imposed limits on our end,
any throttling would be at Creighton.
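
For reference, the kind of test I have in mind is splitting the single
NEXRAD3 REQUEST in ldmd.conf into several, each of which gets its own
connection to the upstream.  A two-way split might look like the
following sketch (the patterns are illustrative only; here they
partition products by the first letter of the radar site ID in the
product identifier, and "your.upstream.host" is a placeholder):

  REQUEST NEXRAD3 "/p...[A-M]" your.upstream.host
  REQUEST NEXRAD3 "/p...[N-Z]" your.upstream.host

If each half then runs with low latency where the single, full-feed
REQUEST lagged, that is strong evidence of per-connection throttling.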

re:
> Apparently we had this type of issue before, as the CONDUIT was
> split into 5 parts due to latency issues (from the comments in the code).

It is almost the rule that sites distant from their upstream need to
split their CONDUIT feed REQUEST into fifths or even tenths.  The sites
that do not have to do this are typically directly on Internet2 and
have Gbps+ network access.
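
The standard five-way split in ldmd.conf keys off the last digit of the
sequence number at the end of each CONDUIT product identifier
("your.upstream.host" is again a placeholder):

  REQUEST CONDUIT "[05]$" your.upstream.host
  REQUEST CONDUIT "[16]$" your.upstream.host
  REQUEST CONDUIT "[27]$" your.upstream.host
  REQUEST CONDUIT "[38]$" your.upstream.host
  REQUEST CONDUIT "[49]$" your.upstream.host

Each pattern matches one fifth of the sequence numbers, so the volume is
spread evenly over five connections.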

re:
> If this
> doesn't help, should I reconsider another feed (apparently we used 
> idd.unl.edu before).

I would be greatly surprised if switching to a different upstream
host would solve your problem, at least in the long term.  I say
this because I believe that a comparison of the latencies seen for
feeds of greatly different volumes on whistler indicates that there
is some sort of "packet shaping" being done on a per-connection
basis.  I will be happy to be proven wrong on this, of course.
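
(The per-feed latency plots linked from the "Statistics by Host" page
above make this comparison easy; substitute the feed and host of
interest into a URL of the form:

  http://www.unidata.ucar.edu/cgi-bin/rtstats/iddstats_nc?NEXRAD3+natasha.creighton.edu

If a low-volume feed such as IDS|DDPLUS stays near zero while NEXRAD3
climbs steadily through the day, per-connection throttling is the
likely culprit.)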

re:
> Thanks for looking into this…

No worries.

Cheers,

Tom
--
****************************************************************************
Unidata User Support                                    UCAR Unidata Program
(303) 497-8642                                                 P.O. Box 3000
address@hidden                                   Boulder, CO 80307
----------------------------------------------------------------------------
Unidata HomePage                       http://www.unidata.ucar.edu
****************************************************************************


Ticket Details
===================
Ticket ID: JOF-421815
Department: Support IDD
Priority: Normal
Status: Closed