
[Datastream #IZJ-689237]: Additional Datafeeds

Hi Jeff,

> Since I'm still seeing the same "holes" in my data - just for giggles,
> is there another upstream host that I could try for HDS|IDS|DDPLUS|
> UNIWISC feed, rather than idd.unl.edu?

Yes.  You can feed from the toplevel IDD relay node that we maintain:

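For reference, switching (or adding) an upstream is a matter of adjusting the
REQUEST line(s) in ~ldm/etc/ldmd.conf and restarting the LDM.  A sketch, with
the relay hostname left as a placeholder since it is not spelled out above:

```
# ldmd.conf -- request the same feedtypes from an alternate upstream.
# <toplevel-relay-host> is a placeholder for the relay node referred to above.
REQUEST HDS|IDS|DDPLUS|UNIWISC ".*" <toplevel-relay-host>
```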

> I know that you indicated that
> if there was a problem with the data, others would be complaining of the
> same problems, but I'd like a backup feed anyway.   It would also remove
> that possibility for certain.

No worries.  This is a good test to run.

> Also, I re-setup my development ldm/gempak box this morning.  It's
> basically an old 32-bit desktop.  It is pulling its feed from Whistler
> and appears to be working (pulling in and processing data).   I wasn't
> done with it until almost noon, our time, so it hasn't been going for
> long yet.  Even in that amount of time, I think I'm seeing the same
> "holes".    I won't know 'til I let it run for awhile.


> I know the
> problem could be that maybe Whistler itself is hosing up the data, then
> passing it on to my dev box, but I wouldn't think so.

The only thing I could think of is that not all of the data are making it
to whistler.
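One way to check that suspicion is to capture product listings from both the
upstream and whistler (e.g. with the LDM's notifyme utility) and diff them.  A
minimal sketch, assuming you have saved one listing per host; the line format
here is a simplified placeholder, not notifyme's exact output:

```python
# Compare product listings captured from two LDM hosts to find products
# that were seen upstream but never arrived downstream ("holes").

def product_ids(lines):
    """Extract a product identifier (last whitespace-separated field) per line."""
    return {line.split()[-1] for line in lines if line.strip()}

def missing_downstream(upstream_lines, downstream_lines):
    """Products seen upstream but absent downstream -- candidate holes."""
    return sorted(product_ids(upstream_lines) - product_ids(downstream_lines))

if __name__ == "__main__":
    # Placeholder listings for illustration only.
    upstream = ["12Z HDS SAUS44_KOKC", "12Z HDS SAUS44_KTUL"]
    downstream = ["12Z HDS SAUS44_KOKC"]
    print(missing_downstream(upstream, downstream))  # ['SAUS44_KTUL']
```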

> Would Whistler's
> ldm do anything to the data prior to sending it on to a downstream
> server?

There is nothing in the LDM that would "hose up" products that it is relaying.
The data movement portion of the LDM is data agnostic -- it doesn't know or
care what is in a product that it is sending/receiving. 

I see that you re-activated CONDUIT ingest on whistler.  The latencies being
reported for CONDUIT are large enough that I would worry that you are not
receiving all of the data in that datastream:


The latency for the HRS datastream is OK for the most part (except for a bad
period just before 12Z this morning):


The volume of data that you are now ingesting on whistler is substantial:


Data Volume Summary for whistler.creighton.edu

Maximum hourly volume   5126.965 M bytes/hour
Average hourly volume   2141.335 M bytes/hour

Average products per hour     115834 prods/hour

Feed                    Average                  Maximum     Products
                  (M byte/hour)            (M byte/hour)  number/hour
CONDUIT                1653.302    [ 77.209%]     4604.865    47516.723
HDS                     213.580    [  9.974%]      424.758    18604.894
NEXRAD3                 121.615    [  5.679%]      156.800    20005.957
NIMAGE                   97.898    [  4.572%]      161.142       29.681
IDS|DDPLUS               31.478    [  1.470%]       50.763    29646.574
UNIWISC                  23.460    [  1.096%]       36.764       21.170
LIGHTNING                 0.003    [  0.000%]        0.019        9.362
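As a sanity check, each feed's bracketed percentage is its average hourly
volume divided by the total average volume.  A quick sketch using the figures
from the table above:

```python
# Recompute per-feed shares of average hourly volume from the table above.
averages = {
    "CONDUIT": 1653.302, "HDS": 213.580, "NEXRAD3": 121.615,
    "NIMAGE": 97.898, "IDS|DDPLUS": 31.478, "UNIWISC": 23.460,
    "LIGHTNING": 0.003,
}
total = sum(averages.values())  # ~2141.336, matching the summary up to rounding
shares = {feed: 100.0 * mb / total for feed, mb in averages.items()}
print(f"{shares['CONDUIT']:.3f}%")  # prints 77.209%
```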

Hopefully, turning the CONDUIT ingest back on has not resulted in excessively
long amounts of time to process products out of your LDM queue (check your
~ldm/logs/ldmd.log file for messages).
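A quick way to scan that log is to count warnings/errors with grep.  A sketch
using a throwaway file with illustrative placeholder lines (not LDM's actual
log format); on a real system, point the grep at ~ldm/logs/ldmd.log instead:

```shell
# Count WARN/ERROR lines that could indicate slow queue processing.
log=$(mktemp)
cat > "$log" <<'EOF'
Jan 01 12:00:01 whistler pqact[1234] INFO: processed product
Jan 01 12:00:02 whistler pqact[1234] WARN: processing is falling behind
EOF
grep -E -c 'WARN|ERROR' "$log"   # prints 1
rm -f "$log"
```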

> Anyway, I'll let you know what my dev data looks like tomorrow.

Very good.

> Thanks.

I apologize for not being of more help on this issue.  One of the things that
we need to do is set up a machine here at the UPC that ingests the same set of
data you are ingesting (sans CONDUIT) and see if our setup shows the same
"holes" as yours in Garp.  Our GEMPAK person is out for a week, so this test
will need to wait.


Unidata User Support                                    UCAR Unidata Program
(303) 497-8642                                                 P.O. Box 3000
address@hidden                                   Boulder, CO 80307
Unidata HomePage                       http://www.unidata.ucar.edu

Ticket Details
Ticket ID: IZJ-689237
Department: Support IDD
Priority: Normal
Status: Closed