
[Datastream #IZJ-689237]: Additional Datafeeds



Hi Jeff,

re:
> I'm curious about how we could go about expanding the datafeeds that we
> receive.  It appears that, when using Gempak/Garp, we have some holes in
> our data.  I've attached a text file that includes the Request section
> from our ldmd.conf file.  Our LDM machine is whistler.creighton.edu.

I have included your ~ldm/etc/ldmd.conf entries for reference in this reply:

###############################################################################
# Request Entries
###############################################################################
#
# LDM5 servers request data from Data Sources
#
#       request <feedset> <pattern> <hostname pattern>
#
#request WMO ".*" uni0.unidata.ucar.edu
#request FSL ".*" weather.admin.niu.edu
request NLDN ".*" natasha.creighton.edu
#  JMS:  I modified the next line, using information I got from:
#        http://www.unidata.ucar.edu/software/mcidas/2005/users_guide/NNEXRAD.CFG.txt
#        06/13/2008
request NNEXRAD "/p(N0R|N1R|N2R|N3R|N0S|N1S|N2S|N0V|N1V|NCR|NVL|NTP|N1P)(...)" f5.aos.wisc.edu
#request WSI ".*" natasha.creighton.edu
request UNIDATA ".*" idd.unl.edu
request NIMAGE "(TIG[EW]0[1-5])" idd.unl.edu
### 9-16-08: Host below is retired. I commented it out - Jeff
#request WSI ".*" iita.rap.ucar.edu
request NMC2 ".*" idd.unl.edu
#request UNIDATA ".*" weather.admin.niu.edu
#request DIFAX ".*" weather.admin.niu.edu

> Any help would be appreciated.

Some questions:

- you note: "when using Gempak/Garp, we have some holes in our data"

  What "holes" are you experiencing?

- you are ingesting the full CONDUIT (aka NMC2) datastream.  Do you
  really want all of the products in CONDUIT?

  The reason I ask is twofold:

  - CONDUIT has the highest volume of all IDD datastreams:

    http://www.unidata.ucar.edu/cgi-bin/rtstats/rtstats_summary_volume?whistler.creighton.edu

    Data Volume Summary for whistler.creighton.edu

    Maximum hourly volume   5198.110 M bytes/hour
    Average hourly volume   2105.389 M bytes/hour

    Average products per hour     107803 prods/hour

    Feed                           Average             Maximum     Products
                         (M byte/hour)            (M byte/hour)   number/hour
    CONDUIT                1644.692    [ 78.118%]     4641.234    46853.848
    HDS                     204.037    [  9.691%]      423.234    17809.000
    NNEXRAD                 113.953    [  5.412%]      169.715    14990.283
    NIMAGE                   92.623    [  4.399%]      167.883       25.326
    IDS|DDPLUS               30.970    [  1.471%]       39.794    28056.326
    UNIWISC                  19.060    [  0.905%]       28.367       59.261
    NLDN                      0.052    [  0.002%]        1.494        8.978

  - the latency plot for your CONDUIT data ingestion shows high latencies
    every six hours (i.e., whenever the burst of output from a 00/06/12/18
    UTC model run arrives):

    http://www.unidata.ucar.edu/cgi-bin/rtstats/iddstats_nc?CONDUIT+whistler.creighton.edu

- what is the size of your LDM queue?

  Please send us the output of 'ldmadmin config'.  (A sketch for checking,
  and if need be enlarging, the queue follows these questions.)

- what are your objectives?  For instance, is one of the holes you are
  experiencing related to your not ingesting NEXRAD Level II data?
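
For reference, here is one way to check on the queue and, if it turns out to
be too small, to rebuild it larger.  The utilities are standard parts of the
LDM, but where the queue size is configured depends on your LDM version, so
treat this as a sketch rather than a recipe:

  <as 'ldm'>
  ldmadmin config    # reports the configured queue size and location
  pqmon              # live queue statistics; if the age of the oldest
                     # product in the queue stays small, the queue is
                     # likely too small for your ingest volume

  # to rebuild the queue with a larger size:
  ldmadmin stop
  # ... increase the queue-size setting for your LDM version ...
  ldmadmin delqueue
  ldmadmin mkqueue
  ldmadmin start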

Comments:

- if you really do want all of the CONDUIT data, the high latencies you are
  experiencing in your CONDUIT ingestion might be mitigated by splitting your
  feed request into fifths.  For instance:

  try changing:

  request NMC2 ".*" idd.unl.edu

  to:

  request CONDUIT "([09]$)" idd.unl.edu
  request CONDUIT "([18]$)" idd.unl.edu
  request CONDUIT "([27]$)" idd.unl.edu
  request CONDUIT "([36]$)" idd.unl.edu
  request CONDUIT "([45]$)" idd.unl.edu

  (The operative part is the five-way split of the request, not the change
  of 'NMC2' to 'CONDUIT'; the two names are aliases for the same feedtype.
  Each pattern matches on the final digit of the sequence number at the end
  of the product identifier, so each request receives roughly one fifth of
  the products over its own connection.)

  Please remember that you will need to stop and restart your LDM after
  making changes to ~ldm/etc/ldmd.conf:

  <as 'ldm'>
  ldmadmin restart
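
- if, on the other hand, you decide that you only need a portion of CONDUIT
  (e.g., just the GFS output), you can narrow your request with a pattern
  instead of splitting it.  CONDUIT product identifiers include the
  originating directory path, so the exact pattern depends on which products
  you want; treat the following as a sketch, not a drop-in line:

  request CONDUIT "gfs" idd.unl.edu

  You can see the identifiers of the products you are currently receiving by
  running 'notifyme' against your upstream host:

  notifyme -vl- -f CONDUIT -h idd.unl.edu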

- one datastream that you are not currently ingesting is NGRID.  NGRID contains
  the high resolution model output broadcast in NOAAPort.  It could be that
  some of the "holes" you are seeing in your data are due to products you
  are not getting because you are not requesting NGRID.

  To be clear, I do _NOT_ think that missing NGRID is the cause of the
  "holes" you are reporting; it would only explain products that you have
  never requested in the first place.
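
  If you decide to try NGRID, a request line along the following lines
  should do it, assuming your upstream relays the NGRID feed (idd.unl.edu
  should, but it is worth confirming):

  request NGRID ".*" idd.unl.edu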

- you could greatly simplify your NNEXRAD request line (this will not change
  latencies since they are low for NNEXRAD as it is):

  change:

  request NNEXRAD "/p(N0R|N1R|N2R|N3R|N0S|N1S|N2S|N0V|N1V|NCR|NVL|NTP|N1P)(...)" f5.aos.wisc.edu

  to:

  request NNEXRAD ".*" f5.aos.wisc.edu

  Your existing request is for 99.9% of NNEXRAD, so you might as well ingest
  all of it :-)

My best guess as to why you are experiencing "holes" in your data is one or both
of the following:

- your LDM queue is too small to handle the data you are ingesting from the IDD
  (this is why I asked about your LDM queue size above)

- your machine is bogged down processing the data you are ingesting

To see which of these possibilities applies, we will need to:

- see the processing actions ('exec' lines) from your ~ldm/etc/ldmd.conf file

- know the size of your LDM queue

- know if you are seeing unusual messages in your ~ldm/logs/ldmd.log file
  (messages that indicate that your machine is way behind in its processing)

- get an idea of the processing power of your machine (e.g., the type of
  processor(s), including whether they are 32-bit or 64-bit; how much memory
  you have; how much disk space you have and how much of it is free; etc.)
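
Most of the above can be gathered in one pass with standard commands (this
assumes a Linux host; adjust for your OS):

  <as 'ldm'>
  ldmadmin config                          # LDM version and queue settings
  grep exec ~ldm/etc/ldmd.conf             # your processing actions
  tail -100 ~ldm/logs/ldmd.log             # recent log messages
  uname -a                                 # OS and 32-/64-bit architecture
  grep 'model name' /proc/cpuinfo | uniq   # processor type
  free -m                                  # memory
  df -h                                    # disk space and usage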

> Thank you.

No worries.

Cheers,

Tom
--
****************************************************************************
Unidata User Support                                    UCAR Unidata Program
(303) 497-8642                                                 P.O. Box 3000
address@hidden                                   Boulder, CO 80307
----------------------------------------------------------------------------
Unidata HomePage                       http://www.unidata.ucar.edu
****************************************************************************


Ticket Details
===================
Ticket ID: IZJ-689237
Department: Support Datastream
Priority: Normal
Status: Closed