
20021112: spotty data in the MDR products (cont.)

>From: Jim Koermer <address@hidden>
>Organization:  Plymouth State
>Keywords:  200211121541.gACFftL21487 IDD Unidata-Wisconsin MDR


I am including a CC to David in this reply since I discuss some
LDM-related issues that he should find interesting.

re: raw MDR data vs UNIWISC MDR images
>I'm not sure how the MDR images in the Unidata-Wisconsin datastream are
>going. I was referring to the raw MDR data over the WMO IDD feed.


>data are just about always 100% complete on NOAAPORT, but once or twice
>a day don't show up at all on IDD or we only receive about half as much
>data as there should be available.

The raw MDR data are text bulletins aren't they?  Do you know right off
what their product ID looks like?

re: These are small products, so you would expect better reliability.

The current version of the LDM uses a blocking RPC call to deliver
data.  This means that an acknowledgement of receipt of a product (or
of a piece of a product, if the product is larger than 16384 bytes)
must be received before the next product in the stream can be sent.
As the latency between the upstream feed site and the client grows,
the number of products that can be delivered in a fixed period of time
decreases.  This is exactly what prevents the current LDM from
reliably sending lots of small products to LDMs that are far away (or
otherwise have high latencies).  The real-time statistics reporting in
LDM 5.2+ gives us the information needed to tell whether your data
ingestion is suffering from the high latencies that can cause loss of
data.
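To make the arithmetic concrete, here is a back-of-the-envelope sketch
(plain Python, not LDM code; the round-trip times are illustrative
assumptions) of the per-connection throughput ceiling that a blocking
RPC imposes, and how splitting a feed raises it:

```python
def max_products_per_second(rtt_seconds, feed_splits=1):
    """Upper bound on small-product delivery rate when each product
    (under 16384 bytes) must be acknowledged before the next one is
    sent.  Each split request runs its own blocking-RPC stream, so
    the ceilings add up."""
    return feed_splits / rtt_seconds

# At a 500 ms round trip, one connection tops out at 2 products/second;
# splitting the request three ways triples that ceiling.
print(max_products_per_second(0.5))      # 2.0
print(max_products_per_second(0.5, 3))   # 6.0
```

Large products are much less sensitive to this ceiling, since each RPC
carries up to 16384 bytes of payload -- which is consistent with big
products arriving reliably while streams of small ones fall behind.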

If your receipt of the textual data in the IDS|DDPLUS stream is
exhibiting high latency, the situation can be worked around by
splitting the feed request into multiple parts that are roughly equal
by number of products.  The splitting cuts the number of products
carried by any one rpc.ldmd connection by the number of splits made.
I used this technique to set up a reliable IDS|DDPLUS feed to a
machine at the University of Para in Belem, Brazil.  Without the
split, the latencies would grow to exceed the 1-hour queue maintained
by the upstream host, so products would be lost.  With the split, all
products were delivered with little or no latency.

I recount this story since it may offer a way to reconfigure your LDM
requests that will provide you with more reliable reception of the
IDS|DDPLUS products.  I also wanted to let you know that we are working
on a new version of the LDM that should make such feed splitting
unnecessary.  Since that version of the LDM will not be ready for prime
time for a while, I think that your best course of action is to try out
the feed splitting technique.

As soon as you can get LDM 5.2.2 running on your machine, we can start
getting your real time stats and then make the determination if
splitting your IDS|DDPLUS will solve your data reception problems.  I
have to say that I believe it _will_, since being able to reliably
receive big products but not small ones is a direct indicator that the
feed carrying the small products needs to be split.

In order to report real-time statistics, you will need to add the
following line to your ~ldm/ldmd.conf file after you upgrade to LDM 5.2.2:

exec    "rtstats -h rtstats.unidata.ucar.edu"

You must also make sure that the '@HOSTNAME@' field in ~ldm/bin/ldmadmin:

$hostname = "@HOSTNAME@";

is modified to be the fully qualified hostname of the machine running
your LDM.  For instance, if you are running the LDM on pscwx, the
ldmadmin line would look like:

$hostname = "pscwx.plymouth.edu";

If this is not done, rtstats won't be able to tell us who is reporting
the statistics.

If we get to the point of splitting your IDS|DDPLUS feed, the ldmd.conf
lines will end up looking something like:

request DDPLUS|IDS      "^[^S]" atm.geo.nsf.gov
request DDPLUS|IDS      "^S[AR]" atm1.geo.nsf.gov
request DDPLUS|IDS      "^S[^AR]" atm2.geo.nsf.gov
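As a quick sanity check that these three patterns partition the feed
(every product ID matches exactly one request, so nothing is requested
twice and nothing is dropped), here is a small sketch; the sample IDs
are illustrative WMO-style headers I made up for the example, not taken
from your feed:

```python
import re

# The three request patterns from the ldmd.conf lines above.
patterns = ["^[^S]", "^S[AR]", "^S[^AR]"]

# Illustrative WMO-style product IDs (assumed for this example).
sample_ids = ["SAUS70 KWBC", "SRUS55 KRSA", "SXUS21 KWNB", "FPUS51 KOKX"]

for pid in sample_ids:
    hits = [p for p in patterns if re.match(p, pid)]
    # Exactly one of the three requests carries each product.
    assert len(hits) == 1, pid
```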

These requests effectively split the number of IDS|DDPLUS products into
thirds.  Each request is sent to an alias for the machine that is
acting as the upstream feed site.  The aliases are created by making
entries in the /etc/hosts file and ensuring that 'files' are searched
when trying to resolve a hostname.  On a number of systems, this is
done in the /etc/nsswitch.conf file. The entry for hosts looks
more-or-less like:

hosts:      files nisplus nis dns

With this entry, the /etc/hosts file will be checked for hostname/IP
address information first.  The ordering and/or existence of the other
services (nisplus, nis, dns) depends on what things a site uses so
it may not match your setup.

An entry in /etc/hosts that defines multiple aliases for a machine
named atm.geo.nsf.gov would look something like (the first field is
the machine's IP address, which I have left out here):

<IP address>	atm.geo.nsf.gov	atm1.geo.nsf.gov	atm2.geo.nsf.gov	atm3.geo.nsf.gov	atm4.geo.nsf.gov

(tabs are important!)

This line defines 5 names by which the machine at that address can
be referred to.

We can get into the above in more detail as soon as we determine if
your reception is suffering from latency-induced data loss.

>What is perplexing is that I have been getting fairly reliable NEXRAD
>composite data and McIDAS area file data. This is why I raised the small
>file versus the large file point. Since our previous IDD problems always
>seemed to be on the large filesize products.

The overly verbose section I included above can shed some light on
this situation.

re: source distribution of LDM 5.2.2
>We would probably need the source anyway with our FreeBSD systems as I
>think has been the case in the past.


>We'll try to get it up and running before the end of next week.


>I don't think the problem is with the LDM,
>but with the network congestion.

Network congestion can add to your latencies.  If it does, the current
way the LDM works can cause product loss.

Again, we are deep in the middle of developing the next version of the
LDM, which should eliminate this situation as much as possible.  The objective is
for the LDM to be able to send as much data as your network connection
will allow.  Of course, there is no magical solution to a network
connection that is full.

>We are still waiting on your alma mater
>to get our dedicated Abilene link up and running, but progress has been


By the way, I want to thank you again for the web help you provided to
the University of South Florida.  I helped them get your examples working
and add some new twists like displaying a product-specific data BAR
on top of the images (e.g., brightness for VIS images; standard temperature
bar for non-WV IR images; and a WV-specific BAR for WV images).