
20020308: setting up request lines in ldmd.conf (cont.)



>From: "Kevin Polston" <address@hidden>
>Organization: NOAA/NWS
>Keywords: 200203050151.g251p7K03671 IDD

Kevin,

>Before I did a reboot (this after running ldmadmin stop) I did a  ps -u
>ldm  to see what processes were still running. The only things that it
>said were a "csh" and a "ps"....which was my terminal window I had open
>under user ldm. So I think I was ok there. There were no rpc.ldmd
>processes running when I rebooted.

OK, that is the correct procedure.  I can't explain why the NIMAGE
data stopped getting filed without more information.  For instance,
the ldmd.log file that was being written for that period might shed
some light on what was going on.  There may well have been some system
resource that was unavailable, and the reboot cleared that up.
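
In case it happens again, a quick scan of the log that covers the
period is the first thing to try.  For example (the path assumes the
usual ~ldm/logs location; adjust to your installation):

# look for errors or warnings in the LDM logs around the time the
# NIMAGE filing stopped
egrep -i 'error|warn' ~ldm/logs/ldmd.log*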

>I noticed this morning that all data coming in was timely and looking
>good.  However, shortly after 9am I noticed the radar data was starting
>to lag behind again.....by roughly 30 minutes.  The satellite data still
>seemed timely as well as the surface obs and profilers. I don't
>understand why the radar data would lag behind after it  was doing fine
>when I first checked this morning.

This was probably due to network congestion.  The number of NEXRAD
Level III products you are requesting:

request NEXRAD "/pN0R|NVL|NCR|N0V|N0S" papagayo.unl.edu

may be too much for your network connection.  Do you need all of this
data?
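
Just as an illustration (the choice of products here is only a guess
at what you might keep; substitute whatever you actually need),
trimming the request down to base reflectivity alone would look like:

# ldmd.conf: request only N0R base reflectivity products
request NEXRAD "/pN0R" papagayo.unl.edu

A change to ldmd.conf only takes effect after an 'ldmadmin restart'.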

Along those lines, and unbeknownst to you, we have been including a
1 km national N0R composite in the NIMAGE datastream for about a
week.  This has not been generally announced, but probably will be
the week after next.  The composites are being produced by a GEMPAK
process (a routine written by Chiz that uses GEMPAK facilities) about
every 10 minutes or so (the timing is not exact).  The product is
being delivered as a PNG compressed GINI image, and it can be
unpacked using the readpng routine that you are using for the
other images in the NIMAGE stream.

In addition to the 1 km national N0R composite, we are sending
a 6 km national N0R composite, a 1 km regional/floater N0R composite
and a 10 km national RCM composite in the FNEXRAD datastream.
Together, these images demonstrate the range of what is available in
composite NEXRAD imagery, from 1 km to 10 km resolution.

Here is a more complete list of the products being sent:

Product                    Decoded file size  Frequency    Feed
--------------------------+------------------+------------+--------
10 km national RCM           166880 bytes     30      min  FNEXRAD
1 km regional floater N0R    361280 bytes     7 - 10  min  FNEXRAD
6 km national N0R            401280 bytes     7 - 10  min  FNEXRAD
1 km national N0R          14208533 bytes     10 - 15 min  NIMAGE

The following are some _example_ pqact.conf actions that might be
used to uncompress and write the decoded products:

#########################################################################
#
# FNEXRAD section
#

# NEXRCOMP 6 km National BREF mosaic
FNEXRAD ^pnga2area Q5 (RL) (.*) (.*) (.*) (.*) (........) (....)
        PIPE    -close
        decoders/pnga2area -vl /usr/local/ldm/logs/ldm-mcidas.log
        /data/ldm/gempak/nexrad/NEXRCOMP/6KN0R-NAT/\4_\6_\7

# NEXRCOMP 1 km Regional BREF mosaic
FNEXRAD ^pnga2area Q5 (RM) (.*) (.*) (.*) (.*) (........) (....)
        PIPE    -close
        decoders/pnga2area -vl /usr/local/ldm/logs/ldm-mcidas.log
        /data/ldm/gempak/nexrad/NEXRCOMP/1KN0R-FLT/\4_\6_\7

# NEXRCOMP 10 km National RCM mosaic
FNEXRAD ^pnga2area Q5 (RN) (.*) (.*) (.*) (.*) (........) (....)
        PIPE    -close
        decoders/pnga2area -vl /usr/local/ldm/logs/ldm-mcidas.log
        /data/ldm/gempak/nexrad/NEXRCOMP/10KRCM-NAT/\4_\6_\7

#########################################################################
# NOAAPORT Actions
##
NIMAGE  ^.*/ (TI[^GC]...) KNES ([0-3][0-9])([0-2][0-9])([0-5][0-9])
        PIPE    -close
        util/readpng -n -l logs/png.log
        data/gempak/nport/tmp/\1_(\2:yy)(\2:mm)\2_\3\4


As a neophyte power LDM user ;-), your mission, should you choose to
accept it, is to take these actions and add modified versions to your
own pqact.conf file to write out the images in a manner useful to your
operations.  My only direction would be to log to different log files
if you do not already use the ~ldm/logs/png.log and/or
~ldm/logs/ldm-mcidas.log files.
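
For example (purely a sketch -- the output directory and log file name
below are made up, so substitute your own $METDAT path and whatever
log name you prefer; also remember that the continuation lines in
pqact.conf must begin with tabs), a modified NIMAGE action might look
like:

# sketch only: write the PNG-decompressed GINI images under a local
# data tree and log readpng output to a separate file
NIMAGE  ^.*/ (TI[^GC]...) KNES ([0-3][0-9])([0-2][0-9])([0-5][0-9])
        PIPE    -close
        util/readpng -n -l logs/png_nimage.log
        /metdat/images/sat/NEXRCOMP/\1_(\2:yy)(\2:mm)\2_\3\4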

After you start using the 1 km national composite, you may well want
to back off on the individual NEXRAD stations that you ingest.  This
would help keep disk usage down and free up network bandwidth.
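
For example (the station IDs here are purely illustrative; pick the
radars you actually need), limiting the NEXRAD request to base
reflectivity from a couple of nearby sites would look something like:

# ldmd.conf: N0R only, from two example radar sites
request NEXRAD "/pN0R(EAX|TWX)" papagayo.unl.edu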

>As far as manually removing the files......  I have implemented the
>scouring program for both the radar and satellite data. When I deleted
>the data I checked my disk space and it was still unusually high. I
>couldn't figure out why it was so high. Well I finally discovered why.
>The directory that has the AA, AF, BB.....ZC,ZD,  etc directories ( I
>believe it is the ~ldm/data directory) is filling up with stuff.   I
>was under the impression that the data was being written to my $METDAT
>directory (which it is now that I have the pqact.conf file set up
>properly) but apparently there was also data going into all of these
>sub-directories and not getting scoured. I went through and cleaned
>these files out (I did not know if this was a good or bad thing to do
>but I did it anyway since my disk space was almost used up). It did not
>seem to have a negative effect on data ingestion, display or otherwise.
>Plus I figured it would start writing to it again if there was a
>problem. But it cleared up just about all the space that had been used.
>Because I had about 20GB partitioned for my data and it was almost used
>up. I was thinking there is no way I am using that much space.  So,
>that is where that problem was.

It looks like the default example pqact.conf action distributed with
the LDM is still in place.  That action files every product into a
directory hierarchy like the one you describe.

A quick check of your current pqact.conf file shows that you do have
this action in it:

####
# Bin all the (Non-GRIB) WMO format data, using elements from the
# identifier as path components. The minutes portion of the timestamp,
# and the retransmit code is ignored. The day of the month portion
# of the timestamp is not used as a path component, so it would be
# a good idea to run 'scour' on a less than 24 hour basis.
#
#  "ASUS42 KRDU 012259" gets filed as
#  data/US/KRDU/22/AS42.wmo
#
WMO     ^([^H][A-Z])([A-Z][A-Z])([0-9][0-9]) (....) ([0-3][0-9])([0-2][0-9])
        FILE    data/\2/\4/\6/\1\3.wmo

I strongly recommend that you comment this action out, verify your
pqact.conf file's integrity with 'ldmadmin pqactcheck', and then tell
pqact to reread your pqact.conf file with 'ldmadmin pqactHUP'.  This
will stop the catch-all filing, saving you disk space AND I/O cycles
on your machine and increasing the usefulness of your LDM setup.
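
Concretely, the edit plus the two commands would look like this (the
action shown is the one from your file):

#WMO     ^([^H][A-Z])([A-Z][A-Z])([0-9][0-9]) (....) ([0-3][0-9])([0-2][0-9])
#        FILE    data/\2/\4/\6/\1\3.wmo

ldmadmin pqactcheck     # verify that pqact.conf still parses cleanly
ldmadmin pqactHUP       # have pqact reread pqact.conf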

>Finally, regarding the RUC data. In my pqact.conf file where I
>configure the RUC to grab a certain grid....there is a commented line
>that says "only the 211 grid is available for the RUC".  I guess the
>211 grid does not have CAPE or Helicity.

That is my observation after looking at McIDAS-XCD decoded grids.
All CAPE and Helicity grids appear to be on grid 236.

>I tried changing that to a
>different grid (based on the examples for the ETA) and of course, no
>new RUC data came in. So my question is this.....is it possible or will
>it be possible in the near future to acquire another ruc grid that does
>have cape and helicity on it?

I guess I don't understand the problem.  The parameters you want are
already coming in, and you can decode them into GEMPAK grid files and
then use them.

>In the meantime I started up my
>cronmaster again to just download the ruc until I can get the right
>grid through ldm.

There should be no difference between the datastream feeding the
process that creates the grid files you are FTPing and the datastream
that you are receiving.  I just verified that your upstream host is
requesting all
grids from its upstream host.  I need to verify that papagayo's
upstream host is requesting all of the grids from its upstream host,
but I think that it is.
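
If you want to poke at this yourself, notifyme can show you what an
upstream host is receiving (provided its allow lines permit your host
to connect).  For example, to watch the last hour of HRS products on
papagayo:

notifyme -vl- -h papagayo.unl.edu -f HRS -o 3600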

>Interesting observations here this morning. Up here at MCI we are 48
>over 47 with a light north wind while at the downtown airport (which is
>probably only 10-15 miles away at most) is 62 over 54 with a south
>wind. I suspect this boundary might retreat a little further north
>today but when the low swings out this evening then it should trek
>right up the boundary. So hopefully we'll see some active weather here
>by this evening. Only negative so far is all the cloud cover.    Also,
>you might not have seen this but last night there was a pretty good
>storm up by St.  Joseph....looked like it had some supercell
>characteristics and it did drop some golfball size hail.   :-)

I was more interested in the weather we were experiencing while we were
skiing yesterday.

Since I am currently at home, I will have to look into the HRS feed
to papagayo a little later when I get into work.

Tom