
20020219: realtime FTPs to motherlode.ucar.edu (cont.)



>From: "Kevin Polston" <address@hidden>
>Organization: NOAA
>Keywords: 200202012111.g11LBCx14907 LDM installation

Kevin,

>Well I've got another good news/bad news deal to report.  I'll start
>with the good news.  With regards to the radar data.....I have got
>things configured and radar data is coming in.  :-)  I checked the
>ldmd.log file and it appeared as if it was trying to put the data
>somewhere but in a slightly different directory structure than what I
>had.

Well, the LDM will only FILE things where you tell it to FILE things,
so it was trying to write to exactly where you told it to write :-)

>So I changed my directory structure to match what was being shown,
>then I had to change my group permissions to match LDM and then it
>appeared to be working. Initially on my log file it was saying
>permission denied. I assumed it was because the directory structure
>didn't match and/or the group setting needed to match.

This was my comment in a previous email.  The LDM will FILE things where
you tell it to FILE them _IF_ the user running the LDM has permission
to write to that location.  Evidently, the location where you were
trying to write data had permissions that did not allow the user running
the LDM (typically 'ldm') to write there.
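One way to see this before the LDM trips over it is to look at the permission bits directly.  The sketch below uses a scratch directory so it is self-contained; on your system you would point `ls -ld` at the actual data directory your FILE actions write under (a path of your own, not one from this note):

```shell
# Inspect the permission bits on a directory before asking the LDM
# (running as user 'ldm') to FILE products into it.  A scratch
# directory stands in for the real data directory here.
dir=$(mktemp -d)
chmod 555 "$dir"              # no write bit: LDM writes would fail
ls -ld "$dir" | cut -c1-10    # prints dr-xr-xr-x
chmod 775 "$dir"              # owner and group may write
ls -ld "$dir" | cut -c1-10    # prints drwxrwxr-x
rmdir "$dir"
```

If the third character group lacks a `w` for whichever of owner/group/other the 'ldm' user falls into, the FILE action will get "permission denied".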

>In any case I got
>it to work and it is coming in. The bad news is that the data has been
>running consistently 45 minutes to an hour behind the actual time. I
>don't understand what the lag could be.

Network congestion between you and your upstream feed site; the data getting
to your upstream feed site late; and/or the data getting to your upstream's
upstream feed site late.

>I stopped and restarted LDM
>thinking that perhaps that might "reset" things to start from the
>beginning but that didn't work. I thought maybe it could be because I
>have too much data coming in but that didn't seem likely as I did not
>have very much set up in my pqact.conf file at the time. So I am unsure
>why.

You may, in fact, have too much data coming in.  Remember, the file that
controls what data will be sent to you is ~ldm/etc/ldmd.conf.  The
file that controls what will happen to the data that you receive is
~ldm/etc/pqact.conf.
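To make the division of labor concrete, here is a minimal, illustrative pair of entries.  The NEXRAD pattern and output path below are assumptions for the sake of example, not taken from your files:

```
# ~ldm/etc/ldmd.conf -- what data to REQUEST from the upstream host
request NEXRAD "/pN0R" papagayo.unl.edu

# ~ldm/etc/pqact.conf -- what to do with each product once it arrives
# (continuation lines must begin with a tab)
NEXRAD  ^SDUS5. .... ([0-3][0-9])([0-2][0-9]).*/pN0R(...)
        FILE    data/nexrad/N0R/\3/\1\2.nids
```

A request line with no matching pqact.conf action means data arrives and sits in the queue unused; a pqact.conf action with no matching request means nothing ever triggers it.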

>I was going to try and grab the N0R data from regions instead of
>everywhere but I haven't done that yet. I don't know if that would help
>or not. 

The request line for NEXRAD that you included in your ldmd.conf file
(sent in a previous email) showed that you were requesting ALL of the
N0R data:

request NEXRAD "/pN0R" papagayo.unl.edu

This says to request ALL N0R products from papagayo.unl.edu.  Given
that papagayo is receiving all N0R products, you should be receiving
them all too.
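Requesting by region would mean tightening that regular expression.  A hypothetical example; the three-letter station IDs here are placeholders, so substitute the radars you actually want:

```
# Illustrative only: request N0R from three radars instead of all of them
request NEXRAD "/pN0R(TLX|ICT|OAX)" papagayo.unl.edu
```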

On the issue of getting LOTs of data, your request for NIMAGE data:

request NIMAGE ".*" 129.93.52.150

is also requesting all of that data.  This is all GINI imagery for both
GOES East and West.  This is a LOT of data.  One product alone, the 1km
VIS image, is approx. 14 MB, and it is sent 4 times per hour for each
satellite.  This means that just for this one product you will be
receiving approx. 112 MB of data each hour.
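A quick sanity check on that figure (14 MB per image, 4 images per hour, 2 satellites):

```shell
# 14 MB/image x 4 images/hour x 2 satellites
echo "$((14 * 4 * 2)) MB/hour"
```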

>On to model data..........
>
>I don't know what the problem is here but for some reason grabbing the
>model data through the ldm is proving to be a problem. As I mentioned
>before I am getting significant "drop-outs" of data when I try to view
>it in NMAP2. Like if I look at a 500mb Height/Vorticity loop, depending
>on the model, as some point there will be no images shown, or just
>vorticity for a few loops or maybe just the height field. What I can't
>understand is why such a basic loop as that would be so jumbled
>up....especially when the size of the data files are quite large! For
>instance this morning.....I tried to look at the 12Z ETA. It was over
>44MB in size yet I couldn't even get to 24 hours before my loop "fell
>apart" (no data was shown after that).

This sounds more like a decoding problem than a data receipt problem.

>I have not yet figured out how to
>get the data to be placed in the proper directory so I have to manually
>move the data there.

Here let me stress that I am no GEMPAK expert.  A quick read of the
LDM pqact.conf actions listed in:

LDM pqact.conf examples
http://www.unidata.ucar.edu/packages/gempak/examples/pqact/pqact.html
  LDMBRIDGE Decoder actions 
  http://www.unidata.ucar.edu/packages/gempak/tutorial/pqact/decoders.tbl

tells me that the GEMPAK table that is used to determine where decoded
grid files are written can be found in $GEMTBL/grid/gribkey.tbl.
From the text example, this may look like:

HDS|NMC2|NOGAPS ^(/u/ftp/gateway|[HOYZ]|US058).*
        PIPE    decoders/dcgrib2 -d data/gempak/logs/dcgrib.log
        -e GEMTBL=/home/gempak/NAWIPS/gempak/tables
#

Looking on our system, I can find the file:

/home/gempak/NAWIPS/gempak/tables/grid/gribkey.tbl

and it contains patterns for where/how to write decoded grid files:

more /home/gempak/NAWIPS/gempak/tables/grid/gribkey.tbl
!center sub modelid grid   Output grid file name                   max_grids
!
! ETA
007   x   084,085   212    data/gempak/model/YYYYMMDDHH_eta@@@.gem   12000
007   x   084,085   216    data/gempak/model/YYYYMMDDHH_eta@@@.gem    6000
007   x   084,085   104    data/gempak/model/YYYYMMDDHH_eta@@@.gem   12000
007   x   084,085   @@@    data/gempak/model/YYYYMMDDHH_eta@@@.gem    2000
!
! ETA/ICWF
007   x   089       @@@    data/gempak/model/YYYYMMDDHH_icwf@@@.gem   1000
!
! NGM
007   x   039       @@@    data/gempak/model/YYYYMMDDHH_ngm@@@.gem    2000
!
! RUC
007   x   105       @@@    data/gempak/model/YYYYMMDDHH_ruc@@@.gem    7500
007   x   086       @@@    data/gempak/model/YYYYMMDDHH_ruc@@@.gem    2000
!
! AVN grids

So, it seems that this file is the key to your decoding.

>I know this is not what we want to do. In my
>pqact.conf file when I add the line that should tell it where to go I
>get errors in my log file (it says "write error") and nothing is saved. 
>Here is an example of how I had it set up:
>
>
># Select any/all grids desired from [QAIJH]
>HRS    ^[YZ].[H]... KWB. ([0-3][0-9])([0-2][0-9]).*(/mAVN|/mSSIAVN)
>       PIPE    /usr1/nawips/exe/linux/dcgrib2 -d
>               data/gempak/logs/dcgrib.log
>               -e GEMTBL=/usr1/nawips/gempak/tables
>               /usr1/nawips/metdat/gempak/grids/YYYYMMDDHH_avn213.gem
>#
>
>The last line is the line I added in and it seems to me to be correct
>but as I said...it causes errors.

Again, I am out of my depth here, but your line "appears" to be OK (I
can't tell from this listing whether you have the appropriate tabs).
Suffice it to say that each line in the pqact.conf action (after the
one that begins with HRS) must start with a tab.
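One way to check the tabs is with `cat -t`, which renders each tab as `^I`; continuation lines that begin with spaces instead of a tab are the broken ones.  The sample file below is made up for illustration (some LDM releases also offer `ldmadmin pqactcheck` for syntax checking):

```shell
# Build a two-line sample pqact action; line 2 wrongly starts with
# spaces instead of a tab.
printf 'HRS\t^pattern\n    PIPE\tdecoders/dcgrib2\n' > /tmp/pqact.sample

cat -t /tmp/pqact.sample           # tabs show up as ^I
grep -n '^ ' /tmp/pqact.sample     # flags lines that start with spaces
```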

What are the errors you see?

>When I take that line out the data
>comes in and is stored but in the the wrong place.

It will be stored in the directory indicated in the gribkey.tbl file.

>So my problem and frustration with the model data is this - I can't
>figure out how to save it in the proper directory even though I am
>following the examples from Chiz's page and substituting my directory
>structure in its place, and the data seems to be incomplete or displays
>incomplete despite the large size of the data!
>
>This leads me into my surface and upper air decoding problems. I am
>trying to do just get basic surface and upper air data and decode
>it....however...I am getting a slightly different problem. I am doing it
>similarly to the model data where at the end I am appending the line
>that tells it where to store the data and in what format. Here is an
>example from the log file as to what it is saying:
>
> Feb 18 18:00:38 localhost pqact[1019]: pipe_dbufput:
>/usr1/nawips/exe/linux/dcmetr-b9-m72-s/usr1/nawips/gempak/tables/stns/sfstns.t
> bl-p/usr1/nawips/gempak/tables/pack/metar.pack-ddata/gempak/logs/dcmetr.log-e
> GEMTBL=/usr1/nawips/gempak/tables/usr1/nawips/metdat/gempak/surface/YYYYMMDD_
> surf.gem

This appears to be saying that the decoder being used is:

/usr1/nawips/exe/linux/dcmetr-b9-m72-s

Is this what you intended?

As a comparative example, here is the METAR (and SPECI) decode entry
from the pqact.conf file on one of the machines that we run:

#
# METAR ( and SPECI) surface observations
#
DDS|IDS ^S[AP].* .... ([0-3][0-9])([0-2][0-9])
        PIPE    decoders/dcmetr -b 9 -m 72 -s sfmetar_sa.tbl
        -d data/pub/decoded/gempak/logs/dcmetr.log
        -e GEMTBL=/home/gempak/NAWIPS/gempak/tables
        data/pub/decoded/gempak/surface/YYYYMMDD_sao.gem

(again, mind where there are supposed to be tabs for white space!).

Here, Chiz has copied the GEMPAK 'dcmetr' decoder to the directory
~ldm/decoders.

>Here is the line from the log file that talks about the upper air
>decoding. Looks like the same thing.

>Feb 18 12:36:40 localhost pqact[1019]: pipe_dbufput:
>/usr1/nawips/exe/linux/dcuair-b24-m16-ddata/gempak/logs/dcuair.log-eGEMTBL=/us
> r1/nawips/gempak/tables-s/usr1/nawips/gempak/tables/stns/snstns.tbl/usr1/nawi
> ps/metdat/gempak/upperair/YYYYMMDD_upa.gem
>write error 

The write error might indicate that pqact was not able to write the product
to stdin of the decoder (in this case I would check to make sure that
the decoder reference is correct), or that the decoder is unable to write
its output to the particular directory (the permission thing again).

>Not only does this not decode or get written properly but it also starts
>eating up a huge amount of resources on my machine. I have to go in and
>kill off those processes and then the machine gets back to normal. So
>obviously I have those lines commented out for now. 
>
>I also added a bunch of lines in my pqact.conf file to try and get text
>data. (I was feeling bold after my success in getting the radar data to
>store). Well I have the lines in the pqact.conf file set up properly but
>here is what I am getting from the log file. It could be for any
>product...not just this one I am showing you:
>
> Feb 18 12:36:41 localhost pqact[1019]: unio_open:
>/usr1/nawips/metdat/nwx/pubprod/area/2002021812.area: Permission denied

This looks like a directory permission thing.

>Now I have double checked my directory structure and it is correct.

Have you triple-checked the permissions on all of the various directories
that you are asking your LDM and LDM-run GEMPAK decoders to write to?
One quick way of testing the ability of the LDM to write to a directory
is to:

<login as the user running your LDM>
touch directory_to_test/xxx

If the file xxx is created in the directory 'directory_to_test' (fill
in the appropriate directory), then the LDM will be able to write to
that directory as well.  If not, then you need to work on directory
permissions.
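The same test can be scripted over every directory your actions write to.  The directory list below is illustrative; substitute the output paths from your own pqact.conf, and run it as the user 'ldm':

```shell
# Report, for each data directory, whether the current user can
# create files there.  The paths listed are examples only.
for d in /usr1/nawips/metdat/gempak/grids \
         /usr1/nawips/metdat/gempak/surface \
         /usr1/nawips/metdat/nwx/pubprod/area
do
    if touch "$d/.write_test" 2>/dev/null; then
        rm -f "$d/.write_test"
        echo "OK      $d"
    else
        echo "DENIED  $d"
    fi
done
```

Any directory that reports DENIED needs its ownership or mode fixed before the corresponding FILE or PIPE action will work.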

>You
>can also see I have it set up to write as YYYYMMDDHH_area. That seems to
>be working. However....I always get the "permission denied" error. Well,
>as I said I went through and set my configurations as gempak :users (for
>my owner and group names, ldm is set as   ldm :users).  So I am at
>somewhat of a loss here. 
>
>Now onto the satellite data.  The two examples I had in my pqact.conf
>file trying to display the McIdas files ..I put those in there because I
>saw them in Chiz's examples. I believe I have that set up correctly but
>I do not have the SATANNOT or SATBAND files I need.

If you grabbed the ldm-mcidas distribution to get the pnga2area decoder,
then you have the SATANNOT and SATBAND files.  If you didn't grab
the ldm-mcidas package, then where did you get the pnga2area decoder?

>I don't have them on
>my machine either. So that is why they aren't working.  What I am waiting
>on is the routine to uncompress the "regular" imagery.  I would like to
>give that a try and see if I can get that working but I guess I need to
>wait on you to send the info on what I have to do.

"Regular" meaning the NOAAPORT GINI imagery.  I will touch base with Chiz
on that tomorrow.

>I will probably set
>up the McIdas data to look at the CAPE, LI and Precipitable Water images
>to go along with the other imagery. But I would like to try and get the
>vis, ir and wv imagery set up now.....like what I am downloading from
>motherlode. 

Like I said, if you grabbed the ldm-mcidas distribution (either binary
or source) you will have the SATANNOT and SATBAND files.  Once you find
them, copy them to the ~ldm/etc directory (do the copy as the user 'ldm').
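If you want to hunt for the two files, a `find` across the distribution directory will turn them up.  The sketch below builds a toy source tree so it is self-contained; on your machine `src` would be the unpacked ldm-mcidas directory and `dest` would be ~ldm/etc:

```shell
# Sketch: find SATANNOT and SATBAND under a source tree and copy
# them into a destination directory.  Temp directories stand in
# for the ldm-mcidas tree and ~ldm/etc.
src=$(mktemp -d)
dest=$(mktemp -d)
mkdir -p "$src/data"
touch "$src/data/SATANNOT" "$src/data/SATBAND"   # toy stand-ins

find "$src" \( -name SATANNOT -o -name SATBAND \) -exec cp {} "$dest/" \;
ls "$dest"
```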

>Finally, I realize I need to have something set up in my ldmd.conf file
>to acquire the profiler data. As you can probably tell from my
>pqact.conf file I have the lines in there ready to go to decode the
>profiler data but I guess I need to download it first in order to decode
>it!  :-) What would I add to the ldmd.conf file to start getting the
>profiler data to come in?

Change:

request DDPLUS|IDS|HRS ".*" papagayo.unl.edu

to:

request DDPLUS|IDS|HRS|FSL2 ".*" papagayo.unl.edu

in ~ldm/etc/ldmd.conf and then stop and restart the LDM.

>I have added the line to the ldmd.conf file to ALLOW   ANY to come in.
>Check to see if it is correct please...I was unsure whether to type it
>in just as you had showed or as the example already in the file. 

I still can not ping, ldmping, or run a notifyme to mkc-65-26-24-74.kc.rr.com.
I can do a traceroute to the machine; the one I ran took 27 hops to get
to you from papagayo.  This is not promising.

>Well, thats where I stand now Tom. I felt pretty good after I got the
>radar data to store in the proper directory, and thought I could get the
>other data to do the same but unfortunately not yet. So I am going to
>keep trying and I hope you will have some light to shed on all this.
>
>Before I go I wanted to ask you something. You mentioned you were the
>McIDAS guy there.

Yup.

>Did you ever work with or were involved with VDUC?

I have had contact with them through my connections with the SSEC McIDAS
Users Group.  VDUC is now SATEPS, isn't it?

>I
>am guessing you probably were. When I was interning at SELS back in the
>early-mid 1990's I got pretty proficient at using and understanding VDUC
>which I guess is basically McIDAS.

Right.

>I was upset to see them abandon it
>in favor of gempak and N-AWIPS (even though these are nice systems I
>still would prefer VDUC/McIDAS). I especially liked the roam/zoom
>capability and the easy way to toggle data/images on or off. There were
>actually very many features I liked. Is anyone other than universities
>using McIDAS anymore?

Yes, it is in use world wide by a variety of users:  EUMETSAT, Spain,
ABoM (Australia Bureau of Meteorology), SATEPS (NOAA), TASC, etc., etc.,
etc.

>It seems like alot of universities are going to
>gempak.

Yes.  Once GEMPAK is set up and decoding is running, GEMPAK is pretty
easy to use (through NMAP/NMAP2/GARP).  It is also very powerful for
processing gridded data.  McIDAS' forte is satellite imagery.

>Heck, I wouldn't mind setting up McIDAS here and using
>it......but I will save that thought for another day.  :-)

Well, now you are talking my language.  I personally think that setting
up McIDAS is pretty straightforward.  The very cool thing about McIDAS
is that _none_ of the data needs to be held locally.  McIDAS ADDE
allows you to point at cooperating servers anywhere on the net and
start displaying/analyzing/etc.  This is one of McIDAS's real strengths.
ADDE has allowed a couple of Unidata sites to trash their LDM ingest
of data and go to simply accessing things from remote sites.  All-in-all
this is pretty easy to do.

>P.S.  I just ran the command you mentioned to check for the last hours
>worth of radar data.....it seems the upstream feed site is running an
>hour behind the actual time.

Be careful to not misinterpret the information coming back from notifyme.
The invocation I sent:

notifyme -vxl- -f NEXRAD -h papagayo.unl.edu -o 3600

asks papagayo to tell you about the NEXRAD products it has received
in the past 3600 seconds (1 hour).  Virtually all LDMs are set up to
keep one hour's worth of data in their queues, so this command _should_
show you data that is an hour old to begin with.  If you let the
command run, eventually you will see data as it is received.  At that
point you can determine if the data feed to papagayo is late or on time.

>Is this always like this? Its not that
>useful if it is an hour behind the actual time....

No, it is definitely not always like this.  I just checked and see
that the site that papagayo is feeding NEXRAD data from is running
behind.  The data getting to you has virtually no delay from papagayo
and its feeder.  The question is why papagayo's feeder is running
late; I can't answer that.

>Also, the problems I mentioned in my last e-mail I have solved (ie, the
>datatype.tbl problem, also starting ldm from the ldm directory, the RUC
>data is coming in, etc).

OK.

As a final word, the tactic I would take in getting your ingestion/decoding
working correctly is to tackle one thing at a time.  To me this means
commenting out the ingest of NIMAGE data (in ldmd.conf) for right now
and concentrating on the decoding of model and point data by GEMPAK.
That way, you can be assured that receipt of data is not what is causing
your problems.

Lastly, figuring out how we could get onto your machine would really
help in getting things setup on your end.

Tom