
[McIDAS #XRR-599672]: Configuring scouring of McIDAS data



Hi Samuel,

re:
> I know I ran mcxconfig.  Several times in fact.

This is strange given that XCDDATA and MCDATA were not defined in the McIDAS
string table.  One of the first steps mcxconfig performs is to define these
McIDAS strings.

> I think I had the environment setup correctly.  The only thing I noticed 
> after the
> fact was that the ADDE 'DSSERVE' data sets referred to the absolute path 
> /data/ldm/...,
> so I created the directories /data, and /data/ldm, and made a symbolic link 
> from
> /data/ldm/mcidas to /var/data/ldm/data/mcidas, per the recommended ldm 
> instructions
> on where to place the ldm data if you do not wish it to be backed up.

OK.  Symbolic links will work fine.
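For illustration, the layout described above can be sketched as follows.  This
recreates it under a scratch prefix so the commands can be run without root; on
the real system the prefix would simply be / and the target would be the
LDM data directory:

```shell
# Sketch of the symlink layout (scratch prefix for illustration only;
# on the real system these paths are rooted at /).
prefix=$(mktemp -d)
mkdir -p "$prefix/var/data/ldm/data/mcidas"
mkdir -p "$prefix/data/ldm"
ln -s "$prefix/var/data/ldm/data/mcidas" "$prefix/data/ldm/mcidas"

# A file written through the link lands in the real directory:
touch "$prefix/data/ldm/mcidas/TEST.FILE"
ls "$prefix/var/data/ldm/data/mcidas"
```

Anything the ADDE servers reference via /data/ldm/mcidas will resolve to the
same files the LDM writes under /var/data/ldm/data/mcidas.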

> Unfortunately I followed your latest steps to a T, and I'm not seeing any 
> improvement.
> 
> (as mcidas)
> gribadmin latest
> 
> (shows a blank pair of lines.)

Hmm...

> XCD_START.LOG shows DMBIN, and DMGRID have gone for a loop, trying to find 
> their pointer files.

OK.

> P.S. I'm going home for the night.  If you want to log in, the passwords are 
> the same for
> shu1, and shu. If not, I'll see you tomorrow (we can pick it up then)

OK, I'm on.

Here is what I found:

- in my last email, I had instructions for how to create the McIDAS string 
MCDATA:

te.k MCDATA \"/home/mcidas/workdata

  I found that MCDATA had been set to /home/data/ldm/mcidas.  I don't think
  that this would cause a problem, but it is non-standard, so I changed it by
  running the te.k invocation above.

- after redefining MCDATA, I reran the XCD.BAT BATCH script:

<as 'mcidas'>
cd $MCDATA
batch.k XCD.BAT

- next, I decided to see if the spool file for GRIB data was being written to:

<still as 'mcidas'>
dmap.k HRS.SPL
PERM      SIZE LAST CHANGED FILENAME DIRECTORY
---- --------- ------------ -------- ---------
-rw-        12 Jun 19 18:04 HRS.SPL  /home/ldm/data/mcidas
12 bytes in 1 files

  NOTE that the file exists, but is only a few (12) bytes in size!
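A 12-byte HRS.SPL strongly suggests that no GRIB messages are reaching the
spool.  One quick way to confirm this (a sketch; the file name and sampling
interval are assumptions) is to sample the file size twice and see whether it
grows:

```shell
# Sketch: succeed if the given file grew during the sampling interval.
spool_grew() {
  # $1 = file to watch, $2 = seconds to wait between samples
  before=$(wc -c < "$1")
  sleep "$2"
  after=$(wc -c < "$1")
  [ "$after" -gt "$before" ]
}

# On the real system, something like:
#   spool_grew "$MCDATA/HRS.SPL" 60 || echo "HRS.SPL is not growing"
```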

- since HRS.SPL is not being updated with new GRIB messages, I decided to take a
  look at the 'pqact' lines for McIDAS decoding in ~ldm/etc/ldmd.conf.  I see
  that _no_ feeds containing model data are included in the McIDAS-related
  entries:

# McIDAS-XCD entries
exec    "xcd_run MONITOR"
#
exec    "pqact -f IDS|DDPLUS|FNEXRAD|NIMAGE etc/pqact.conf_mcidasA"
exec    "pqact -f FSL2|NLDN|NNEXRAD|UNIWISC etc/pqact.conf_mcidasB"

- a quick look in ~ldm/etc/pqact.conf_mcidasA shows that the file does have the
  necessary lines to run the GRID/GRIB decoder:

#########################################################################
#
# McIDAS-XCD section using 'xcd_run' wrapper for GRIB data
#
# NOTEs:
#        - copy 'xcd_run' from ~mcidas/bin directory to ~ldm/decoders 
#        - edit 'xcd_run' and review settings
#        - make sure that ~ldm/decoders is in the PATH for 'ldm'
#        - stop and restart the LDM to activate
#
#########################################################################

# NOAAPORT-broadcast model output
HRS|NGRID       .*      PIPE
        xcd_run HRS

Conclusion:

- no GRID/GRIB decoding is occurring because there is no data being sent to
  the decoding processes

  The GRID/GRIB pointer files are not being created because no GRIB data
  has been sent through the decoders.  The decoders (DMGRID, which writes
  GRIBDEC.PRO, and DMBIN, which writes GRBFILER.PRO) create their pointer
  files when the first GRIB message is received.

Solution:

- I modified the first ~ldm/etc/ldmd.conf 'exec' line for McIDAS processing by
  adding in the HRS datafeed:

exec    "pqact -f IDS|DDPLUS|HRS|FNEXRAD|NIMAGE etc/pqact.conf_mcidasA" 
                             ^^^
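As a side note, before (or after) restarting the LDM one can verify that HRS
products are actually arriving in the local product queue with the LDM
'notifyme' utility; this is a sketch, and the one-hour offset is arbitrary:

```
# List HRS products received in the last hour (run as 'ldm'):
notifyme -vl- -f HRS -o 3600
```

If nothing is listed, the problem is upstream of pqact (e.g., the feed request
in ldmd.conf), not in the McIDAS decoding setup.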

  After restarting the LDM, GRIB data are now being processed:

<as 'mcidas'>
cd $MCDATA
ls -alt GR*.PRO
-rw-rw-r-- 1 ldm ldm 4 Jun 19 18:30 GRBFILER.PRO
-rw-rw-r-- 1 ldm ldm 4 Jun 19 18:30 GRIBDEC.PRO

[mcidas@localhost workdata]$ gribadmin num

  Model Number   % of 
  Name  of Grids Total
  ===== ======== =====
  AWC          2   <1%
  GFS        621   97%
  WSR2         5   <1%
  x            6   <1%
  ----- -------- -----
  Total      634

[mcidas@localhost workdata]$ grdlist.k RTGRIBS/ALL NUM=10
Dataset position 1      Directory Title= /GFS.96.2008171.1800.120.202.gri
PAR  LEVEL      DAY          TIME     SRC  FHR  FDAY         FTIME    GRID  PRO
---- ---------- ------------ -------- ---- ---- ------------ -------- ----- ----
T      100 MB   19 JUN 08171 18:00:00  GFS  120 24 JUN 08176 18:00:00   N/A PS
Z      700 MB   19 JUN 08171 18:00:00  GFS  120 24 JUN 08176 18:00:00   N/A PS
Z      250 MB   19 JUN 08171 18:00:00  GFS  120 24 JUN 08176 18:00:00   N/A PS
T      500 MB   19 JUN 08171 18:00:00  GFS  120 24 JUN 08176 18:00:00   N/A PS
U      300 MB   19 JUN 08171 18:00:00  GFS  120 24 JUN 08176 18:00:00   N/A PS
Z      300 MB   19 JUN 08171 18:00:00  GFS  120 24 JUN 08176 18:00:00   N/A PS
T      200 MB   19 JUN 08171 18:00:00  GFS  120 24 JUN 08176 18:00:00   N/A PS
U      500 MB   19 JUN 08171 18:00:00  GFS  120 24 JUN 08176 18:00:00   N/A PS
U      200 MB   19 JUN 08171 18:00:00  GFS  120 24 JUN 08176 18:00:00   N/A PS
V      700 MB   19 JUN 08171 18:00:00  GFS  120 24 JUN 08176 18:00:00   N/A PS
Number of grids listed = 10
GRDLIST - done


Important NOTE:

- I notice that no McIDAS data scouring is being done in the 'ldm' account
  (i.e., there are no scouring entries in 'ldm's crontab).  This needs to be
  addressed as soon as you get the chance.
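As a sketch only (the script path and run time below are assumptions; adjust
them for this installation), a crontab entry for the 'ldm' account that runs
the McIDAS-supplied scouring script nightly would look something like:

```
# Illustrative 'ldm' crontab entry: scour McIDAS data nightly at 21:30.
# The path to mcscour.sh is an assumption; it is typically under ~mcidas/bin.
30 21 * * * /home/mcidas/bin/mcscour.sh
```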

re:
> Just as a side note, I have also gone thru the steps to setup the ADDE server
> (http://www.unidata.ucar.edu/software/mcidas/current/users_guide/InstallingtheMcIDAS-XADDERemoteServer.html),
> and opened up the firewall for port 112 outside access.  The ADDE server 
> seems to work on the
> localhost (158.83.1.171-shu1), but if I try to access ADDE from another 
> machine (like using McIDAS's
> "DATALOC ADD" command), I get "pipe read: Connection reset by peer", when 
> trying to access the dataset.

Hmm...

I think that this is related to the fact that the machine name is not yet 
defined:

$ hostname
localhost.localdomain         <- the default

$ cat /etc/hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1       localhost.localdomain   localhost
::1     localhost6.localdomain6 localhost6

This is a guess:  when the request is made to shu1's IP address, the ADDE
process is not started cleanly (or at all).

I think you need to define the hostname by running:

<as 'root'>
hostname shu1.cup.edu

-- edit /etc/hosts and add
158.83.1.171    shu1.cup.edu shu1        <- NB: whitespace is tabs!

After doing this, try the ADDE access again.
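To check the result, something like the following sketch can be used (the
'resolves' helper is hypothetical, and 'shu1.cup.edu' is the name from this
ticket; substitute your own):

```shell
# Hypothetical helper: succeed if the given name resolves (e.g. via /etc/hosts).
resolves() {
  getent hosts "$1" > /dev/null
}

# On the real machine one would check:
#   hostname                 # should now print shu1.cup.edu
#   resolves shu1.cup.edu && echo "resolves OK"
# and, from a remote host, probe the ADDE port:
#   nc -vz shu1.cup.edu 112
resolves localhost && echo "localhost resolves"
```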

Cheers,

Tom
****************************************************************************
Unidata User Support                                    UCAR Unidata Program
(303) 497-8642                                                 P.O. Box 3000
address@hidden                                   Boulder, CO 80307
----------------------------------------------------------------------------
Unidata HomePage                       http://www.unidata.ucar.edu
****************************************************************************


Ticket Details
===================
Ticket ID: XRR-599672
Department: Support McIDAS
Priority: Normal
Status: Closed