
20030905: LDM-6 installation at UNCA (cont.)



>From: "Alex Huang" <address@hidden>
>Organization: UNCA
>Keywords:  200307301533.h6UFXPLd024978 LDM McIDAS upgrade ETA GFS GEMPAK

Hi Alex,

>Thank you for the update and continuing help,

Glad to help.  On the selfish side, helping get a site set up well
makes our support job easier down the road.

>and congratulations to Chiz, so he has three now?

No, two.  A boy (Chase) and a girl (Cara).

By the way, Chiz pointed out to me that the NOAAPORT ETA grids only go
out to 60 hours, and the only AVN/GFS grids that go out that far are
the AWIPS ones.  It is likely that Matt (or whoever) was expecting
forecasts to extend further into the future than they actually do in
that datastream.  The CONDUIT stream, on the other hand, does carry
model output with forecast times further into the future.  If you want
to think about getting the CONDUIT stream, you should compare its
volume against that of the NOAAPORT HDS stream:

HDS volume:
http://www.unidata.ucar.edu/cgi-bin/rtstats/iddstats_vol_nc?HDS+thelma.ucar.edu

CONDUIT volume:
http://www.unidata.ucar.edu/cgi-bin/rtstats/iddstats_vol_nc?CONDUIT+thelma.ucar.edu

These plots show that the HDS stream contains up to 400 MB per hour,
while the CONDUIT stream can exceed 1.8 GB per hour.
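
If you want to sample what is actually flowing in either feed before
requesting it, the LDM 'notifyme' utility can be pointed at an upstream
host.  For instance (thelma.ucar.edu is used here only because it is the
host in the plots above; the upstream host would have to ALLOW requests
from your machine):

notifyme -vl- -h thelma.ucar.edu -f CONDUIT -o 3600

This lists the products received in the past hour without adding
anything to your local queue.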

Also, to correct my earlier comment about McIDAS-XCD decoding of the ETA
and AVN data: I didn't look closely enough at the ETA data.  The
XCD-produced GRID files contain forecasts out to 60 hours for the 0 and
12 Z runs, and out to 48 hours for the 6 and 18 Z runs.  GRIDs for
AVN/GFS do go out to 120 hours, but they are the AWIPS grids mentioned
above.

Cheers,

Tom

>From address@hidden Fri Sep  5 10:50:45 2003

>From: Matt Rosier <address@hidden>
>Organization: UNCA
>Keywords:  200307301533.h6UFXPLd024978 LDM McIDAS upgrade ETA GFS GEMPAK

Hi Matt,

>Do you know how we can correct the problem of not receiving (or not decoding 
>perhaps) Eta data past 60 hrs or so and AVN data past 84 hrs?

I just looked at the McIDAS-XCD decoded GFS data on storm2, and it
extends to 120 hours; the McIDAS-XCD decoded ETA extends to 72 hours;
and the volume of HDS data you are ingesting indicates that nothing is
missing.

Given all of this, I must assume that the problem you are seeing is in
GEMPAK.  If so, I will ask Chiz to look into it.

>Thanks,

I do see a small 'time' problem on storm2.  You can see from the
latency plot for any feed storm2 is ingesting that the clock is
drifting.  For example, take a look at:

http://www.unidata.ucar.edu/cgi-bin/rtstats/iddstats_nc?IDS|DDPLUS+storm2.atms.unca.edu

I was going to add an ntpdate invocation to 'root's crontab file, but
ntpd is already running:

% ps -aux | grep ntp
ntp        750  0.0  0.2  2400 2392 ?        SL   Aug27   0:01 ntpd -U ntp -g

An incorrect clock is not why you are missing the decoded HDS data you
want, but it will become a problem as the time offset on storm2 grows.
Given this, I decided to investigate.

Quickly looking through the ntpd configuration file, /etc/ntp.conf, I
see what may be an error:

restrict 152.18.69.33 255.255.255.255
server 152.18.69.33             # candler.cs.unca.edu
restrict 152.18.68.2 mask 255.255.255.255
server 152.18.68.2              # craggy.unca.edu

Notice how the first 'restrict' line has an IP address followed
immediately by a netmask, but the second contains the keyword 'mask'
between the two.  The correct syntax includes 'mask' between the IP
address and the netmask.  This may be why the clock was not being set.
I corrected the first entry.
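
For reference, here is how that block should read after the fix (same
servers; only the 'mask' keyword has been added to the first 'restrict'
line):

restrict 152.18.69.33 mask 255.255.255.255
server 152.18.69.33             # candler.cs.unca.edu
restrict 152.18.68.2 mask 255.255.255.255
server 152.18.68.2              # craggy.unca.edu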

I got together with my system administrator and we decided to run
ntpdate on storm2 instead of ntpd.  To get this working, we did the
following (all done as 'root'):

1) stop ntpd:

/etc/init.d/ntpd stop

2) prevent ntpd from being run at boot:

chkconfig --level 5 ntpd off

3) add an entry to 'root's cron that runs ntpdate once per hour:

#
# Set system clock
0 * * * *       /usr/sbin/ntpdate 152.18.69.33 > /dev/null

4) verified that the clock is now correct on storm2
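
For that last step, a quick check that queries the campus time server
without actually stepping the clock is:

/usr/sbin/ntpdate -q 152.18.69.33
date

The offset reported should stay small (a fraction of a second) now that
the hourly crontab entry is in place.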

Finally, since I promised to report back all of the changes we made on
storm2, I offer the following which is more-or-less complete:

- as we noted in a previous email, we modified the 'exec "pqact..."' entry
  in ~ldm/etc/ldmd.conf so that the -d flag is now '-d /home/ldm'.  Previously,
  it was '-d /home/ldm/data', which was non-standard.
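
  An illustrative form of such an entry after the change (the pqact.conf
  file name shown is only an example; the actual entries on storm2 differ
  since the pattern-action files were split up, as described next):

exec    "pqact -d /home/ldm etc/pqact.conf"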

- we split the pqact.conf actions into a GEMPAK file, a general file that
  contains the actions needed for McIDAS, and three files for DIFAX map
  printing.  We also altered the cron job that rotates the DIFAX printing
  pqact files so that, instead of stopping and restarting the LDM, it
  sends a HUP signal to pqact.  This is much more efficient.
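
  The rotation itself can be as simple as a cron entry along these lines
  (purely illustrative -- the file names, schedule, and use of pgrep are
  assumptions, not the exact setup on storm2):

# swap in the afternoon DIFAX pattern file, then have pqact re-read its
# configuration (pqact re-reads on SIGHUP; no LDM restart needed)
0 12 * * *  cp /home/ldm/etc/pqact.difax_pm /home/ldm/etc/pqact.difax && kill -HUP `pgrep -u ldm pqact`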

- I moved where the decoded McIDAS data is stored from
  /data/mcidas to /data/ldm/mcidas.  To keep the McIDAS XCD and ADDE
  serving setup working, I had to change the set of file REDIRECTions
  being used (so that McIDAS will find the data files in the new
  location), and I had to change the McIDAS string XCDDATA to point to
  the new directory:

<as 'mcidas'>
cd ~mcidas/workdata
te.k XCDDATA \"/data/ldm/mcidas

  Since the data storage location changed, I also updated the local copy
  of the REDIRECT template, ~mcidas/data/LOCAL.NAM, to reflect the new
  directory location.
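
  The entries in that file simply pair a McIDAS file-name mask with the
  directory that now holds the matching files; after the move they look
  something like the following (the masks are examples only, not the
  exact set in use at UNCA):

GRID*   /data/ldm/mcidas
MD*     /data/ldm/mcidas
*.XCD   /data/ldm/mcidas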

- I installed McIDAS-X v2003 on three UNCA machines: storm2, typhoon, and
  lab1.  I then removed the old (2002) ADDE remote server configuration
  and installed the new (2003) one as 'root':

cd /home/mcidas
sh ./mcinet2002.sh uninstall mcadde
sh ./mcinet2003.sh install mcadde

  This was done on all three machines.
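
  If you ever want to verify that the remote server is set up to listen
  on one of these machines, a quick check (assuming the mcinet script
  registered the standard 'mcidas' service, TCP port 112, with
  inetd/xinetd) is:

netstat -a | grep mcidas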

- I updated the LOCAL.NAM file on typhoon and lab1 with the same settings
  as on storm2.  I then made the REDIRECTions active by restoring them
  as 'mcidas':

cd ~mcidas/workdata
redirect.k REST LOCAL.NAM

  This was done on all three machines.
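
  To double-check the result on any of the machines, the active
  redirections can be listed as 'mcidas' (this is just the LIST option
  of the same REDIRECT command):

cd ~mcidas/workdata
redirect.k LIST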

- I logged onto typhoon and lab1 as the user 'atms' and set up the McIDAS
  environment so that it uses the datasets on storm2 where available,
  and datasets on remote machines for those that storm2 does not have.
  Things appear to be working correctly.
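
  For reference, pointing a McIDAS client at a particular ADDE server is
  done with the DATALOC command.  A sketch of what this looks like (the
  group names shown are just examples of the standard Unidata real-time
  groups; the actual groups configured for 'atms' may differ):

dataloc.k ADD RTGRIDS storm2.atms.unca.edu
dataloc.k ADD RTPTSRC storm2.atms.unca.edu
dataloc.k LIST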

My observation is that McIDAS-XCD data decoding is working correctly
and that all of the data being created can be served by the ADDE
remote server on all three machines: storm2, typhoon, and lab1.  The
serving on typhoon and lab1 is possible because the data directory
where XCD writes its output on storm2 is mounted on both typhoon and
lab1.

As I finish this, I am waiting for Chiz to get into work (he is a new
dad) so I can ask him about the ETA and GFS decoding problem you
reported.

Cheers,

Tom