[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]

20040413: Vietnam and GEMPAK GRIB decoding (cont.)



>From: Mai Nguyen <address@hidden>
>Organization: National Center for Hydro-Meteorological Forecasting of Vietnam
>Keywords: 200312020023.hB20N4p2027742 IDD LDM Linux GEMPAK

Hi Mai,

>I am at home now, but try to put more questions for
>you to get the answers tomorrow morning (sorry for
>being cunning ;-)

It is interesting communicating when our work days are
180 degrees out of alignment :-)

>1) '/' IN THE STATION ID
>
>> >>    48/89
>> >>    48/93
>> >
>> >Those are correct IDs for our "national" stations.
>> I
>> >have put the station index file in
>> >/ldm/VNdata/SYNOP/VN_synstns.txt.

>Are you sure that dclsfc will discard those entries?

Yes, absolutely.  I talked with Chiz about this, and he
said that they should be discarded.  Our test decode
of the AAXX00 file you have on met_research3 confirmed
that the data for stations with invalid IDs did not
get decoded.

>So what can I do with them?
>  
>  a) change the code of dclsfc (to treat '/' in that
>     group as a normal character)?

I would strongly recommend against this.  The surface
decoding software was developed and is supported by
the US NWS.  They are not likely to change the code to
support invalid station numbers, so you would be put
into a position of changing the decoder each time
you installed a new release of GEMPAK.  On top of that,
you would then have to get the source code distribution
and build the distribution from it.  Your life would
be made a LOT harder if you decide to pursue this course
of action!

>  b) find those station ID in the messages, and
>     substitute the '/'s with some specific number
>     (corresponding to our extra IDs station list with the
>     '/' replaced by that same number) ? 

This is probably the best approach.  You would need
to develop a unique mapping from your invalid station
numbers to valid AND unused ones.  You would then need
to add this information to the appropriate GEMPAK station
table so the station location is known and, therefore,
plottable.
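As a concrete sketch of option b): the replacement IDs below,
48589 and 48593, are made up for illustration only.  You would
need to verify that whatever numbers you actually choose are
genuinely unused, and then add them (with the station locations)
to the GEMPAK station table:

```shell
#!/bin/sh
# Sketch only: 48589 and 48593 are hypothetical unused IDs.
# Remap the invalid '/' station numbers before decoding; the
# replacement IDs must also go into the GEMPAK station table
# so the stations stay plottable.
remap_ids() {
    sed -e 's|^48/89 |48589 |' \
        -e 's|^48/93 |48593 |'
}

# Demo on two fabricated report lines:
printf '48/89 41550 82102\n48/93 32566 80504\n' | remap_ids
```

The same remap_ids filter could then sit in front of dclsfc in
whatever processing script you end up using.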

>  c) Ignore it. But then I would lose all those reports!

This is what happened when we did the test decode last
week.

>What is your recommendation? (I don't think changing
>the station ID is in my power!).

I agree, b) is the best option.

>2) SCRIPT FOR DCLSFC
>
>I will try it tomorrow. So I need to put all sequence
>in a script?

Yes, that is the easiest thing to do.  I was thinking that
one could easily create a Bourne ('sh'), C ('csh'),
or Tcl ('tclsh') script to do the work.  Since I know
Tcl, my preference would be to do the work in it.  I
find Tcl much simpler to use since more of the tools
one typically needs are built in.  I may try to
create a short Tcl script that does all of the work
we suggested in previous emails.  I will look at this
when I get a few minutes.  I'll let you know...
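For what it is worth, the "sequence in a script" idea is nothing
more than this sh skeleton; the echo lines are placeholders, and
the real commands are the ones from the earlier emails:

```shell
#!/bin/sh
# Skeleton only: replace the placeholder steps below with the
# real commands (e.g., the station-ID remapping and the dclsfc
# run) in the order they were given in the earlier emails.
run_sequence() {
    echo "step 1: fix station IDs"   # placeholder
    echo "step 2: run the decoder"   # placeholder
}

run_sequence
```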

>3) MY PREVIOUS QUESTIONS:
>   + Degrib our model data automatically by ldm?
>   + Meta files (how to create them)
>   + Cloud symbols for synop observations

Yes, these still need answering.  They will be addressed
in a separate email.

>4) AND SOME NEW QUESTIONS:
>   + How can I set ldm to automatically delete the old
>     files (older than a certain number of days)?

The LDM comes with a utility named 'scour' that is
used to delete old data.  'scour' is run out of cron
at least once per day.  It is configured by modifying
entries in ~ldm/etc/scour.conf.  When you look at
scour.conf, you will see how things are set up.  Note that
not all entries in scour.conf are needed on your machine;
they are included as examples of how to set up scouring.

As to running 'scour' out of cron, it is most easily
done via the LDM script 'ldmadmin'.  Here
is an example cron entry that runs 'scour' once per
day at 01:00 local time:

0 1 * * * bin/ldmadmin scour > /dev/null 2>&1

For your convenience, I added this entry to the crontab
file on met_research:

<as 'ldm'>
setenv EDITOR vi
crontab -e

#
# Scour datafiles once per day
1 0 * * * bin/ldmadmin scour > /dev/null 2>&1


Now, all that you have to do is edit ~ldm/etc/scour.conf
and set how many days of data are to be kept in directories
that are named there.
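For reference, each scour.conf line is just a directory, the
number of days of data to keep there, and an optional filename
pattern.  The entries below are only an example: the SYNOP
directory is the one you mentioned, but the GRIB directory and
the retention periods are guesses you would adjust:

```
# directory              days   [optional filename pattern]
/ldm/VNdata/SYNOP          7
/ldm/VNdata/GRIB           3    *.grb
```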

>   + Is that possible to draw isolines based on the
>     synoptic observations? 

Yes.
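In case it helps you get started, the usual GEMPAK route is an
objective analysis of the surface data into a grid, followed by
contouring of that grid.  The three programs below are real
GEMPAK programs, but I am not spelling out their parameters here,
so check the help for each one:

```
oagrid    # create/size a grid file suitable for the analysis
oabsfc    # Barnes objective analysis of the surface (SYNOP) data
gdcntr    # contour (i.e., draw isolines from) the analyzed grid
```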

>   + It's funny with the QUICKSCAT info. When I load
>     it, NMAP2 shows that the information is available, but
>     it shows nothing. The same with ATCF. NMAP2 gives no
>     error for it. But as I looked in the datatype.tbl of
>     gempak, the variables defining directory of the data
>     is not defined. How can NMAP2 see that it's available?
>     Is that a bug of NMAP2?  

I will leave this for Chiz to answer.

>I really hope that you are not too tired with my
>trivial questions.

Not yet ;-)  As you gain more experience with GEMPAK, the
LDM, and Linux, your questions should become more advanced.
That is when the fun begins...

>Thank you very much as always.

No worries.

Cheers,

Tom
--
NOTE: All email exchanges with Unidata User Support are recorded in the
Unidata inquiry tracking system and then made publicly available
through the web.  If you do not want to have your interactions made
available in this way, you must let us know in each email you send to us.