
19991108: eta and ngm models

David,

Here are the sizes of the NGM and ETA grid 211 (CONUS) files we have received recently:
       8516880 Nov  2 20:51 99110300_NGM.wmo
       8516380 Nov  3 09:24 99110312_NGM.wmo
       8479086 Nov  3 20:52 99110400_NGM.wmo
      17106730 Nov  4 10:54 99110412_NGM.wmo
       8436866 Nov  4 21:15 99110500_NGM.wmo
      20604582 Nov  5 14:26 99110512_NGM.wmo
       8435982 Nov  5 21:10 99110600_NGM.wmo
      15479670 Nov  6 13:38 99110612_NGM.wmo
       8453872 Nov  6 21:15 99110700_NGM.wmo
       8644392 Nov  7 09:12 99110712_NGM.wmo
       8480460 Nov  7 21:28 99110800_NGM.wmo
       8508434 Nov  8 09:28 99110812_NGM.wmo

       8534018 Nov  2 20:36 99110300_ETA.wmo
       8517386 Nov  3 08:46 99110312_ETA.wmo
       8508346 Nov  3 20:38 99110400_ETA.wmo
      17207474 Nov  4 10:59 99110412_ETA.wmo
       8466052 Nov  4 20:40 99110500_ETA.wmo
      18978264 Nov  5 14:14 99110512_ETA.wmo
       8466054 Nov  5 20:38 99110600_ETA.wmo
      18201328 Nov  6 13:42 99110612_ETA.wmo
       8484166 Nov  6 20:51 99110700_ETA.wmo
       8719240 Nov  7 09:08 99110712_ETA.wmo
       8502154 Nov  7 20:52 99110800_ETA.wmo
       8507950 Nov  8 09:12 99110812_ETA.wmo


There hasn't been any header change.
Are you referring to a FILE action in pqact.conf, or to a decoder action,
as the source of the 1.2 MB files?

Patterns for the Grid 211 NGM and ETA products would look like:
^[YZ].Q.*/mNGM
^[YZ].Q.*/mETA
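
If the question is about a FILE action, a minimal pqact.conf entry using
those patterns might look like the following sketch (the HRS feed type and
output path are assumptions to adapt to your setup; the pattern and action
fields must be separated by tabs):

HRS     ^[YZ].Q.*/m(NGM|ETA)
        FILE    data/hds/\1_grid211.wmo

The \1 in the file name is the ERE backreference to (NGM|ETA), so the two
models are appended to separate files.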


Any more information on what pqact actions you have would be helpful.
Also, check for any occurrences of exceeding your disk space, and if you are
sending data to a decoder like dcgrib, make sure you are not mixing in
products with other grid projections, such as the #207 Alaska or #215 ETA
grids, for example.
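
A quick way to rule out the full-disk case is to check the filesystem
holding your LDM data directory (a sketch; run it from, or cd to, your own
data directory first):

```shell
# Check how full the filesystem holding the current directory is;
# run from (or cd to) the LDM data directory first. A nearly full
# disk can silently truncate decoded files.
USED=$(df -Pk . | awk 'NR==2 { sub(/%/, "", $5); print $5 }')
echo "Filesystem is ${USED}% full"
if [ "${USED:-0}" -ge 90 ]; then
    echo "WARNING: less than 10% free"
fi
```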

Since both the NGM and ETA are distributed on numerous grid projections, if
you pass multiple grid projections to dcgrib, only those that match the
projection of the file already created will be written. The @@@ template in
the dcgrib output file name ensures that the different grids are written to
different files, e.g.:
# NOAAport ETA grids
# Grid #211 80km CONUS:    ^[YZ].Q.*/mETA
# Grid #212 40km CONUS:    ^[YZ].R.*/mETA
# Grid #215 20km CONUS:    ^[YZ].U.*/mETA
# Grid #214 47.5km Alaska: ^[YZ].T.*/mETA
# Select any/all grids desired from [QRUT]
HRS     ^[YZ].[Q].*/mETA
        PIPE    /usr/local/ldm/decoders/dcgrib -d data/gempak/logs/dcgrib.log
                -g /home/gempak/NAWIPS-5.4/gempak5.4/tables
                PACK data/gempak/hds/YYYYMMDDHH_eta_grid@@@.gem

# NGM model output
# Grid #211 CONUS   80km: ^[YZ].Q.*/mNGM
# Grid #207 Alaska  95km: ^[YZ].N.*/mNGM
# Grid #202 CONUS  190km: ^[YZ].I.*/mNGM
# Grid #213 CONUS 47.5km: ^[YZ].H.*/mNGM
# Select any/all grids desired from [QNIH]
HRS     ^[YZ].[QNIH].*/mNGM
        PIPE    /usr/local/ldm/decoders/dcgrib -d data/gempak/logs/dcgrib.log
                -g /home/gempak/NAWIPS-5.4/gempak5.4/tables
                PACK data/gempak/hds/YYYYMMDDHH_ngm_grid@@@.gem





Steve Chiswell
Unidata User Support

On Mon, 8 Nov 1999, David Fitzgerald wrote:

> Hello all,
> 
> Over the past week or so I've noticed that the eta and ngm models are
> coming in very slowly or never completing.  For example, looking in the
> directory where we write the hds data, today's 0Z eta model is only
> 1.2 MB in size.  Our latencies are pretty good, only about 6 minutes, so
> I'm wondering if there were some header changes for the model data that I
> missed.  Changing our upstream feed site doesn't help, nor have I made
> any changes to our ldmd.conf files or our pqact.conf files.
> 
> Any ideas anyone?
> 
> Dave
> 
> 
> 
> 
> 
> 
> ++++++++++++++++++++++++++
> David Fitzgerald
> System Administrator          Phone: (717) 871-2394
> Millersville University       Fax: (717) 871-4725
> Millersville, PA 17551        E-mail: address@hidden
> 
>