
Re: Extended GFS 0.5 degree data



In a previous message to me, you wrote: 

 >Pete,
 >
 >I mentioned the .status file since an SSEC user had a similar problem
 >with the grid #3 data going to XCD where the pattern did not prevent
 >that ascii inventory from getting to their decoder.

Steve,

The .status file is definitely getting into our raw files.

strings gblav2.06061212_F009 | grep 7777 shows:

...
7777GRIB
7777GRIB
7777/afs/.nwstg.nws.noaa.gov/ftp/SL.us008001/ST.opnt/MT.gfs_CY.12/RD.20060612/PT.grid_DF.gr2/fh.0009_tl.press_gr.0p5deg complete (36827951 bytes) at Mon Jun 12 15:32:29 2006
7777GRIB
7777GRIB
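
To see exactly what extra bytes are in a file, here is a quick Python
sketch (untested against the real files) that walks the file message by
message using the total-length field in GRIB2 section 0 and reports any
bytes that do not belong to a GRIB message. It assumes pure GRIB2 input;
GRIB1 encodes its length differently:

    import struct
    import sys

    data = open(sys.argv[1], 'rb').read()
    pos = 0
    while True:
        nxt = data.find(b'GRIB', pos)
        if nxt < 0:
            break
        if nxt > pos:
            # e.g. a status trailer wedged between two messages
            print('%d foreign bytes at offset %d' % (nxt - pos, pos))
        if nxt + 16 > len(data):
            print('truncated section 0 at offset %d' % nxt)
            break
        # section 0: "GRIB", 2 reserved octets, discipline, edition,
        # then the total message length as an 8-byte big-endian integer
        msglen = struct.unpack('>Q', data[nxt+8:nxt+16])[0]
        if data[nxt+msglen-4:nxt+msglen] != b'7777':
            print('message at offset %d not terminated by 7777' % nxt)
        pos = nxt + msglen
    if pos < len(data):
        print('%d foreign bytes appended at offset %d' % (len(data) - pos, pos))

On a clean file it prints nothing; on one of the bad files it should
flag the appended status text after the last message.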

I've modified my pqact line to look like:

CONDUIT	/afs/.nwstg.nws.noaa.gov/ftp/SL.us008001/ST.opnt/MT.gfs_CY.(..)/RD.20(..)(..)(..)/PT.grid_DF.gr2/fh.0(...).*5deg
        FILE    /data/grib2/gblav2.\2\3\4\1_F\5

It used to be

CONDUIT	/afs/.nwstg.nws.noaa.gov/ftp/SL.us008001/ST.opnt/MT.gfs_CY.(..)/RD.20(..)(..)(..)/PT.grid_DF.gr2/fh.0(...).*
        FILE    /data/grib2/gblav2.\2\3\4\1_F\5

That should prevent the status files from getting in there, right?
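
One thing I want to double-check, though: pqact patterns are POSIX
extended regexps and match anywhere in the product ID, so if the status
products are identified by the data path with a ".status" suffix tacked
on (just my guess from the trailer above), ".*5deg" would still match
them, and I would need to anchor the pattern with "5deg$" instead. A
quick check with Python's re module, which searches unanchored the same
way:

    import re

    data_id = ('/afs/.nwstg.nws.noaa.gov/ftp/SL.us008001/ST.opnt/MT.gfs_CY.12'
               '/RD.20060612/PT.grid_DF.gr2/fh.0009_tl.press_gr.0p5deg')
    status_id = data_id + '.status'   # assumed status product ID

    for pattern in (r'fh.0(...).*5deg', r'fh.0(...).*5deg$'):
        print(pattern,
              bool(re.search(pattern, data_id)),     # the GRIB2 product
              bool(re.search(pattern, status_id)))   # the status product

    # fh.0(...).*5deg   True True   -- unanchored, matches both
    # fh.0(...).*5deg$  True False  -- $ anchor rejects the status product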

I'll still need to add a line for the new ftp server naming scheme
that you referred to in the first email yesterday; that's next.
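
Based on the convention in that email (gfs.YYYYMMDD/gfs.tHHz.pgrb2fFFF,
quoted below), I'm thinking of something like the following. Untested,
and it assumes the CONDUIT product IDs will carry the ftpprd path
verbatim with a three-digit forecast hour; the trailing $ is there for
the same reason as above, to keep any .status companions out:

CONDUIT	/pub/data/nccf/com/gfs/prod/gfs.20(..)(..)(..)/gfs.t(..)z.pgrb2f(...)$
        FILE    /data/grib2/gblav2.\1\2\3\4_F\5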

Pete



 >
 >The performance issue of decoding higher resolution data is valid
 >(as is decoding the same data at multiple resolutions).
 >
 >As always, please feel free to contact me with problems you might be
 >encountering with the data stream and/or software since these are
 >the problems that will affect many others as well.
 >
 >Steve Chiswell
 >Unidata User Support
 >
 >
 >
 >On Mon, 2006-06-12 at 16:33, Pete Pokrandt wrote:
 >> Steve,
 >> 
 >> Thanks for the quick reply, I'll compare your files to mine and get 
 >> back to you with what I find.
 >> 
 >> Pete
 >> 
 >> In a previous message to me, you wrote: 
 >> 
 >>  >Pete,
 >>  >
 >>  >The GRIB2 data does take more CPU to decode and store in GEMPAK
 >>  >at the moment because it must uncompress the product, and then repack
 >>  >into the GEMPAK GRIB packing. I'm working to modify the storage so that
 >>  >it can simply write out the compressed bits just as GRIB1 decoding does
 >>  >(when there is no bit mask or tiling involved).
 >>  >
 >>  >As for the output file size, it should be the same as the data stream
 >>  >contents. If it is not, then your pqact FILE action may be either:
 >>  >1) writing the .status file into the raw data file
 >>  >2) receiving the data more than once due to the queue being too small
 >>  >3) disk IO too large so that a second IO stream is opened by pqact
 >>  >
 >>  >If your feed request in LDM has more than one CONDUIT request, you may
 >>  >get the individual GRIB products in a different order, but that wouldn't
 >>  >affect decoding an individual GRIB bulletin.
 >>  >
 >>  >As a quick check, you can see what we have received here:
 >>  >ftp://unidata:address@hidden/native/conduit/SL.us008001/ST.opnt/MT.gfs_CY.12/RD.20060612/PT.grid_DF.gr2
 >>  >and tell me if your decoding works on those files and/or if your file
 >>  >size is different.
 >>  >
 >>  >Steve Chiswell
 >>  >Unidata User Support
 >>  >
 >>  >On Mon, 2006-06-12 at 15:48, Pete Pokrandt wrote:
 >>  >> Steve, et al,
 >>  >> 
 >>  >> One other thing I noticed, it takes a LOT more CPU power and 
 >>  >> disk i/o bandwidth to decode the 0.5 deg GFS files in real time
 >>  >> than it does to decode the 1 deg GFS. In fact, f5.aos.wisc.edu
 >>  >> (dual PIII 1 GHz, 3 GB RAM, data being written to a 3-disk SCSI RAID0
 >>  >> filesystem, 4 GB LDM queue) was not able to keep up decoding the 
 >>  >> 0.5 GRIB2 files into gempak format. The gempak files were truncated, 
 >>  >> as if the GRIB2 data were being overwritten in the queue before
 >>  >> the decoder had gotten to them.
 >>  >> 
 >>  >> I've currently got the GRIB2 -> gempak stuff running on another machine
 >>  >> (dual Athlon MP 2600+, 2 GB RAM, SATA RAID1 filesystem, 4 GB LDM queue)
 >>  >> and it is able to keep up as far as I can tell, with the exception
 >>  >> noted in my prior email: the raw GRIB2 files from CONDUIT do not match
 >>  >> the sizes of the same files ftp'd from tgftp, and the CONDUIT files
 >>  >> give wgrib2 errors and will not work with the WRF.
 >>  >> 
 >>  >> Pete
 >>  >> 
 >>  >> In a previous message to me, you wrote: 
 >>  >> 
 >>  >>  >
 >>  >>  >CONDUIT data users,
 >>  >>  >
 >>  >>  >We would like to increase the 0.5 degree (grid #4) GFS availability to
 >>  >>  >the CONDUIT data stream as discussed at the working group meeting in
 >>  >>  >January:
 >>  >>  >http://www.unidata.ucar.edu/projects/CONDUIT/Unidata_CONDUIT_Status_and_Implementation_Update.pdf
 >>  >>  >
 >>  >>  >Currently, the CONDUIT feed provides F000 through F084 fields for the
 >>  >>  >GFS at the half degree resolution in GRIB2 format. We now have access to
 >>  >>  >the grids through F180 (an additional 1.2GB of data per model cycle)
 >>  >>  >and plan to add this to the data stream.
 >>  >>  >
 >>  >>  >The addition of these files will involve a change in source file names
 >>  >>  >of the existing and additional data files from the NWS ftp server file
 >>  >>  >naming convention: 
 >>  >>  >ftp://tgftp.nws.noaa.gov/SL.us008001/ST.opnt/MT.gfs_CY.HH/RD.YYYYMMDD/PT.grid_DF.gr2/fh.FFFF_tl.press_gr.0p5deg
 >>  >>  >to the NCEP ftp server naming convention:
 >>  >>  >ftp://ftpprd.ncep.noaa.gov/pub/data/nccf/com/gfs/prod/gfs.YYYYMMDD/gfs.tHHz.pgrb2fFFF
 >>  >>  >
 >>  >>  >If you are currently using the 0.5 degree GFS products and the change of
 >>  >>  >file naming would cause problems for your use of the data, please
 >>  >>  >contact address@hidden as soon as possible.
 >>  >>  >Eventually, we would like to remove the legacy 1.0 degree GRIB1 format
 >>  >>  >(grid #3) products currently being sent in CONDUIT for the same
 >>  >>  >F000-F180 time period to allow for additional new data sets. We
 >>  >>  >encourage all users to transition to GRIB2 along with the NWS
 >>  >>  >announced changes so that new data sets can be utilized to their
 >>  >>  >fullest extent.
 >>  >>  >
 >>  >>  >Steve Chiswell
 >>  >>  >Unidata User Support
 >>  >>  >
 >>  >>  >
 >>  >> 
 >>  >> 
 >> 
 >> 


--
+>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>+<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<+
^ Pete Pokrandt                    V 1447  AOSS Bldg  1225 W Dayton St^
^ Systems Programmer               V Madison,         WI     53706    ^
^                                  V      address@hidden       ^
^ Dept of Atmos & Oceanic Sciences V (608) 262-3086 (Phone/voicemail) ^
^ University of Wisconsin-Madison  V       262-0166 (Fax)             ^
<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<+>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>+