
[CONDUIT #YFQ-379002]: Re: missed GFS 1 degree data from CONDUIT



Jerry,

Back in August, we began the transition to the new NCEP LDM, and the top
tier LDMs now operationally feed the 0.5 degree GFS and RUC grib2 data
via this route. As a result, we are not feeding the complete redundant
sets of NCEP products to the top tiers at this time, as we were while
testing. We will be transitioning the ensembles next. The good news is
that as we turn off the data sets from TGSV32, the rest of the data
should be more timely in getting processed into the LDM queue.

I have the complete set coming to a machine here at Unidata that I can
configure so that you can get some of the products. Pete would be able to
request them from the top level NCEP on his machine, but he would have to
configure his allow lines so that the additional products would not be
leaked out to unsuspecting downstream sites. I'll notify you of a host
and request configuration you can feed from for testing. I just got back
into town, so I have some other things pending as well.
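For readers following along, the kind of ldmd.conf entries being described
might look like the sketch below. The hostnames and product-ID patterns
here are placeholders for illustration only, not the actual NCEP or
downstream configuration:

```
# ldmd.conf sketch -- hostnames and patterns are hypothetical.

# Request the full CONDUIT feed from a (hypothetical) top-level NCEP host:
request CONDUIT ".*" conduit.example.noaa.gov

# Allow a downstream site to receive only its expected subset (e.g. a
# GFS 1 degree pattern), so the additional products are not leaked:
allow CONDUIT ^idd\.example\.edu$ ".*gfs.*1p0deg.*"
```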

Steve Chiswell
Unidata User Support



> 
> Hi Steve,
> 
> Just checking if you have had a chance to look into this yet?
> 
> Thanks,
> 
> Jerry
> 
> Jerrold Robaidek wrote:
> >
> > Hi Steve,
> >
> > Sorry to get back so late on this, but there is never a time when things
> > aren't busy these days.
> >
> > If you would still like to set up SSEC to feed directly from the NCEP
> > ldmhost (ftpprd.ncep.noaa.gov) that would be great.
> >
> > Since you mentioned that the product names will change, I'd like to do
> > this in two stages.
> >
> > I'd first like to start bringing the data in on hutch.ssec.wisc.edu.
> > Once I get all of my users switched to the new feed, I'd then like to
> > use pepe.ssec.wisc.edu as the primary machine getting data from NOAA and
> > turn off the feed to hutch from NCEP.   Pepe.ssec.wisc.edu would then be
> > able to feed downstream sites ... although I don't know how many.  Here
> > are its specs:
> >
> > Specs on pepe.ssec.wisc.edu:
> > Intel(R) Xeon(TM) CPU 3.20GHz
> > 4 GB memory
> > 4 GB product queue
> > running Red Hat Enterprise Linux WS release 4 (Nahant Update 2)
> > (We do use the EXP and SPARE feeds on pepe.ssec.wisc.edu for other
> > products, and pepe.ssec.wisc.edu does do some other processing.)
> >
> > Specs on hutch.ssec.wisc.edu:
> > Intel(R) Xeon(TM) CPU 3.40GHz
> > 3 GB memory
> > 4 GB product queue
> > running Red Hat Enterprise Linux WS release 4 (Nahant Update 3)
> >
> >
> > After we switch to pepe as the primary pulling in CONDUIT, I'd like to
> > allow hutch to feed off of someone else so we can have a backup in case
> > pepe goes down.
> >
> >
> > Hutch.ssec.wisc.edu will be our backup McIDAS XCD machine running in
> > parallel with our primary XCD server.  It will be serving GRIB data from
> > the NOAAPORT and CONDUIT feeds via ADDE to internal users and feeding
> > internal users via ldm.
> >
> > When do you think we can point hutch at ftpprd.ncep.noaa.gov ?
> >
> > Thanks,
> >
> >
> > Jerry
> >
> >
> > Steve Chiswell wrote:
> >> Jerry,
> >>
> >> All products from the new server will be unique with regard to
> >> MD5 checksum (i.e., doubling the data volume for products
> >> already received until the tgsv32 injection is removed).
> >>
> >> Steve Chiswell
> >> Unidata User Support
> >>
> >>
> >> On Thu, 2006-08-03 at 13:00 -0500, Jerrold Robaidek wrote:
> >>> Hi Steve and Pete,
> >>>
> >>> I would prefer this change occur (early) next week ... I have to get
> >>> all naming conventions from ftpprd ... I may even have them around
> >>> here some place.  I will need to change some of my pqact entries, and
> >>> so will some of my local users.  When is good for you Pete?
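The pqact entries in question would be LDM pattern-action rules keyed on
the old product-ID paths; a sketch is below. The pattern and output path
are purely illustrative (the real ftpprd.ncep.noaa.gov names would be
substituted once known), and note that pqact.conf fields must be
tab-separated:

```
# pqact.conf sketch -- product-ID pattern and file path are hypothetical.
CONDUIT	^data/nccf/com/gfs/prod/gfs\.(........)/gfs\.t(..)z\.pgrbf(..)
	FILE	data/conduit/gfs/\1_\2z_f\3.grib
```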
> >>>
> >>> I assume the GRIB2 will also be available?  I also assume only the
> >>> paths change, and the individual grib message names will stay the same.
> >>>
> >>> Will the tgsv32.nws.noaa.gov still feed some sites?   I use
> >>> flood.atmos.uiuc.edu as a backup ... I realize that I'll have to have
> >>> both the old and the new pqact entries, but since the checksum should
> >>> still eliminate duplicates, that shouldn't be a problem.
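The duplicate elimination mentioned here works because the LDM keys each
product on its MD5 signature, so an identical product arriving from two
upstreams is stored only once. Conceptually (an illustrative Python
sketch, not actual LDM source code):

```python
import hashlib

def insert_product(queue_sigs, product_bytes):
    """Accept a product only if its MD5 signature is not already queued.

    Conceptual sketch of LDM duplicate rejection.  Returns True if the
    product was accepted, False if it was a duplicate.
    """
    sig = hashlib.md5(product_bytes).hexdigest()
    if sig in queue_sigs:
        return False  # identical product already received from another feed
    queue_sigs.add(sig)
    return True

# A product arriving from both the old and new upstream is stored once:
sigs = set()
insert_product(sigs, b"GRIB...message...")   # accepted -> True
insert_product(sigs, b"GRIB...message...")   # duplicate -> False
```

This is also why renamed products (as with the tgftp-to-ftpprd
transition) are *not* eliminated: a different product ID gives a
different signature even for the same underlying grib message.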
> >>>
> >>> Thanks,
> >>>
> >>> Jerry
> >>>
> >>>
> >>>
> >>> Steve Chiswell wrote:
> >>>> Pete and Jerry,
> >>>>
> >>>> I would like to ensure that Jerry is getting the best possible data
> >>>> delivery from CONDUIT. At the same time, we need to proceed with the
> >>>> upgrade of CONDUIT to the new NCEP ldmhost, which, aside from
> >>>> providing a redundant host at the source of data insertion to the
> >>>> LDM, will increase the amount of data available to sites. While this
> >>>> has been a time of vacation schedules for many people, we probably
> >>>> want to move forward if at all possible.
> >>>>
> >>>> The complicating factor is that the data set names will transition
> >>>> from the tgftp.nws.noaa.gov file names to the ftpprd.ncep.noaa.gov
> >>>> file names. If Jerry is willing to work with the new file names, I
> >>>> can get the new data turned on to his host immediately.  We can
> >>>> proceed with just the 1 degree data files, or the entire feed as
> >>>> desired.
> >>>>
> >>>>
> >>>> Steve Chiswell
> >>>> Unidata User Support
> >>>>
> >>>>
> >>>>
> >>>> On Tue, 2006-08-01 at 13:43 -0500, Pete Pokrandt wrote:
> >>>>> In a previous message to me, you wrote:
> >>>>>  >
> >>>>>  >Pete,
> >>>>>  >  Unfortunately, I only keep the data from the last couple of days,
> >>>>>  >so I can't check from over the weekend.  The user has told me that
> >>>>>  >they have missed data other than from the 6 UTC run....
> >>>>>  >
> >>>>>  >I was able to look at 7/31 ... no GFS messages were missed that day.
> >>>>>
> >>>>> Ok, keep an eye on things, let me know if reception improves now that
> >>>>> I have rescheduled the backup to try to fall in between gfs runs.
> >>>>>
> >>>>> Thanks,
> >>>>>
> >>>>> Pete
> >>>>>
> >>>>>  >
> >>>>>  >So far Today:
> >>>>>  >
> >>>>>  >RUN=00 DAY=20060801 NONE missing
> >>>>>  >RUN=06 DAY=20060801
> >>>>>  >20060801 run=06 forecast=045 expected=328 actual=327
> >>>>>  >20060801 run=06 forecast=048 expected=328 actual=234
> >>>>>  >20060801 run=06 forecast=051 expected=328 actual=236
> >>>>>  >20060801 run=06 forecast=054 expected=328 actual=294
> >>>>>  >20060801 run=06 forecast=057 expected=328 actual=290
> >>>>>  >20060801 run=06 forecast=060 expected=328 actual=297
> >>>>>  >20060801 run=06 forecast=063 expected=328 actual=311
> >>>>>  >20060801 run=06 forecast=066 expected=328 actual=326
> >>>>>  >20060801 run=06 forecast=069 expected=328 actual=327
> >>>>>  >RUN=12 DAY=20060801 NONE missing
> >>>>>  >
> >>>>>  >
> >>>>>  >
> >>>>>  >
> >>>>>  >Jerry
> >>>>>  >
> >>>>>  >
> >>>>>  >
> >>>>>  >
> >>>>>  >
> >>>>>  >Steve Chiswell wrote:
> >>>>>  >> Jerry,
> >>>>>  >>
> >>>>>  >> Can you send me your "request" lines related to CONDUIT for your
> >>>>>  >> ldm so that I can see if this might be related to an auto shifting
> >>>>>  >> problem.
> >>>>>  >> Note that motherlode is downstream of f5 at U. Wisc. so that
> >>>>>  >> might be an issue in what you are looking at.
> >>>>>  >>
> >>>>>  >> Steve Chiswell
> >>>>>  >> Unidata User Support
> >>>>>  >>
> >>>>>  >> On Tue, 2006-08-01 at 11:34, Jerrold Robaidek wrote:
> >>>>>  >>> Hi,
> >>>>>  >>>
> >>>>>  >>> Has anyone else noticed that over the last few weeks, some of
> >>>>>  >>> the CONDUIT 1 degree GFS data has been missed?  It's been
> >>>>>  >>> happening for a few weeks, but today was the first time we
> >>>>>  >>> compared our missing data to what motherlode received (or
> >>>>>  >>> missed).
> >>>>>  >>>
> >>>>>  >>> Here is an example:
> >>>>>  >>> The status file indicates that there should be 328 grib
> >>>>>  >>> messages for the 20060801 06 UTC run, the 48 hour forecast.
> >>>>>  >>>
> >>>>>  >>> Motherlode got 227
> >>>>>  >>> We (pepe) got 234
> >>>>>  >>>
> >>>>>  >>> I cannot pinpoint the exact day this began, but one of our
> >>>>>  >>> users has noticed intermittent missing data for at least a
> >>>>>  >>> couple of weeks.
> >>>>>  >>>
> >>>>>  >>> Thanks,
> >>>>>  >>>
> >>>>>  >>> Jerry
> >>>>>  >>>
> >>>>>  >
> >>>>>  >
> >>>>>  >--
> >>>>>  >Jerrold Robaidek                       Email: address@hidden
> >>>>>  >SSEC Data Center                       Phone: (608) 262-6025
> >>>>>  >University of Wisconsin                Fax: (608) 263-6738
> >>>>>  >Madison, Wisconsin
> >>>>>  >
> >>>>>  >=============================================================================
> >>>>>  >To unsubscribe conduit, visit:
> >>>>>  >http://www.unidata.ucar.edu/mailing-list-delete-form.html
> >>>>>  >=============================================================================
> >>>>>  >
> >>>>>
> >>>>>
> >>>>> --
> >>>>> +>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>+<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<+
> >>>>> ^ Pete Pokrandt                    V 1447  AOSS Bldg  1225 W Dayton St^
> >>>>> ^ Systems Programmer               V Madison,         WI 53706        ^
> >>>>> ^                                  V address@hidden                   ^
> >>>>> ^ Dept of Atmos & Oceanic Sciences V (608) 262-3086 (Phone/voicemail) ^
> >>>>> ^ University of Wisconsin-Madison  V       262-0166 (Fax)             ^
> >>>>> <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<+>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>+
> >>>
> >
> >
> 
> 
> --
> Jerrold Robaidek                       Email:  address@hidden
> SSEC Data Center                       Phone: (608) 262-6025
> University of Wisconsin                Fax: (608) 263-6738
> Madison, Wisconsin
> 
> 


Ticket Details
===================
Ticket ID: YFQ-379002
Department: Support CONDUIT
Priority: High
Status: Closed