
Re: CONDUIT feed timing




Thanks Steve,

I'll check out the products in the queue ...

FYI ... the GFS came in on time again today (almost an hour earlier than what we had been seeing over the past few weeks).





Jerry



Steve Chiswell wrote:
Jerry,

I note that dcarchive.ssec.wisc.edu has redundant feed requests:
http://my.unidata.ucar.edu/cgi-bin/rtstats/iddstats_topo_nc?CONDUIT+dcarchive.ssec.wisc.edu
so, if the product were scoured out of your queue, you could receive it
again from the other route with the same checksum.
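
For illustration, redundant requests in ldmd.conf look something like
this (the upstream hostnames here are just placeholders):

  # two REQUEST lines for the same feed set up redundant paths; a
  # product arriving on the second path is normally rejected as a
  # duplicate while the first copy is still in the queue
  REQUEST CONDUIT ".*" conduit1.example.edu
  REQUEST CONDUIT ".*" conduit2.example.edu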

The pqmon output will show the age of the oldest product in your
queue (use the -i option to output at periodic intervals). If your
queue is too small to hold products for longer than your normal
latency, then you might receive a product twice. E.g., if pqmon
showed that your oldest product was 1800 seconds old, and you had a
latency greater than 1800 seconds, then you could accept the product
when it was offered from the other upstream host (up to your
rpc.ldmd maxtime, e.g., 3600 seconds by default, which is more
strictly enforced in LDM 6.1).
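
For example, something like this will report the queue state at
60-second intervals (the queue path below is just a placeholder; use
the path to your own queue):

  # report product-queue statistics every 60 seconds; the age of the
  # oldest product in the queue is among the values printed
  pqmon -i 60 -q /usr/local/ldm/data/ldm.pq

If that oldest-product age stays below your observed latencies, the
queue is too small and duplicates can get accepted.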

Steve Chiswell
Unidata User Support


On Wed, 2005-01-05 at 14:04, Jerrold Robaidek wrote:

Hi Steve,

We did receive the 18Z GFS 1-degree GRIB files in a more timely fashion (more than 30 minutes earlier than in the recent past).

However, we did notice that some duplicate GRIB messages were received.



Here is a portion of the wgrib output:

563:46852448:d=05010412:SPFH:kpds5=51:kpds6=105:kpds7=2:TR=10:P1=0:P2=0:TimeU=1:2 m above gnd:anl:NAve=0
564:46950272:d=05010412:SPFH:kpds5=51:kpds6=105:kpds7=2:TR=10:P1=0:P2=0:TimeU=1:2 m above gnd:anl:NAve=0
565:47048096:d=05010412:PRES:kpds5=1:kpds6=243:kpds7=0:TR=10:P1=0:P2=0:TimeU=1:convect-cld top:anl:NAve=0
566:47069774:d=05010412:PRES:kpds5=1:kpds6=243:kpds7=0:TR=10:P1=0:P2=0:TimeU=1:convect-cld top:anl:NAve=0
567:47091452:d=05010412:5WAVA:kpds5=230:kpds6=100:kpds7=500:TR=10:P1=0:P2=0:TimeU=1:500 mb:anl:NAve=0
568:47197420:d=05010412:5WAVA:kpds5=230:kpds6=100:kpds7=500:TR=10:P1=0:P2=0:TimeU=1:500 mb:anl:NAve=0

note: 567 and 568 have the same checksum

dcarchive{oper}:cksum 567.out
3643681194 105968 567.out
dcarchive{oper}:cksum 568.out
3643681194 105968 568.out
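
(For reference, the two records were dumped to files with wgrib along
these lines; the GRIB file name is just a placeholder:)

  # extract records 567 and 568 as separate GRIB files
  wgrib gfs.grb -d 567 -grib -o 567.out
  wgrib gfs.grb -d 568 -grib -o 568.out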


Shouldn't one of these have been rejected as a duplicate?

Jerry

Steve Chiswell wrote:

Jerry,

Yesterday at 1830Z we implemented a parallel queueing scheme at the
NWS that we hope will improve the timeliness of data being ingested
into the CONDUIT data stream. Any feedback you can provide on how
this affects your reception would be greatly appreciated.

Since data will be inserted in parallel, you will notice that
multiple model runs and forecast times will probably be interspersed
where previously they had been serialized.

I watched the 00Z GFS last night. The posting gap between f084 and
f132 was matched on the FTP server at 0422Z, and later, at 0509Z,
the other grids were posted to the NWS servers, so all appears to be
behaving correctly on this end.

Steve Chiswell
Unidata User Support



--
Jerrold Robaidek                       Email:  address@hidden
SSEC Data Center                       Phone: (608) 262-6025
University of Wisconsin                Fax: (608) 263-6738
Madison, Wisconsin