
20050510: CONDUIT feed / GFS 00Z f000



Daryl,

Your laptop address was either strange or not on the list, so I think that is 
why it bounced.

Anyhow, you can see what we received here for the 00Z GFS:
http://motherlode.ucar.edu/cgi-bin/ldm/genweb?raw/conduit/SL.us008001/ST.opnl/MT.gfs_CY.00/RD.20050510/PT.grid_DF.gr1
The above directory totals 23553530 bytes. The .status file for the f000 also 
shows the same size:

/afs/.nwstg.nws.noaa.gov/ftp/SL.us008001/ST.opnl/MT.gfs_CY.00/RD.20050510/PT.grid_DF.gr1/
   fh.0000_tl.press_gr.onedeg complete (23553530 bytes) at Tue May 10 03:21:40 
2005
   Inserted 23553530 of 23553530


I have a status generator that compares the directory contents against what the 
.status files in the data stream report as inserted:
http://motherlode.ucar.edu/cgi-bin/ldm/statusgen?SL.us008001/ST.opnl/MT.gfs_CY.00/RD.20050510/PT.grid_DF.gr1/fh.0000_tl.press_gr.onedeg
That shows we got everything.

(see: http://motherlode.ucar.edu/cgi-bin/ldm/conduit_reception.csh for the top 
level)

So the data are in the system, and the problem is not at the source. The next 
thing to look at is your volume plots compared with ours:

Motherlode:
http://my.unidata.ucar.edu/cgi-bin/rtstats/iddstats_vol_nc1?CONDUIT+motherlode.ucar.edu+-b%2086400

Uni4 (which you are connected to through the idd.unidata.ucar.edu alias):
http://my.unidata.ucar.edu/cgi-bin/rtstats/iddstats_vol_nc1?CONDUIT+uni4.unidata.ucar.edu+-b%2086400

The above two hosts both show a consistent 30 GB per day in the CONDUIT stream.

Pircsl4:
http://my.unidata.ucar.edu/cgi-bin/rtstats/iddstats_vol_nc1?CONDUIT+pircsl4.agron.iastate.edu+-b%2086400

The above shows that Pircsl4 is only getting about 18 GB per day, and is very 
consistent in that amount, so this is likely not temporary network trouble, and 
there is nothing new in the pattern; the feed looks well behaved. I see your 
request pattern is:
(RUC2/#252 |MT.(avn|gfs).*DF.gr1)
That would account for the volume being less than what we have to send you!
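For illustration, here is how that request pattern behaves against product IDs 
shaped like the paths above. LDM request patterns are extended regular 
expressions, so the pattern is used verbatim below; the two sample IDs are 
hypothetical, modeled on the directory names earlier in this message:

```python
import re

# The LDM request pattern quoted above, used verbatim.
# Note the unescaped dots match any character, per regex rules.
pattern = re.compile(r"(RUC2/#252 |MT.(avn|gfs).*DF.gr1)")

# Hypothetical product-ID fragments based on the paths above.
gfs_id = "SL.us008001/ST.opnl/MT.gfs_CY.00/RD.20050510/PT.grid_DF.gr1/fh.0000_tl.press_gr.onedeg"
eta_id = "SL.us008001/ST.opnl/MT.eta_CY.00/RD.20050510/PT.grid_DF.gr1/fh.0000"

print("GFS matches:", bool(pattern.search(gfs_id)))  # True
print("Eta matches:", bool(pattern.search(eta_id)))  # False
```

So the pattern requests the GFS and AVN GRIB1 grids (plus RUC2 #252) but not 
the rest of the CONDUIT stream, which is consistent with the lower volume.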

The pattern should be fine for the grid #003 f000, and your latency is 
negligible:
http://my.unidata.ucar.edu/cgi-bin/rtstats/iddstats_nc?CONDUIT+pircsl4.agron.iastate.edu

That means the data are getting to you in good time. So if you are not seeing 
all the data on your disk, then your pqact process is likely falling behind, 
and the data are getting scoured out of your queue before you process them. If 
necessary, I can send you some code for your pq.c routine that will flag when 
pqact falls behind.
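This is not the pq.c patch mentioned above, but the idea it implements can be 
sketched on its own: compare when a product was inserted into the queue with 
when pqact actually processed it, and flag when the gap grows large. The 
function name and the 300-second threshold below are illustrative assumptions, 
not LDM code:

```python
# Sketch of the lag-detection idea (illustrative only).
# insert_time / process_time are hypothetical Unix timestamps in seconds.
def pqact_lag_warning(insert_time, process_time, max_lag=300):
    """Return True if pqact processed a product more than max_lag
    seconds after it entered the LDM queue."""
    lag = process_time - insert_time
    return lag > max_lag

# A product inserted at t=1000 but processed at t=1400 lags by 400 s
# and would be flagged; a 100 s lag would not.
print(pqact_lag_warning(1000, 1400))  # True
print(pqact_lag_warning(1000, 1100))  # False
```

If the flag trips around 00Z, that points at pqact (or the I/O behind it) 
rather than the network.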

So... things to look for:
Are you running multiple pqact processes, or a single one? Splitting up the 
pqacts shares the load so they can process faster. Are you also decoding the 
data in that pqact.conf file, or just FILEing the GRIBs as shown below? Since 
this is the 00Z f000 time, your system might be particularly busy around then, 
making I/O the bottleneck (check decoding, scouring, archiving, backups, disk 
space, etc.). If nothing else, try creating a separate pqact.conf just for the 
CONDUIT "FILE" actions and launch that separate pqact from ~ldm/etc/ldmd.conf.
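As a rough sketch of that last suggestion: the file name pqact.conduit is a 
placeholder, and you should verify the -f feedtype option against the pqact 
man page for your LDM version before relying on it:

```
# In ~ldm/etc/ldmd.conf: run a second pqact dedicated to CONDUIT.
# etc/pqact.conduit would hold only the CONDUIT "FILE" actions.
EXEC    "pqact -f CONDUIT etc/pqact.conduit"

# The existing entry keeps handling everything else:
EXEC    "pqact etc/pqact.conf"
```

That way the CONDUIT filing is not queued behind decoders for other feeds.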

Let me know if you have any questions.

Steve Chiswell
Unidata User Support



>From: address@hidden
>Organization: UCAR/Unidata
>Keywords: 200505101509.j4AF9ilH002981

>From address@hidden Tue May 10 09:09:44 2005
>Received: from mailhub-5.iastate.edu (mailhub-5.iastate.edu [129.186.140.15])
>       by unidata.ucar.edu (UCAR/Unidata) with ESMTP id j4AF9hP3002974
>       for <address@hidden>; Tue, 10 May 2005 09:09:43 -0600 (MDT)
>Organization: UCAR/Unidata
>Keywords: 200505101509.j4AF9hP3002974
>Received: from mailout-1.iastate.edu (mailout-1.iastate.edu [129.186.140.1])
>       by mailhub-5.iastate.edu (8.12.10/8.12.10) with SMTP id j4AF9c0M001150
>       for <address@hidden>; Tue, 10 May 2005 10:09:38 -0500
>Received: from akrherz-laptop.agron.iastate.edu(129.186.21.61) by mailout-1.ia
> state.edu via csmap 
>        id 38b7d1c2_c166_11d9_8dfe_00304811d932_18850;
>       Tue, 10 May 2005 10:14:36 -0500 (CDT)
>Date: Tue, 10 May 2005 10:09:36 -0500 (CDT)
>From: Daryl Herzmann <address@hidden>
>X-X-Sender: address@hidden
>To: address@hidden
>Subject: Troubles with 0z GFS
>Message-ID: <address@hidden
> >
>MIME-Version: 1.0
>Content-Type: TEXT/PLAIN; charset=US-ASCII; format=flowed
>
>Hi,
>
>I have been noticing recent troubles with delivery of the 0Z GFS data over 
>Conduit.  I have been trying to debug things here, but haven't been able 
>to discover the problem.  Has anybody else seen issues?
>
>For example, the 0z run last night
>
>$ ls -h -s gfs_2005051000f0000.grib
>  18M gfs_2005051000f0000.grib
>
>Another user here has noted that the F000 file is almost always incomplete 
>(again, 0z runs only).
>
>hopefully I am not alone experiencing these issues...
>
>later,
>   daryl
>
>-- 
>/**
>  * Daryl Herzmann (address@hidden)
>  * Program Assistant -- Iowa Environmental Mesonet
>  * http://mesonet.agron.iastate.edu
>  */
>
--
NOTE: All email exchanges with Unidata User Support are recorded in the
Unidata inquiry tracking system and then made publicly available
through the web.  If you do not want to have your interactions made
available in this way, you must let us know in each email you send to us.

