
[LDM #NGQ-500529]: ldm questions


> We recently upgraded our AWIPS to OB9.1, and since that time we have been 
> having problems getting our GFS 0.5-degree grid data from our MIDDS/McIDAS 
> ldm to the ldm on ldad. Actually, it's not that the data isn't getting through; the 
> problem is that data past roughly the 75-hour forecast doesn't get through to 
> the AWIPS side.  We use wgrib2 to pare down the data on the MIDDS side to the 
> point that each forecast-hour file inserted into the queue is approximately 
> 8 MB.
> While "watching" the ldm on the ldad side, I've noticed that for the first 
> forecast-hour file, the insert time will be about 3 to 3.5 minutes before the 
> processed time on the left. As each forecast-hour file comes in, this time 
> difference steadily grows. Right about the time the 75-hour files are 
> getting to the ldad side, this time difference is approaching 1 hour. From 
> my observations, when the time difference is greater than 1 hour I will not 
> see the files get into the ldad/AWIPS workstations.  One thing to note is 
> that it's not as if these larger files are creating a bottleneck on the ldad 
> side either; other files we send are getting through with very little, if any, 
> delay. Ideas?

It sounds like your queue is too small for what you're trying to do.  
Apparently, you can insert new files into the MIDDS queue faster than you can 
process the files on Ldad.  This is why the time difference increases.  
Possible solutions include increasing the size of the Ldad queue, slowing down 
the insertion rate, and speeding up the processing on MIDDS.
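If you go the route of enlarging the queue, the usual procedure is roughly the following. This is a sketch: the ldmadmin subcommand names and the $pq_size variable in ldmadmin-pl.conf may vary slightly between LDM versions, so check against your installation.

```shell
# Sketch: enlarging the LDM product-queue (run as the LDM user).
# Assumes the queue size is set via $pq_size in ldmadmin-pl.conf.

ldmadmin stop        # stop the LDM so the queue isn't in use
# edit ldmadmin-pl.conf and set, e.g.:  $pq_size = "4G";
ldmadmin delqueue    # delete the old product-queue
ldmadmin mkqueue     # create a new queue at the configured size
ldmadmin start       # restart the LDM
```

Deleting and recreating the queue discards any products still in it, so do this between model runs if possible.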

You can send a USR2 signal to the pqact(1) process on MIDDS that's processing 
the files to have it log when it starts processing each file.  A second such 
signal will put the process into debug-level logging, and a third will 
return it to normal logging (this is true of almost all LDM programs).

> Also, while watching this yesterday I did an "ls -l" on the ldm.pq file, and 
> it indicated it was ~2.6 GB. This was the indicated size prior to the 12Z 
> model run. After watching the 12Z data, I looked at the file again and it 
> indicated it was ~3.9 GB.  The default queue size defined in ldmadmin-pl.conf 
> at the time was set to "1G".

That is very odd.  The product-queue isn't designed to grow and we've never 
seen that happen.

Are your product-queues on disks that are physically local to their computers?

> Late yesterday I modified this value to "2G" and rebuilt the queue. This 
> morning "ls -l" already indicates the size of ldm.pq is 2705333752 (2.7G). 
> The queue should not be growing in size like this, should it?

Absolutely not.  Is the product-queue on some sort of RAID?
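To compare the configured size against what's actually on disk, something like the following sketch can help (the queue path and the "2G" spec are placeholders; substitute your own values):

```shell
# Sketch: compare the configured queue size with the file on disk.
# "/path/to/ldm.pq" and "2G" are placeholders -- substitute your values.

size_to_bytes() {
    # Convert an ldmadmin-pl.conf style size spec ("2G", "500M", or plain bytes).
    case "$1" in
        *G) echo $(( ${1%G} * 1024 * 1024 * 1024 )) ;;
        *M) echo $(( ${1%M} * 1024 * 1024 )) ;;
        *)  echo "$1" ;;
    esac
}

configured=$(size_to_bytes "2G")
actual=$(stat -c %s /path/to/ldm.pq 2>/dev/null || echo 0)
echo "configured: $configured bytes; on disk: $actual bytes"
```

Note that "ls -l" reports the apparent size; "du" reports blocks actually allocated, and the two can differ on sparse files, so comparing both can tell you whether the file has genuinely grown.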

> Coincidentally, with the larger queue this morning I can see with pqcat the 18Z 
> files from yesterday, as well as the 00Z and 06Z files from today, and all end 
> at the 75-hour forecast.
> As for particulars, we are running LDM 6.6.5 on both sides.

Is the MIDDS computer 32-bit or 64-bit?  If 32-bit, does it support large files?
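Both can be checked quickly; as a sketch (run the getconf check against whatever filesystem holds ldm.pq — /tmp below is just an example path):

```shell
# Sketch: check word size and large-file support on the MIDDS host.
uname -m                    # x86_64 => 64-bit; i386/i686 => 32-bit
getconf FILESIZEBITS /tmp   # 64 means files larger than 2 GiB are supported
                            # on this filesystem; run it against the
                            # filesystem that holds ldm.pq
```

On a 32-bit system without large-file support, a queue approaching 2 GiB would be a problem in itself.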

> Paul Wahner
> Spaceflight Meteorology Group
> Johnson Space Center
> Houston, TX
> 281-244-6409

Steve Emmerson

Ticket Details
Ticket ID: NGQ-500529
Department: Support LDM
Priority: Normal
Status: Closed

NOTE: All email exchanges with Unidata User Support are recorded in the Unidata inquiry tracking system and then made publicly available through the web. If you do not want to have your interactions made available in this way, you must let us know in each email you send to us.