
[CONDUIT #EKQ-308585]: GFS data problem



Hi,

re:
> Sorry about the delay.

No worries.

re:
> Here’s the info you requested:
> 
> ldmadmin config
> queue size:            500M bytes
> 
> pqmon
> 20210825T175422.731354Z pqmon[125697] pqmon.c:main:314    NOTE  Starting Up (125697)
> 20210825T175422.731494Z pqmon[125697] pqmon.c:main:355    NOTE  nprods nfree  nempty      nbytes  maxprods  maxfree  minempty    maxext  age
> 20210825T175422.731553Z pqmon[125697] pqmon.c:main:463    NOTE    2335    19    7449   499690752      9802       69         0     46520 30
> 20210825T175422.731572Z pqmon[125697] pqmon.c:cleanup:77  NOTE  Exiting

The 'pqmon' snapshot you sent shows that the oldest product in your
queue is only 30 seconds old (the last column, 'age') and that the
queue is essentially full ('nbytes' is right at your 500M limit).
This indicates that your LDM queue is _much_ too small for the volume
of data you are REQUESTing.  It is likely that CONDUIT products you
did, in fact, receive were deleted from your LDM queue before they
were processed by your 'pqact' action(s).

How much you can increase your LDM queue size depends on how much
physical memory (RAM) is installed in your machine, what kind of data
processing you are doing on the machine, and the amount of free disk
space in the partition where your LDM queue lives.  Whatever those
numbers are, we recommend increasing your LDM queue size to at least
2 GB, and preferably 4 or 8 GB, if you have enough memory.
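
For the disk-space part of that calculation, something like the
following should do (the path below is the usual default queue
location, ~ldm/var/queues/ldm.pq; substitute the 'path' value from
the <queue> section of your registry if yours is different):

  # free space in the partition that holds the LDM queue
  df -h ~ldm/var/queues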

Question:

- how much memory does your machine have?

  The easiest way to see this is to run:

  cat /proc/meminfo
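
  If you just want the total, the MemTotal line is the one to look
  at; either of these standard Linux commands will show it:

  grep MemTotal /proc/meminfo
  free -h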

As a rule of thumb, we would recommend the following _minimum_ LDM
queue sizes:

RAM     queue size   comment
-------+------------+------------------------------------
4 GB    1 GB         this really is still WAY too small!
8 GB    2 GB         we would consider this marginal
16 GB   8 GB

As for the effect of a large enough queue on the "incomplete GFS"
data: increasing the queue so that the product residency time is
"long enough" (this is not a hard and fast number!) will reduce or
eliminate products being scoured out of the queue before they can be
processed by your 'pqact' action(s).  It will not fix the situation
where the full set of GFS products is never put into the LDM queue at
the source, which is NOAA/NCEP.
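
One way to tell those two cases apart is to ask the upstream host
what it actually has available; the host name below is only a
placeholder for whichever upstream you REQUEST CONDUIT from:

  # list CONDUIT products the upstream received in the last hour
  # without inserting them into your local queue
  notifyme -vl- -h idd.unidata.ucar.edu -f CONDUIT -o 3600

If a GFS product shows up in the 'notifyme' listing but never makes
it through your 'pqact' action(s), a too-small queue on your end is
the more likely culprit; if it never shows up at all, the gap is
upstream of you.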

re:
> Searching log files for ‘processed’ and ‘oldest’ did not produced any results.

OK.  The 30-second age of the oldest product in your queue is WAY too
short.  We try to make sure that the residency time in the LDM queues
on our data server machines is a sizable fraction of an hour (3600
seconds); by sizable, we mean more than half an hour as a minimum.
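
You can keep an eye on the residency time yourself; a simple loop
like this (just a sketch) prints the pqmon statistics, including the
'age' column, once a minute:

  # watch the age of the oldest product in the queue; stop with Ctrl-C
  while true; do
      pqmon
      sleep 60
  done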

The procedure for changing your LDM queue size is:

- edit the LDM registry, ~ldm/etc/registry.xml, changing

  <size>500M</size>

  in the <queue> section to:

  <size>1G</size>

  Here I am just using '1G' as an example; we're hoping you can make your
  queue at least 2 GB and preferably 4 or 8 GB.

- stop the ldm and delete the current queue

  ldmadmin stop
  ldmadmin delqueue

- create the new queue and start the LDM

  ldmadmin mkqueue    - this may take a little time so be patient
  ldmadmin start
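
- once the LDM is back up, confirm that the change took effect

  ldmadmin config     - should report the new queue size
  pqmon               - the 'age' column should climb well past 30 seconds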

Cheers,

Tom
--
****************************************************************************
Unidata User Support                                    UCAR Unidata Program
(303) 497-8642                                                 P.O. Box 3000
address@hidden                                   Boulder, CO 80307
----------------------------------------------------------------------------
Unidata HomePage                       http://www.unidata.ucar.edu
****************************************************************************


Ticket Details
===================
Ticket ID: EKQ-308585
Department: Support CONDUIT
Priority: Normal
Status: Closed
===================
NOTE: All email exchanges with Unidata User Support are recorded in the Unidata 
inquiry tracking system and then made publicly available through the web.  If 
you do not want to have your interactions made available in this way, you must 
let us know in each email you send to us.