
20050817: product queue size for leaf nodes



>From: Paul Prestopnik <address@hidden>
>Organization: NCAR/RAL
>Keywords: 200508171522.j7HFMajo004459 LDM queue

Hi Paul,

>We are in the planning stages for our network topology for our reworked
>LDM network.  It seems like for leaf nodes (computers with no one
>downstream from them, i.e., no 'allow' lines in their ldmd.conf) the
>product queue size is largely irrelevant.

Correct.

>Is there any reason why a 
>computer with no one downstream would need to have a product queue of more 
>than a few minutes?

No, there is not.  The "few minutes", however, must reflect how fast
the pqact.conf actions are completed in the aggregate.  If you are doing
a lot of processing of the data received, and if it takes your pqact(s)
a "significant" amount of time to do that processing, you must make sure
that your queue is large enough to hold all of the data that is still to
be processed before it gets scoured by the arrival of new data.
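As a purely illustrative back-of-the-envelope check (the numbers here
are assumptions, not measurements of your feeds): if a leaf node
receives roughly 5 MB of products per minute and the slowest pqact
actions can lag the feed by up to 20 minutes, the queue has to hold at
least 5 MB/min x 20 min = 100 MB, plus some headroom, or products will
be overwritten before pqact gets to them.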

Our observation is that a reasonable minimum size for a queue is 400
MB.  This may be overkill in a number of cases, but it gives sites that
have not tuned their systems room to cope with that inefficiency.  BTW,
400 MB is the default queue size specified in the
~ldm/etc/ldmadmin-pl.conf file.
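If memory serves, that default lives in a Perl variable in
ldmadmin-pl.conf; the exact name may differ between LDM versions, so
treat the following only as a sketch of where to look:

  # in ~ldm/etc/ldmadmin-pl.conf (variable name may vary by LDM version)
  $pq_size = "400M";    # size of the product queue

After changing the size you would stop the LDM, delete and recreate the
queue (ldmadmin delqueue; ldmadmin mkqueue), and then restart.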

>We may be forced to use computers with a relatively low amount of RAM for 
>some of our leaf nodes (2-4GB RAM).  These computers will process the data 
>via pqact, but will not serve it to anyone else.  It seems like it would be 
>better to simply make the queue large enough to fit entirely in memory, 
>rather than being concerned with keeping an hour of data, right?

You are absolutely correct.  A leaf node's configuration needs are quite
different from a relay node's.

As an aside: one does not necessarily need to be as concerned about the
queue being larger than real memory in the leaf node case.  This comment
_assumes_, however, that the processing of data out of the queue is
being done fast enough that the new data is always at the "front" of
the queue.  If that is the case, then having a large queue does not
matter in two senses:  first, the new data only occupies a small
portion of the queue (so, as you note above, the queue could be
smaller); second, the OS is not likely to be swapping the "new" section
of the queue to disk.
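One rough way to sanity-check that assumption on a Linux leaf node
(assuming vmstat is available; column names vary slightly by OS) is to
watch swap activity while pqact is busy:

  vmstat 5
  # sustained non-zero values in the 'si'/'so' (swap-in/swap-out)
  # columns while the LDM is processing data suggest that the active
  # portion of the queue is not staying resident in memory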

Last comment:  while you are reconfiguring your setup, please use
pqmon to see whether your queue is efficiently set up with respect to
the number of slots versus its size.  This tuning step is one that most
sites skip, but it is worth the effort to make things run efficiently.
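As a sketch of what that looks like (run as the LDM user; consult
pqmon(1) on your system for the exact fields your version prints):

  pqmon
  # watch the counts of used/free slots against used/free bytes over
  # time; if the queue keeps running out of slots while plenty of bytes
  # remain free, recreate it with more slots, and vice versa

pqcreate's slot-count option (or the corresponding ldmadmin-pl.conf
setting, if your version has one) is the knob to adjust once you know
which resource is the bottleneck.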

>Thanks,

No worries.

Cheers,

Tom
--
NOTE: All email exchanges with Unidata User Support are recorded in the
Unidata inquiry tracking system and then made publicly available
through the web.  If you do not want to have your interactions made
available in this way, you must let us know in each email you send to us.