
20050810: ldm pqact sees product twice?



>From: Michael McEniry <address@hidden>
>Organization: UAH
>Keywords: 200508101508.j7AF88jo023528 LDM pqact pqinsert

Michael and Matt,

re:
>We had an odd occurrence with pqact. Here are the relevant 
>log entries:
>
>> * Aug 08 07:18:24 ldm pqact[1845]: 46004110 20050808071819.946     EXP 000
>> UAH-NAMAW12-05080806-05080809 
>
>> * Aug 08 07:32:14 ldm pqact[1845]: 46004110 20050808071819.946     EXP 000
>> UAH-NAMAW12-05080806-05080809 
>
>It looks like pqact "noticed" the same product twice. Since 
>each log entry has the same pqact process ID, I assume pqact 
>was not restarted in between entries.
>
>We're running LDM 6.1.0, and using pqinsert (as a user other 
>than ldm, but in group ldm) to add files to the local 
>product queue. The product was inserted exactly once, as far 
>as we can tell, especially since the entries show the same 
>insertion timestamp.
>
>What might cause this odd behavior? We haven't noticed it 
>happen any other time.

Steve Emmerson, our LDM developer, will get back to you with an
authoritative answer on this when he returns from travel.

In order to help us troubleshoot this unusual occurrence, please provide
us with details on the machine running the LDM: operating system (with
rev level), compilers used to build the LDM, etc.  Also, please tell us
how large your LDM queue is, roughly how large the products you are
inserting are, and how frequently you insert them.  Finally, please
include the output of pqmon.

One thought that comes to mind is that the product is, in fact, being
inserted into the queue more than once, and the second (third, etc.)
insertion is being made after the original product has been scoured
from the queue.  In a case like this, one would expect pqact to act on
the product again, since it would look new/unique as far as the LDM is
concerned.
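
To illustrate the idea, here is a small conceptual sketch in Python.  It
is not the LDM's actual code; it only mimics the duplicate check that
pqinsert performs against the product signatures currently in the queue:

    import hashlib

    # Conceptual sketch only -- not LDM source code.  The LDM identifies
    # a product by its signature (an MD5 checksum of the data), and a
    # product is treated as a duplicate only while a product with that
    # signature is still resident in the queue.
    queue_signatures = set()          # stands in for the product queue

    def insert_product(data: bytes) -> bool:
        """Return True if the product is accepted (and pqact would fire)."""
        sig = hashlib.md5(data).hexdigest()
        if sig in queue_signatures:   # duplicate: rejected, pqact not run
            return False
        queue_signatures.add(sig)     # accepted: pqact processes it
        return True

    product = b"UAH-NAMAW12-05080806-05080809 payload"
    print(insert_product(product))    # True  -> pqact acts on it
    print(insert_product(product))    # False -> duplicate while still queued
    queue_signatures.clear()          # queue scoured to make room
    print(insert_product(product))    # True  -> looks new; pqact acts again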

>From address@hidden  Wed Aug 10 09:33:15 2005

>... I need to correct the last statement from Michael...  The situation
>described happens a lot - dozens of times a day.

>Matt Smith 
>Information Technology & Systems Center 
>University of Alabama in Huntsville 
>at the National Space Science and Technology Center 
>320 Sparkman Dr. 
>Huntsville, AL 35805 
>ph 256-961-7809     fax 256-961-7859 
>e-mail address@hidden 

If this is one of the machines participating in the SURA SCOOP project,
then I am under the impression that the products being inserted into
the queue are large.  If the products are large relative to the size of
the queue, then one can expect previously inserted products to be
scoured to make room for new ones.  As soon as a product has been
flushed from the queue, a reinsertion of the same file will not be
blocked, since there will no longer be a product in the queue with the
same MD5 checksum.
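
As a rough illustration of how quickly a small queue can turn over (the
numbers below are made-up placeholders, not your actual figures -- those
are what we asked for above), a back-of-the-envelope estimate looks like
this:

    # Assumed, hypothetical numbers for illustration only.
    queue_bytes     = 400 * 1024**2      # e.g. a 400 MB product queue
    product_bytes   = 50 * 1024**2       # e.g. 50 MB per inserted file
    inserts_per_hr  = 12                 # e.g. one insertion every 5 min

    bytes_per_hour  = product_bytes * inserts_per_hr
    residency_hours = queue_bytes / bytes_per_hour
    print(f"Oldest product is scoured after roughly {residency_hours:.1f} h")
    # If the same file is re-inserted after that interval, its MD5
    # signature is no longer in the queue, so the insertion succeeds and
    # pqact acts on it a second time.

With the placeholder numbers above, a product survives in the queue for
well under an hour, so re-insertions dozens of times a day would not be
surprising.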

Cheers,

Tom
--
NOTE: All email exchanges with Unidata User Support are recorded in the
Unidata inquiry tracking system and then made publicly available
through the web.  If you do not want to have your interactions made
available in this way, you must let us know in each email you send to us.