
Re[4]: LDM write errors (fwd)




===============================================================================
Robb Kambic                                Unidata Program Center
Software Engineer III                      Univ. Corp for Atmospheric Research
address@hidden             WWW: http://www.unidata.ucar.edu/
===============================================================================

---------- Forwarded message ----------
Date: 21 Mar 2000 15:10:20 -0500
From: Ken Waters <address@hidden>
To: address@hidden
Subject: Re[4]: LDM write errors


     Robb,
     
     I'm just now looking at what you sent me (the process1 script).  I 
     understand what you're doing...I just have to study the code a little 
     more to figure out how you're doing it.  
     
     I will do as you say...copy the script into the LDM directory and run 
     it interactively with the changes you note and let you know what the 
     results are.
     
     Thanks much.
     
     Ken


______________________________ Reply Separator _________________________________
Subject: Re: Re[2]: LDM write errors
Author:  address@hidden at EXTERNAL
Date:    3/1/2000 6:00 PM


Ken,
     
I did a preliminary mod to your process script and called it process1.
Here's how I would run it:
     
AFOS     ^(...)(...)(...|..)
     PIPE      /home/ldm/process1
     
Notice there are no parameters, so only one copy of the script runs at a 
time.  I put the keyword Ken in the script to mark the major changes in 
the code.  Here's what I would do: get a raw data file and execute 
process1 in debug mode, i.e.
     
% perl -d process1 <  <rawAFOSfile>
     
You should be able to modify the lines to extract the headers once you see 
them in the debugger.  Make the changes to get the header info and change 
$ARGV[0] and $ARGV[1] to the proper variable names.  You will also have 
the whole product available to write out to a file, so you don't need
to make any tmp files.  The script will wait 20 minutes before exiting, so 
there will be no conflicts on STDIN.
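As a rough sketch of that single-decoder pattern (this is not the attached process1 — the ETX framing, the header split, and the output directory are assumptions for illustration):

```perl
#!/usr/bin/perl
# Hypothetical sketch of one persistent decoder: read every product
# from STDIN, pull the AFOS id out of the header, and write the whole
# product straight to a file -- no tmp files, no per-product spawn.
use strict;
use warnings;

# Split a header line into (category, id, extension), mirroring the
# pqact pattern ^(...)(...)(...|..).  Returns an empty list if the
# line is too short to match.
sub parse_afos_id {
    my ($header) = @_;
    return unless defined $header && $header =~ /^(...)(...)(...|..)/;
    return ($1, $2, $3);
}

unless (caller) {
    local $/ = "\003";             # assume ETX-terminated text products
    while (my $product = <STDIN>) {
        $product =~ s/^\001\s*//;  # strip a leading SOH, if present
        my ($first_line) = split /\n/, $product;
        my ($cat, $id, $ext) = parse_afos_id($first_line);
        next unless defined $cat;
        # The whole product is already in memory; write it out directly.
        my $path = "/home/ldm/data/$cat$id$ext.new";   # assumed layout
        open my $out, '>', $path or next;
        print {$out} $product;
        close $out;
    }
}
1;
```

Since the one decoder owns STDIN for its whole lifetime, the "competing for STDIN" problem from multiple script instances goes away.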
     
I attached the file process1.
     
This explanation is what I meant by the comment "Another solution" below.
     
     
Robb...
On 1 Mar 2000, Ken Waters wrote:
     
>
>      Robb,
>
>      Thanks for the replies.  My comments are below. 
>
>      Ken
>
>
> ______________________________ Reply Separator _________________________________
> Subject: Re: LDM write errors
> Author:  address@hidden at EXTERNAL 
> Date:    3/1/2000 12:52 PM
>
>
>
> >There are a couple of things you might want to change to help.  First, the 
> >above errors are caused by the PIPE action trying to write to the process 
> >decoder.  Inspecting the process decoder, I didn't find any <STDIN>
> >filehandle reads; the LDM is writing to the decoder, but the
> >process decoder is not reading anything.  The PIPE action should be changed 
> >to an EXEC action, or the process decoder should be made to read the <STDIN> input.
>
>      I tried changing the action to an EXEC and the result was a lot of 
>      errors like this:
>
>         pqact[1063]: child 1310 exited with status 127 
>
>      and no action taken.  I don't know what was going wrong here, but I agree
>      it makes more sense for it to be an 'EXEC' versus a 'PIPE', since I'm not
>      using the <STDIN>.
>
>
>      I also had been using <STDIN> input, but you'll recall that that
>      failed miserably for me because the various instances of the script 
>      were having trouble "competing" for the <STDIN>.  By changing to a
>      file write in LDM and then a file read in the script this problem went 
>      away.
>
>
> >From your pqact.conf file:
>
> ># Run the process script to do the proper actions for this new message 
>
> >AFOS    ^(...)(...)(...|..)
>         PIPE    -strip /home/ldm/process \1 \2 \3 
>
>
> >With this type of pqact entry, a new process decoder is spawned for 
> >every product because the parameters "\1 \2 \3" change for every
> >product. The LDM currently has a limit on the number of processes of 32: 
>
> >/*
> > * Tuning parameter: MAXENTRIES is the number of descriptors
> > * you wish to allocate to this list. You need to leave enough
> > * around for the error output and for rpc.
> > */
> >#define MAXENTRIES 32
>
> >If we estimate 2-3 products/second, then there would be over 150 decoders 
> >spawned per minute.  You could raise the limit to 200 if your computer
> >platform/OS can handle that many processes.  The downside is that your ldm 
> >configuration would be non-standard.  The file to change is: pqact/filel.c 
> >This is the reason the process decoder is being spawned twice or more 
> >for every product: the LDM was reaping the process decoder before it was
> >done.
>
>      This may be the solution unless I can figure out another way.  I
>      notice that the problem seems to only occur with the larger products, 
>      usually > 6000 bytes.
>
>
> >Another solution:
>
> >You could read the product in your decoder, extract the header, then do the 
> >rest of the decoder processing.  That's how the current UPC perl decoders 
> >work.  The major pro is that there is only one decoder running.
>
>
>      I'm sorry, I don't follow you here.  Do you mean to jump to another 
>      script from the first one?
>
>
> >Observation:
>
> >In the decoder process there are many system("mv ...")-type calls.  For 
> >every system call a fork is done, which is time- and resource-consuming 
> >for the LDM and OS.  It would be better to move all the system 
> >calls into a separate external script that could be run out of cron.  If you
> >named your files something.new, then the external script could do the moves 
> >outside the LDM.  In my opinion this would be a much cleaner solution. 
>
> >Robb...
>
>      Not sure how those calls could be run from cron, since their receipt
>      is event-driven, rather than schedule-driven.  Perhaps the Perl "copy" 
>      command would do better than using "system".  I think when I tried to 
>      make this change I had errors; I could revisit the situation.  Right
>      now, system performance, based on "top" analysis and script execution 
>      times, does not seem to be hindered.
>
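A minimal sketch of the cron-driven mover described in the observation above (the spool directory and the .new suffix convention are assumptions, not the real layout); note that event-driven receipt is fine here because the decoder only ever creates .new files and the cron job picks up whatever has accumulated:

```perl
#!/usr/bin/perl
# Hypothetical cron job: rename files the decoder wrote with a .new
# suffix into their final names, outside the LDM.  Perl's built-in
# rename() avoids the fork that every system("mv ...") call costs.
use strict;
use warnings;

my $spool = "/home/ldm/data";      # assumed spool directory
for my $new (glob "$spool/*.new") {
    (my $final = $new) =~ s/\.new$//;
    rename $new, $final or warn "rename $new failed: $!\n";
}
```

A crontab entry running it every minute would keep the moves off the LDM's critical path, e.g. `* * * * * /home/ldm/move_new`.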
     