
20010430: gempak and system rebuild



Chris,

Sounds like you are hitting a race condition as the OS hands out
the message queue. This is nothing new. If you never had this problem
before, that was probably just a fortuitous coincidence.

Some ways around this are:

1) If you are only using one program- eg just gdcntr, not overlaying
   with sfmap for instance- you can use the version of the program
   that links the device driver directly into the program. This eliminates
   the need for separate gplt and device driver processes. For example,
   gdcntr_gf links the GIF driver with gdcntr. You can build these
   from $NAWIPS by issuing "make programs_gf". (I provide them
   in the binary distribution tar file.)

2) You can run all your scripts through a master script which 
   prevents multiple copies from running simultaneously.
   I do this in particular with processes that I kick off from the LDM,
   since starting up the LDM and getting an hour's worth of data may
   fire off lots of processes. In these cases, I also try to use the
   _gf programs to cut down on the number of processes.

3) Stagger the cron execution times so the scripts do not start at once.
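
As a sketch of option 2, a master script can use an atomic mkdir as a
portable lock. The wrapper name and lock path below are hypothetical,
not part of the GEMPAK distribution- just one way to keep two cron jobs
from starting GEMPAK programs at the same time:

```shell
#!/bin/sh
# run_one.sh -- hypothetical wrapper: runs the GEMPAK script given as
# arguments only if no other copy is active, so jobs never overlap.
# mkdir is atomic, so the lock directory serves as a portable mutex
# on most Unix systems.

LOCKDIR=/tmp/gempak_master.lock
STATUS=0

if mkdir "$LOCKDIR" 2>/dev/null; then
    "$@" || STATUS=$?       # run the GEMPAK script passed as arguments
    rmdir "$LOCKDIR"        # release the lock when the job finishes
else
    echo "another GEMPAK job is running; skipping: $*" >&2
    STATUS=1
fi
```

Each cron entry would then invoke its script through the wrapper
(e.g. "run_one.sh /path/to/your_gif_script") instead of calling it
directly, so a second job started while the first is still running
simply skips that cycle.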

Steve Chiswell
Unidata User Support




>From: address@hidden (Chris Hennon)
>Organization: UCAR/Unidata
>Keywords: 200104301725.f3UHPwL04455

>Steve -
>
>I've looked into it a bit further and found out that if I run a script by
>itself (no other gempak processes running), it completes normally.  As
>soon as I fire up another script to run at the same time, both start
>spitting out stuff like:
>
>[GDCNTR -3]  Fatal error initializing GEMPLT.
>
>and failing.  I've never had this conflict before (running multiple
>scripts at one time) - can you remind me if there is something I have to
>take care of to resolve this?
>
>Chris
>
>On Mon, 30 Apr 2001, Unidata Support wrote:
>
>> 
>> Chris,
>> GEMPAK programs require message queues. McIDAS requires shared memory segments.
>> 
>> You should know if you have message queues available by running
>> a simple program like gpmap. You should see gplt get fired up.
>> Then your ipcs command will show the message queue.
>> 
>> Since you are producing gifs, it sounds like you do have message queues.
>> 
>> Nothing in that regard has changed since 5.4. 
>> 
>> Steve Chiswell
>> Unidata User Support
>> 
>> 
>> >From: address@hidden (Chris Hennon)
>> >Organization: UCAR/Unidata
>> >Keywords: 200104301601.f3UG18L00971
>> 
>> >Steve -
>> >
>> >I had been running 5.4 until a hacker attack prompted a system rebuild.  I
>> >thought it would be a good time to upgrade to 5.6A and so did.  I'm
>> >experiencing quite a struggle in getting just a couple of my old scripts
>> >running properly.  What inevitably happens is a script would run normally
>> >for a while, producing several gifs, and then it would spit out a "could
>> >not fork" error, obviously unable to launch any more processes.  During
>> >this time a lot of dead message queues fill up, even though gpend is
>> >faithfully being executed after each process.  
>> >
>> >What I think may have happened is that we turned off a lot of things that
>> >are potential security holes in Solaris 2.7 - I noticed that when I run
>> >"ipcs" it returns:
>> >
>> >IPC status from <running system> as of Mon Apr 30 11:44:02 EDT 2001
>> >Message Queue facility inactive. 
>> >Shared Memory facility inactive.
>> >T   ID      KEY     MODE    OWNER   GROUP
>> >Semaphores:
>> >#
>> >
>> >I remember that before, these things were active and root always had one
>> >message queue there.  So, I was wondering if this could be a problem for
>> >me?
>> >
>> >Thanks.
>> >
>> >Chris
>> >
>> 
>> ****************************************************************************
>> Unidata User Support                                    UCAR Unidata Program
>> (303)497-8644                                                  P.O. Box 3000
>> address@hidden                                   Boulder, CO 80307
>> ----------------------------------------------------------------------------
>> Unidata WWW Service                        http://www.unidata.ucar.edu/     
>> ****************************************************************************
>> 
>