
[McIDAS #QNI-213974]: IMGCHA problems

Hi Paul,

Sorry for the very tardy reply...

> Well we survived the storm!

We are having our own spate of weather here in Colorado.  I got 1' of
snow on Saturday - Sunday night, and we are expecting about the same
starting sometime tonight.  The other thing is that the temperature
dropped down to about -25F at my house in the middle of last week.
I don't mind the cold since I have a nice wood stove that keeps us warm.

> I had been unable to get our site updating
> during it. Still trying to find out why so much failed. I figured out a
> few things, some that lead to some questions. For simplicity:
> 1. We had to switch to UCAR.ADDE data feeds. I am not sure why our local
> stuff is not working correctly. Gilbert did an upgrade and I am sure that
> I do not know XCD....perhaps you can log on and take a look? Same
> login/password as before. If you need it, I will send it.

Can you send the access information for the 'mcidas' and 'ldm' accounts
to my private email?  address@hidden  It would be best if
you do not mention the account name or password in the same email that
gives the name of the machine to which the logins correspond.

> 2. With the data feed problems, we were not getting the text correctly
> either. WWDISP was hanging causing a process backlog. Is there a way to
> time WWDISP out in case of future problems?

WWDISP should time out after 2 minutes.

> Also, could the text issues be
> the reason our WWDISP was missing stuff? I really want to use WWDISP but
> it is not so good yet.

I have been plotting stuff using WWDISP right along with no problems.
I have been doing this by accessing the RTWXTEXT dataset either on
adde.ucar.edu or adde.cise-nsf.gov:

<as 'mcidas' or in the script you are running as 'ldm'>
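If the pointing needs to be (re)established on your end, a sketch of how that might look from a script (dataloc.k is the shell-level form of the McIDAS DATALOC command; this assumes a working McIDAS-X install with mcenv in the PATH, so it is not runnable as-is elsewhere):

```shell
# Sketch only -- requires a working McIDAS-X installation.
# Point the RTWXTEXT group at one of the UCAR servers:
mcenv << EOF
dataloc.k ADD RTWXTEXT adde.ucar.edu
EOF
```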

> 3. The IMGCOPY and IMGCHA commands were working well now. Not sure what
> had happened with the SAT alias....it works now. Might have been when the
> processes were backlogged.


> 4. Is there a good way to open up sessions? I have  a lot of images I am
> generating several times an hour. I run it with a kickoff environment but
> the perl script open the mcidas session that passes on variables for BATCH
> files to run. I open about 30 sessions sequentially. Is there a way to
> open one session and pass the parameters to batch files without exiting
> the shell and opening a new one? Not sure if that is clear.

You could follow the route laid out in the example Bourne shell script
included in the Unidata McIDAS distribution:  mcrun.sh:

mcenv << EOF

# put McIDAS-X commands you want to run here, one command per line.

# Example (note that these lines are commented out!!):
# batch.k p1 p2 .... MYBATCH1.BAT
# batch.k p3 p4 .... MYBATCH2.BAT

# done

EOF

p1, p2 - parameters to pass to the McIDAS BATCH file
p3, p4 - parameters to pass to the McIDAS BATCH file
MYBATCH1.BAT - name of a McIDAS BATCH file
MYBATCH2.BAT - name of a McIDAS BATCH file

> I am just wondering how much processing power we need to do so many
> automated images.

If the processing is serialized, you don't need much.  If the processing
occurs in parallel, then you might need to adjust the amount of shared
memory that your system is configured for.  One thing you definitely
have to pay attention to when doing processing in parallel is to
make sure that the various processes don't interfere with each other.
Interference would occur, for instance, when more than one McIDAS script
modifies a McIDAS resource (e.g., a string table value) that another
script needs to read or modify.  Serializing the processing avoids
that problem.
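If some of the jobs can't easily be chained into a single script, a simple lock can enforce the serialization. A minimal, portable sketch (the script name in the usage comment is hypothetical):

```shell
# Serialize jobs with a lock directory so two scripts never touch shared
# McIDAS state at the same time.  mkdir is atomic, so only one process
# at a time can create the lock and proceed.
LOCKDIR="${TMPDIR:-/tmp}/mcidas-batch.lock"

run_serialized() {
    # Wait until we own the lock
    until mkdir "$LOCKDIR" 2>/dev/null; do
        sleep 1
    done
    "$@"
    status=$?
    rmdir "$LOCKDIR"
    return $status
}

# Usage (hypothetical script name):
#   run_serialized /home/ldm/util/make_images.sh GOES-EAST
```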

> Ok that is all for now. BTW Your help for so many years is so greatly
> admired and appreciated!

No worries.  Again, sorry for the slow reply.

> On Mon, 7 Feb 2011, Paul L. Sirvatka wrote:
> > when i point to local data on mcidas, i cannot run such commands as sfccon.
> > something may have gotten hosed. I do not know anything about MCIDAS XCD. 
> > Any advice?
> Hmmm. On that one, talk to Tom. If there's a bug in the software, I'd rather
> him tackle it. I checked your ldmd.logs, nothing in there showing an issue. You
> are getting IDS|DDPLUS, and the decoder shows in the McIDAS XCD log that it is
> working. The clock is correct, so...(shrug)
> # NOAAPORT-broadcast observational data
> IDS|DDPLUS      .*      PIPE
> /home/ldm/decoders/xcd_run DDS
> This seems to be working fine, too.
> But...mcscour.sh wasn't working. I ran it manually, and it works fine.

OK, this is likely the problem.  If McIDAS POINT data files (MDXXnnnn) don't
get scoured before they get to be 10 days old, new data will be decoded into
the end of existing files.  The problem with this is that MDXX files have
a fixed size.  Once a file gets filled to capacity, no new data will be written,
and this will not be reported as an error.

My hunch is that the upgrade replaced the mcscour.sh file being run by
the user 'ldm', and that the old copy had a setting the new copy lacks.

> but from crontab, it doesn't. You may want to have Tom check that out, because
> rewriting of old files can cause this.

I can't explain this other than to note that the environment a process
gets when run from cron is different from the environment of a process
run from the command line of a logged-in user.
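One common workaround is to stop depending on the login environment altogether and set what the script needs at the top of the script itself, since cron starts jobs with a near-empty environment and reads no login profile. A sketch, with assumed paths (MCPATH is the McIDAS file search path; substitute the real install locations):

```shell
# Pin down the environment inside the script itself; all paths below are
# assumptions, not the actual install locations on your server.
MCIDAS_BIN=/home/mcidas/bin
PATH="$MCIDAS_BIN:/usr/bin:/bin"
export PATH
MCPATH="/home/mcidas/workdata:/home/mcidas/data"
export MCPATH
# With the environment set explicitly, the same script behaves the same
# whether it is run from cron or from a login shell.
```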


Unidata User Support                                    UCAR Unidata Program
(303) 497-8642                                                 P.O. Box 3000
address@hidden                                   Boulder, CO 80307
Unidata HomePage                       http://www.unidata.ucar.edu

Ticket Details
Ticket ID: QNI-213974
Department: Support McIDAS
Priority: Normal
Status: Closed

NOTE: All email exchanges with Unidata User Support are recorded in the Unidata inquiry tracking system and then made publicly available through the web. If you do not want to have your interactions made available in this way, you must let us know in each email you send to us.