>From: Anthony James Wimmers <address@hidden>
>Organization: UVa
>Keywords: 200202192053.g1JKrYx27851 McIDAS-X AXFORM

Tony,

re: XCD decoding of grid files

>All the changes you made are exactly right. Now, about your
>questions:

re: did you know that MRF was going away

>I didn't know that, so thanks for the heads-up. If the MRF is leaving
>anytime before this summer, then I suppose that we can cancel it anytime
>on windfall, after I check to make sure we're not using it anywhere.

OK.

re: The CONDUIT feed has grid 3 fields that you will most likely be
interested in (global 1 x 1 degree). This is one of the grids that
gribdec.k was unable to decode until my recent work.

>Right. Grid 3 would be all we want.

OK. We can attend to that after the McIDAS addendum AND after we face
the issue of upgrading your McIDAS distribution.

re: reduced XCD grid decoding so /p4/data has some breathing room

>It's been stable for several days now, so I think you're right. Now,
>there's what looks like a related issue to deal with. I thought that
>it would fix itself after we fixed the diskspace problem, but apparently
>it didn't. The problem is that the GOES images are coming in doubled:
>
>IMGLIST RTIMAGES/GE-IR.ALL
>Image file directory listing for:RTIMAGES/GE-IR
> Pos  Satellite/    Date          Time      Center      Band(s)
>      sensor                                Lat  Lon
> ---  ------------- ------------  --------  ---- ----   ------------
>   1  G-8 IMG       19 FEB 02050  11:15:00   23   71    4
>   2  G-8 IMG       19 FEB 02050  12:15:00   23   71    4
>   3  G-8 IMG       19 FEB 02050  12:15:00   23   71    4
>   4  G-8 IMG       19 FEB 02050  13:15:00   23   71    4
>   5  G-8 IMG       19 FEB 02050  13:15:00   23   71    4
>   6  G-8 IMG       19 FEB 02050  14:15:00   23   71    4
>   7  G-8 IMG       19 FEB 02050  14:15:00   23   71    4
>   8  G-8 IMG       19 FEB 02050  15:15:00   23   71    4
>   9  G-8 IMG       19 FEB 02050  15:15:00   23   71    4
>  10  G-8 IMG       19 FEB 02050  11:15:00   23   71    4
>IMGLIST: done
>
>The problem manifests itself on our Weather Page by getting all the
>times wrong in our 24-hour image loops, so it's kind of important to
>fix.

OK.
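As an aside, duplicate frames like the ones above can be spotted mechanically from a saved IMGLIST listing. A minimal sketch (the filename and the canned rows below are just illustrations, not part of the actual fix):

```shell
#!/bin/sh
# Hypothetical sketch: find duplicated image date/times in captured
# IMGLIST output. 'imglist.txt' stands in for output redirected from
# an IMGLIST command; only a few sample rows are shown here.
cat > imglist.txt <<'EOF'
  1  G-8 IMG  19 FEB 02050  11:15:00  23  71  4
  2  G-8 IMG  19 FEB 02050  12:15:00  23  71  4
  3  G-8 IMG  19 FEB 02050  12:15:00  23  71  4
EOF
# Columns 4-7 are the date and time; print any combination that
# appears more than once.
awk '{print $4, $5, $6, $7}' imglist.txt | sort | uniq -d
```

With the sample rows above this prints the one doubled time, `19 FEB 02050 12:15:00`.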
>I noticed this problem the same time I noticed the diskspace problem, so
>I assume they're related.

Yes, they were.

>Do you think that I should try cleaning the
>spool? That's the only thing I can think of besides looking upstream for
>the glitch.

The problem turned out to be two invocations of the LDM's pqact running
when only one should have been. The effect was that each pqact decoded
every image as it came in, so you got two copies of each. The second
pqact invocation must have been spawned when the disk filled up. There
were also some routines that had been running for several days and
really eating up CPU. The fix for all of this was:

<as 'ldm'>
ldmadmin stop
<kill all 'ldm' processes that would not die>
ldmadmin start

After doing the above, the load average on windfall dropped from 7 to
less than 1. Also, only one invocation of pqact is now running, so you
will get just one decoded image for each one ingested into the LDM
queue. I did not remake the LDM queue, as that did not seem to be
necessary. If you see other problems, we may want to revisit this; in
that case, I would recommend editing ~ldm/bin/ldmadmin and increasing
the queue size to 400 MB.

>Thanks,

Let me know if you see anything amiss tomorrow.

Tom
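After a restart like the one above, it is easy to confirm that only one pqact survived. A minimal sketch (the canned `ps.txt` contents here are just an illustration; on a live system you would pipe real `ps` output instead):

```shell
#!/bin/sh
# Hypothetical sketch: count pqact invocations in process-listing
# output to confirm only one is running. 'ps.txt' is canned sample
# output for illustration; on windfall you would use something like
#   ps -ef | grep '[p]qact'
# (the [p] keeps grep from matching its own command line).
cat > ps.txt <<'EOF'
ldm  1234  1  0  Feb19  ?  00:00:05  pqact -f ANY etc/pqact.conf
ldm  1235  1  0  Feb19  ?  00:00:05  pqact -f ANY etc/pqact.conf
EOF
n=$(grep -c 'pqact' ps.txt)
echo "$n pqact process(es) found"
```

With the two sample lines above this reports 2, which is exactly the doubled-decoder situation that was causing the duplicate images.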
NOTE: All email exchanges with Unidata User Support are recorded in the Unidata inquiry tracking system and then made publicly available through the web. If you do not want to have your interactions made available in this way, you must let us know in each email you send to us.