Re: LDM 6.2.1 possible memory issue?



Steven,

>Date: Tue, 08 Mar 2005 13:13:23 -0600
>From: "Steven Danz" <address@hidden>
>Organization: Aviation Weather Center
>To: Steve Emmerson <address@hidden>
>Subject: Re: LDM 6.2.1 possible memory issue?

The above message contained the following:

> Done, though it doesn't appear like I have to wait until the end of the day:
> 
> 20050308.184502 9562 2384 rpc.ldmd 0.1
> 20050308.184502 9563 795632 pqact 0.5
> 20050308.184502 9564 796992 rpc.ldmd 0.5
> 20050308.184502 9565 796992 rpc.ldmd 0.2
> 20050308.184502 9566 796992 rpc.ldmd 0.1
> 20050308.184502 9569 796360 rpc.ldmd 0.5
> 20050308.184502 9617 796520 rpc.ldmd 0.2
> 20050308.184601 9562 2384 rpc.ldmd 0.0
> 20050308.184601 9563 795764 pqact 0.3
> 20050308.184601 9564 797776 rpc.ldmd 0.4
> 20050308.184601 9565 797776 rpc.ldmd 0.1
> 20050308.184601 9566 797776 rpc.ldmd 0.3
> 20050308.184601 9569 796360 rpc.ldmd 0.3
> 20050308.184601 9617 796520 rpc.ldmd 0.2
> 20050308.184601 11267 796528 rpc.ldmd 0.1
> 20050308.184601 11268 796528 rpc.ldmd 0.1

The above only has two data-points for LDM process 9564 (which I assume
is the one receiving data).  I need many more data-points than that;
let the script run for at least an hour.
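The monitoring script itself isn't reproduced in this thread, so the
following is only a sketch of the kind of sampling loop that would
produce lines in the quoted "timestamp pid VSZ name %CPU" format (the
process-name patterns and one-minute interval are assumptions based on
the logs above):

```python
# Sketch only: the actual Unidata monitoring script is not shown in
# this thread.  This samples ps(1) and prints one line per LDM process
# in roughly the quoted format: YYYYMMDD.HHMMSS pid vsz comm pcpu
import subprocess
import time


def sample_ldm(patterns=("rpc.ldmd", "pqact")):
    """Return (timestamp, pid, vsz_kb, comm, pcpu) tuples for matching processes."""
    out = subprocess.run(
        ["ps", "-eo", "pid=,vsz=,pcpu=,comm="],
        capture_output=True, text=True, check=True,
    ).stdout
    stamp = time.strftime("%Y%m%d.%H%M%S")
    rows = []
    for line in out.splitlines():
        parts = line.split(None, 3)
        if len(parts) < 4:
            continue
        pid, vsz, cpu, comm = parts
        if comm in patterns:
            rows.append((stamp, int(pid), int(vsz), comm, float(cpu)))
    return rows


if __name__ == "__main__":
    while True:                      # one sample per minute, as in the logs
        for row in sample_ldm():
            print(row[0], row[1], row[2], row[3], row[4])
        time.sleep(60)
```

Run over an hour, this yields the dozens of per-process data-points
requested above rather than the two in the quoted excerpt.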

> I restarted at 18:44, so this is just after the restart.  We don't run
> rtstats, so I replaced that with pqact since it seems to have similar
> issues.  I can also switch back to 6.0.14 to see if the issue goes
> away or not if you are curious about that.  However, I ran the script
> on a RHEL3 system with 6.0.14 and an 800Meg queue and it reported:
> 
> 20050308.185349 15322 1980 rpc.ldmd 0.0
> 20050308.185349 15323 7936 pqact 1.2
> 20050308.185349 15326 27752 rpc.ldmd 0.8
> 20050308.185349 15327 27752 rpc.ldmd 0.3
> 20050308.185349 15328 27752 rpc.ldmd 0.4
> 20050308.185349 15329 27824 rpc.ldmd 0.8
> 20050308.185349 15339 2244 rpc.ldmd 1.3
> 20050308.185349 15343 2436 rpc.ldmd 1.2
> 20050308.185349 15701 2444 rpc.ldmd 1.4
> 20050308.185349 15717 2444 rpc.ldmd 0.8
> 20050308.185349 15718 2444 rpc.ldmd 0.8
> 20050308.185349 15719 2444 rpc.ldmd 1.0
> 20050308.185349 15720 2444 rpc.ldmd 1.0
> 20050308.185349 15722 2456 rpc.ldmd 0.8
> 20050308.185349 1880 2456 rpc.ldmd 0.9
> 20050308.185349 31570 2456 rpc.ldmd 1.0
> 20050308.185349 32693 2456 rpc.ldmd 0.7
> 20050308.185349 29623 2456 rpc.ldmd 0.9

Again, too few data-points.

> Even though a VSS jump from ~28Meg to ~790Meg seems to be an issue,

Depending on the operating-system and the version of the LDM, the
memory-mapped product-queue might or might not be counted against the
virtual size of the process.  Consequently, I'm not surprised by
rpc.ldmd processes whose virtual size is slightly larger than the
product-queue itself.
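To illustrate the point (this is not LDM code, just a small sketch):
a large memory-mapped file counts toward a process's virtual size the
moment it is mapped, even though resident memory grows only as pages
are actually touched.  That is why a process mapping an ~800 MB queue
can report ~790 MB of VSZ while using very little RAM.  The file size
and temp-file name here are illustrative:

```python
# Sketch (not LDM code): mapping a file inflates virtual size without
# consuming comparable RAM, since pages are demand-paged.
import mmap
import os
import tempfile

size = 8 * 1024 * 1024           # 8 MB here; the LDM queue was ~800 MB
fd, path = tempfile.mkstemp()
try:
    os.ftruncate(fd, size)       # sparse file: no data blocks yet
    with mmap.mmap(fd, size) as m:
        # The whole mapping counts toward VSZ immediately; resident
        # pages (RSS) grow only as bytes are actually touched.
        m[0] = 1                 # touch a single page
        assert len(m) == size
finally:
    os.close(fd)
    os.unlink(path)
```

On Linux, comparing the VmSize and VmRSS lines of /proc/PID/status
before and after the mmap() call makes the difference visible directly.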

Regards,
Steve Emmerson


NOTE: All email exchanges with Unidata User Support are recorded in the Unidata inquiry tracking system and then made publicly available through the web. If you do not want to have your interactions made available in this way, you must let us know in each email you send to us.