
Re: virtual upstream LDM and firewalls (was: LDM 6.4.1 Questions)



Larry,

>Date: Thu, 18 Aug 2005 18:30:48 -0500
>From: Larry Hinson <address@hidden>
>Organization: NOAA/NWS
>To: address@hidden
>Subject: LDM 6.4.1 Questions

The above message contained the following:

> I recently attended the Unidata LDM workshop a few weeks ago.  Just 
> wanted to say I thought the course was great.  I gained a lot of 
> understanding and appreciation for the LDM system.  

I'm glad you found it useful.

> I have some questions about a uniquely configured LDM system we use here 
> for transferring products from a network outside of AWIPS through a 
> firewall into the AWIPS system. 
> 
> We have what is known as a PDS (Product Distribution Server) which 
> "sends" products through a firewall to a Linux system internal to 
> AWIPS.  This is our primary means of sending products out to the world.  
> The PDS server has the upstream LDM and the AWIPS system has the 
> downstream LDM.  Products are put into the LDM queue via pqinsert on the 
> PDS system, and the LDM on the AWIPS side executes a script via pqact to 
> forward the products onto the AWIPS WAN.  The PDS server has LDM version 
> 6.2.1 and the AWIPS Linux box has LDM version 6.4.1.  The frequency of 
> products per hour can vary from 3/hour to 15/hour, ranging from 500-4000 
> bytes in size.  Products are expired out of the queue once they are 
> forwarded out of AWIPS (this allows retransmits).  We have recently 
> changed our firewall (from an HP to a Juniper) between the two systems 
> (yesterday), and I have been questioning what is normal and what 
> isn't.  The firewall rules are strict such that we can bring data in 
> via port 388, but not allow data back out.  That is, you can do an 
> ldmping from the AWIPS box to the upstream LDM; however, I can't do an 
> ldmping from the upstream LDM on the PDS server to the downstream one 
> on AWIPS.
> 
> Questions:
> 
> 1.  I am seeing about a 24-second time lag between when the products are 
> first put into the queue via pqinsert and when they are finally logged as 
> received and processed on the AWIPS side.  Is this 24-second lag normal 
> between differing systems?  Products are transmitted into the system 
> maybe 1 at a time or 3 at a time.

A 24-second latency indicates a problem.  I suspect that pqinsert(1)
is not being executed by the top-level LDM server via an EXEC entry in
the LDM configuration-file but, rather, is being executed "outside"
the LDM system on the PDS.  This means that the SIGCONT signals that
pqinsert(1) sends to its process group to notify the LDM processes
that a new data-product has been inserted are not being received by
the upstream LDM process.  The failsafe behavior of an upstream LDM in
these circumstances is to awaken every 30 seconds to see if something
new has been inserted into the product-queue.
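
You can check this by comparing process-group IDs: if the pqinsert(1)
processes are not in the same process group as the top-level LDM
server, then the SIGCONT never reaches it.  A rough sketch, assuming a
Linux ps(1) and the ldmd.pid file mentioned below:

    # Process group of the top-level LDM server
    ps -o pid,pgid,comm -p `cat $LDMHOME/ldmd.pid`

    # Process group of any pqinsert that happens to be running
    ps -o pid,pgid,comm -C pqinsert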

Possible solutions include:

    1.  Executing the pqinsert(1) process as a child process of the
        top-level LDM server.  This would require an appropriate EXEC
        entry in the LDM configuration-file (see the sketch after this
        list).

    2.  Having whatever is executing pqinsert(1) outside the LDM send a
        SIGCONT signal to the LDM process group.  This can be done via
        the following command:

            kill -s CONT -<<pid>>

        where <<pid>> is the process ID of the top-level LDM server
        (which is in the file $LDMHOME/ldmd.pid).
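
For the first option, the EXEC entry would name whatever program
currently invokes pqinsert(1) on the PDS.  A rough sketch -- the
script name "insert_pds_products" is only a placeholder for your
actual ingest script -- would be an ldmd.conf entry like

    EXEC    "insert_pds_products"

For the second option, the insertion and the notification can be
combined in the script that runs pqinsert(1), e.g.

    pqinsert /path/to/product && \
        kill -s CONT -<<pid>>

where the pathname is a placeholder and <<pid>> is, again, the
contents of $LDMHOME/ldmd.pid.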

> 2.  The PDS server is a dual system with failover capabilities via 
> heartbeat and a floating IP.  (LDM runs on either server, but not on 
> both concurrently; version 6.2.1.)  In our tests of failing over the 
> system, the LDM is activated on the second PDS server and the LDM on 
> the primary is shut down.  During the time the system is failing over, 
> the AWIPS LDM logs may report that it cannot see the upstream LDM; 
> then the second LDM kicks in gear and the logs say it is connected.  
> This I consider normal, as the process of failing over takes several 
> minutes.  The anomaly occurs, though, when trying to do a pqinsert: 
> the downstream LDM does not acknowledge anything in the queue even 
> after several minutes have elapsed.  The only way to get it to work is 
> to restart the downstream LDM.  This I consider abnormal.  At that 
> point it re-requests the last hour's worth of products, which makes 
> sense, since all of the products have been intentionally expired off 
> the queue.

I'm not sure I understand and have some questions:

    1.  Is the pqinsert(1) being executed on the new system rather than
        the old one?

    2.  After the upstream host changes, does the downstream LDM indicate
        that it has reconnected to the upstream LDM?

    3.  Does the downstream LDM connect using a hostname or IP address?

    4.  Does the new upstream LDM on the new system indicate that the
        downstream LDM has successfully connected?

    5.  Are the product selection criteria printed in the upstream and
        downstream logfiles what you would expect?

    6.  Can you send me relevant excerpts from the three LDM logfiles?
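
It might also help to probe the connection by hand from the downstream
(AWIPS) side right after a failover.  A quick sketch, where
<<upstream>> stands for whatever hostname or floating IP address the
downstream REQUEST entry names:

    # Is an LDM answering on port 388 at that address?
    ldmping <<upstream>>

    # Does it offer recent products matching your request?
    notifyme -v -l- -h <<upstream>> -o 3600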

> 3.  Our new firewall also has a failover capability with a primary and 
> secondary system.  The same anomalous behavior as described above also 
> occurs.  However, because the failover is instantaneous, LDM logs on the 
> AWIPS side do not indicate any discontinuity in operation.  The LDM is 
> just unable to receive products and act on them until I restart the LDM.

I suspect that the problem lies with the firewall.  When the upstream
LDM host changes, the downstream LDM still has what it considers to be a
valid TCP connection to the (now defunct) upstream LDM.  The existence
of this connection should also be in the internal tables of the
firewall.  When the downstream LDM creates a new TCP connection to the
upstream LDM server in order to send an IS_ALIVE message, the firewall
should route that new connection request to the new upstream host.  I
suspect, however, that the firewall is, instead, routing the new
connection request to the old upstream host -- resulting in a timeout or
something similar.  Under these circumstances, a downstream LDM will
assume that the upstream LDM is still alive (although it will continue
to try to verify that assumption).
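
One way to check this from the downstream host is to look at its TCP
connections to port 388 while the problem is occurring -- a sketch,
assuming a Linux netstat(8):

    # LDM connections as seen from the downstream host
    netstat -tn | grep ':388'

If a connection there remains in the ESTABLISHED state even though no
data is arriving, that is probably the stale session in question.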

When you restart the downstream LDM, the act of stopping the LDM causes
the downstream LDM to close its TCP connection to the (defunct) upstream
LDM.  I suspect the firewall is noticing the TCP close-connection
packets and clearing its internal tables.  Consequently, the subsequent
TCP (re)connection to the "upstream" LDM is correctly routed to the new 
upstream host on a different interface rather than the defunct one.

You should communicate this hypothesis to a network administrator.

> 4.  With the new firewall in place, I'm also seeing problems where the 
> downstream LDM reaches a state where it will not receive products from 
> the upstream LDM after about an hour has elapsed.  Yet the logs show 
> the repeated message "upstream LDM is alive" every 60 seconds.

Are the data-product selection-criteria printed in the logfiles what you
would expect?
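
The downstream LDM logs these as "Desired product class" lines, like
the ones in your attached file, so something along the following lines
should show the most recent ones (assuming the usual
$LDMHOME/logs/ldmd.log location):

    grep 'Desired product class' $LDMHOME/logs/ldmd.log | tail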

> I guess I don't understand why the LDM is reaching the state where it 
> won't receive any more products during a failover on either the upstream 
> LDM or the firewall.  Have you seen this type of behavior at other sites?

We haven't seen this exact problem, but we've seen similar ones.  In
every case the problem was with the firewall or the routing (IP) tables.

> I am seeing a discontinuity in the upstream logs, with the error message 
> "ERROR: requester6.c  192 RPC:System error" appearing roughly every 6-7 
> minutes.

This could be related to the above hypothesis.

> Attached is the file.
> 
> Thanks for any information you can offer...
> -Larry
> 
> [Attached file: pdslog.txt]
> 
> Aug 18 22:53:39 awipsfw[20544]: Exiting
> Aug 18 22:53:39 rpc.ldmd[19568]: SIGCHLD
> Aug 18 22:53:39 rpc.ldmd[19568]: child 20544 exited with status 0
> Aug 18 22:54:39 awipsfw[20686]: Connection from awipsfw.awc
> Aug 18 22:54:39 awipsfw[20686]: Exiting
> Aug 18 22:54:39 rpc.ldmd[19568]: SIGCHLD
> Aug 18 22:54:39 rpc.ldmd[19568]: child 20686 exited with status 0
> Aug 18 22:55:39 awipsfw[20981]: Connection from awipsfw.awc
> Aug 18 22:55:39 awipsfw[20981]: Exiting
> Aug 18 22:55:39 rpc.ldmd[19568]: SIGCHLD
> Aug 18 22:55:39 rpc.ldmd[19568]: child 20981 exited with status 0
> Aug 18 22:56:27 rpc.ldmd[19568]: RPC buffer sizes for admin.awc: send=87380; 
> recv=87380
> Aug 18 22:56:27 admin[21162]: Connection from admin.awc
> Aug 18 22:56:27 admin[21162]: ldmd.c:708: Client LDM closed connection
> Aug 18 22:56:27 admin[21162]: Exiting
> Aug 18 22:56:27 rpc.ldmd[19568]: SIGCHLD
> Aug 18 22:56:27 rpc.ldmd[19568]: child 21162 exited with status 0
> Aug 18 22:56:39 rpc.ldmd[19568]: RPC buffer sizes for awipsfw.awc: 
> send=87380; recv=87380
> Aug 18 22:56:39 awipsfw[21221]: Connection from awipsfw.awc
> Aug 18 22:56:39 awipsfw[21221]: Exiting
> Aug 18 22:56:39 rpc.ldmd[19568]: SIGCHLD
> Aug 18 22:56:39 rpc.ldmd[19568]: child 21221 exited with status 0
> Aug 18 22:56:40 awipsfw(feed)[2248]: up6.c:287: nullproc_6() failure to 
> awipsfw.awc: RPC: Timed out
> Aug 18 22:57:04 192.168.31.66[19570]: ERROR: requester6.c:457; 
> ldm_clnt.c:277: Couldn't connect to LDM 6 on 192.168.31.66 using either port 
> 388 or portmapper; ldm_clnt.c:116: : RPC: Remote system error - Connection 
> timed out
> Aug 18 22:57:04 192.168.31.66[19570]: Sleeping 30 seconds before retrying...
> Aug 18 22:57:34 192.168.31.66[19570]: Desired product class: 
> 20050818215734.856 TS_ENDT {{FNEXRAD,  ".*"}}
> Aug 18 22:57:39 awipsfw[21628]: Connection from awipsfw.awc
> Aug 18 22:57:39 awipsfw[21628]: Exiting
> Aug 18 22:57:39 rpc.ldmd[19568]: SIGCHLD
> Aug 18 22:57:39 rpc.ldmd[19568]: child 21628 exited with status 0
> Aug 18 22:58:21 rpc.ldmd[19568]: RPC buffer sizes for localhost.localdomain: 
> send=87380; recv=87888
> Aug 18 22:58:21 localhost[21934]: Connection from localhost.localdomain
> Aug 18 22:58:21 localhost[21934]: ldmprog_4: ldmping from 
> localhost.localdomain
> Aug 18 22:58:21 localhost[21934]: ldmd.c:708: Client LDM closed connection
> Aug 18 22:58:21 localhost[21934]: Exiting
> Aug 18 22:58:21 rpc.ldmd[19568]: SIGCHLD
> Aug 18 22:58:21 rpc.ldmd[19568]: child 21934 exited with status 0
> Aug 18 22:58:39 rpc.ldmd[19568]: RPC buffer sizes for awipsfw.awc: 
> send=87380; recv=87380
> Aug 18 22:58:39 awipsfw[21947]: Connection from awipsfw.awc
> Aug 18 22:58:39 awipsfw[21947]: Exiting
> Aug 18 22:58:39 rpc.ldmd[19568]: SIGCHLD
> Aug 18 22:58:39 rpc.ldmd[19568]: child 21947 exited with status 0
> Aug 18 22:59:39 awipsfw[22089]: Connection from awipsfw.awc
> Aug 18 22:59:39 awipsfw[22089]: Exiting
> Aug 18 22:59:39 rpc.ldmd[19568]: SIGCHLD
> Aug 18 22:59:39 rpc.ldmd[19568]: child 22089 exited with status 0
> Aug 18 23:00:39 awipsfw[22271]: Connection from awipsfw.awc
> Aug 18 23:00:39 awipsfw[22271]: Exiting
> Aug 18 23:00:39 rpc.ldmd[19568]: SIGCHLD
> Aug 18 23:00:39 rpc.ldmd[19568]: child 22271 exited with status 0
> Aug 18 23:01:27 rpc.ldmd[19568]: RPC buffer sizes for admin.awc: send=87380; 
> recv=87380
> Aug 18 23:01:27 admin[22459]: Connection from admin.awc
> Aug 18 23:01:27 admin[22459]: ldmd.c:708: Client LDM closed connection
> Aug 18 23:01:27 admin[22459]: Exiting
> Aug 18 23:01:27 rpc.ldmd[19568]: SIGCHLD
> Aug 18 23:01:27 rpc.ldmd[19568]: child 22459 exited with status 0
> Aug 18 23:01:39 rpc.ldmd[19568]: RPC buffer sizes for awipsfw.awc: 
> send=87380; recv=87380
> Aug 18 23:01:39 awipsfw[22518]: Connection from awipsfw.awc
> Aug 18 23:01:39 awipsfw[22518]: Exiting
> Aug 18 23:01:39 rpc.ldmd[19568]: SIGCHLD
> Aug 18 23:01:39 rpc.ldmd[19568]: child 22518 exited with status 0
> Aug 18 23:02:39 awipsfw[22925]: Connection from awipsfw.awc
> Aug 18 23:02:39 awipsfw[22925]: Exiting
> Aug 18 23:02:39 rpc.ldmd[19568]: SIGCHLD
> Aug 18 23:02:39 rpc.ldmd[19568]: child 22925 exited with status 0
> Aug 18 23:03:21 rpc.ldmd[19568]: RPC buffer sizes for localhost.localdomain: 
> send=87380; recv=87888
> Aug 18 23:03:21 localhost[23230]: Connection from localhost.localdomain
> Aug 18 23:03:21 localhost[23230]: ldmprog_4: ldmping from 
> localhost.localdomain
> Aug 18 23:03:21 localhost[23230]: ldmd.c:708: Client LDM closed connection
> Aug 18 23:03:21 localhost[23230]: Exiting
> Aug 18 23:03:21 rpc.ldmd[19568]: SIGCHLD
> Aug 18 23:03:21 rpc.ldmd[19568]: child 23230 exited with status 0
> Aug 18 23:03:39 rpc.ldmd[19568]: RPC buffer sizes for awipsfw.awc: 
> send=87380; recv=87380
> Aug 18 23:03:39 awipsfw[23243]: Connection from awipsfw.awc
> Aug 18 23:03:39 awipsfw[23243]: Exiting
> Aug 18 23:03:39 rpc.ldmd[19568]: SIGCHLD
> Aug 18 23:03:39 rpc.ldmd[19568]: child 23243 exited with status 0
> Aug 18 23:03:52 192.168.31.66[19570]: ERROR: requester6.c:457; 
> ldm_clnt.c:277: Couldn't connect to LDM 6 on 192.168.31.66 using either port 
> 388 or portmapper; ldm_clnt.c:116: : RPC: Remote system error - Connection 
> timed out
> Aug 18 23:03:52 192.168.31.66[19570]: Sleeping 30 seconds before retrying...
> Aug 18 23:04:22 192.168.31.66[19570]: Desired product class: 
> 20050818220422.920 TS_ENDT {{FNEXRAD,  ".*"}}
> Aug 18 23:04:39 awipsfw[23385]: Connection from awipsfw.awc
> Aug 18 23:04:39 awipsfw[23385]: Exiting
> Aug 18 23:04:39 rpc.ldmd[19568]: SIGCHLD
> Aug 18 23:04:39 rpc.ldmd[19568]: child 23385 exited with status 0
> Aug 18 23:05:39 awipsfw[23564]: Connection from awipsfw.awc
> Aug 18 23:05:39 awipsfw[23564]: Exiting
> Aug 18 23:05:39 rpc.ldmd[19568]: SIGCHLD
> Aug 18 23:05:39 rpc.ldmd[19568]: child 23564 exited with status 0
> Aug 18 23:06:27 rpc.ldmd[19568]: RPC buffer sizes for admin.awc: send=87380; 
> recv=87380
> Aug 18 23:06:27 admin[23743]: Connection from admin.awc
> Aug 18 23:06:27 admin[23743]: ldmd.c:708: Client LDM closed connection
> Aug 18 23:06:27 admin[23743]: Exiting
> Aug 18 23:06:27 rpc.ldmd[19568]: SIGCHLD
> Aug 18 23:06:27 rpc.ldmd[19568]: child 23743 exited with status 0
> Aug 18 23:06:39 rpc.ldmd[19568]: RPC buffer sizes for awipsfw.awc: 
> send=87380; recv=87380
> Aug 18 23:06:39 awipsfw[23801]: Connection from awipsfw.awc
> Aug 18 23:06:39 awipsfw[23801]: Exiting
> Aug 18 23:06:39 rpc.ldmd[19568]: SIGCHLD
> Aug 18 23:06:39 rpc.ldmd[19568]: child 23801 exited with status 0
> Aug 18 23:07:39 awipsfw[24182]: Connection from awipsfw.awc
> Aug 18 23:07:39 awipsfw[24182]: Exiting
> Aug 18 23:07:39 rpc.ldmd[19568]: SIGCHLD
> Aug 18 23:07:39 rpc.ldmd[19568]: child 24182 exited with status 0
> Aug 18 23:08:21 rpc.ldmd[19568]: RPC buffer sizes for localhost.localdomain: 
> send=87380; recv=87888
> Aug 18 23:08:21 localhost[24492]: Connection from localhost.localdomain
> Aug 18 23:08:21 localhost[24492]: ldmprog_4: ldmping from 
> localhost.localdomain
> Aug 18 23:08:21 localhost[24492]: ldmd.c:708: Client LDM closed connection
> Aug 18 23:08:21 localhost[24492]: Exiting
> Aug 18 23:08:21 rpc.ldmd[19568]: SIGCHLD
> Aug 18 23:08:21 rpc.ldmd[19568]: child 24492 exited with status 0
> Aug 18 23:08:39 rpc.ldmd[19568]: RPC buffer sizes for awipsfw.awc: 
> send=87380; recv=87380
> Aug 18 23:08:39 awipsfw[24502]: Connection from awipsfw.awc
> Aug 18 23:08:39 awipsfw[24502]: Exiting
> Aug 18 23:08:39 rpc.ldmd[19568]: SIGCHLD
> Aug 18 23:08:39 rpc.ldmd[19568]: child 24502 exited with status 0
> Aug 18 23:09:39 awipsfw[24648]: Connection from awipsfw.awc
> Aug 18 23:09:39 awipsfw[24648]: Exiting
> Aug 18 23:09:39 rpc.ldmd[19568]: SIGCHLD
> Aug 18 23:09:39 rpc.ldmd[19568]: child 24648 exited with status 0
> Aug 18 23:10:39 awipsfw[24830]: Connection from awipsfw.awc
> Aug 18 23:10:39 awipsfw[24830]: Exiting
> Aug 18 23:10:39 rpc.ldmd[19568]: SIGCHLD
> Aug 18 23:10:39 rpc.ldmd[19568]: child 24830 exited with status 0
> Aug 18 23:10:40 192.168.31.66[19570]: ERROR: requester6.c:457; 
> ldm_clnt.c:277: Couldn't connect to LDM 6 on 192.168.31.66 using either port 
> 388 or portmapper; ldm_clnt.c:116: : RPC: Remote system error - Connection 
> timed out
> Aug 18 23:10:40 192.168.31.66[19570]: Sleeping 30 seconds before retrying...
> Aug 18 23:11:10 192.168.31.66[19570]: Desired product class: 
> 20050818221110.986 TS_ENDT {{FNEXRAD,  ".*"}}
> Aug 18 23:11:27 rpc.ldmd[19568]: RPC buffer sizes for admin.awc: send=87380; 
> recv=87380
> Aug 18 23:11:27 admin[25012]: Connection from admin.awc
> Aug 18 23:11:27 admin[25012]: ldmd.c:708: Client LDM closed connection
> Aug 18 23:11:27 admin[25012]: Exiting
> Aug 18 23:11:27 rpc.ldmd[19568]: SIGCHLD
> Aug 18 23:11:27 rpc.ldmd[19568]: child 25012 exited with status 0
> Aug 18 23:11:39 rpc.ldmd[19568]: RPC buffer sizes for awipsfw.awc: 
> send=87380; recv=87380
> Aug 18 23:11:39 awipsfw[25070]: Connection from awipsfw.awc
> Aug 18 23:11:39 awipsfw[25070]: Exiting
> Aug 18 23:11:39 rpc.ldmd[19568]: SIGCHLD
> Aug 18 23:11:39 rpc.ldmd[19568]: child 25070 exited with status 0
> Aug 18 23:12:39 awipsfw[25450]: Connection from awipsfw.awc
> Aug 18 23:12:39 awipsfw[25450]: Exiting
> Aug 18 23:12:39 rpc.ldmd[19568]: SIGCHLD
> Aug 18 23:12:39 rpc.ldmd[19568]: child 25450 exited with status 0
> Aug 18 23:13:21 rpc.ldmd[19568]: RPC buffer sizes for localhost.localdomain: 
> send=87380; recv=87888
> Aug 18 23:13:21 localhost[25759]: Connection from localhost.localdomain
> Aug 18 23:13:21 localhost[25759]: ldmprog_4: ldmping from 
> localhost.localdomain
> Aug 18 23:13:21 localhost[25759]: ldmd.c:708: Client LDM closed connection
> Aug 18 23:13:21 localhost[25759]: Exiting
> Aug 18 23:13:21 rpc.ldmd[19568]: SIGCHLD
> Aug 18 23:13:21 rpc.ldmd[19568]: child 25759 exited with status 0
> Aug 18 23:13:39 rpc.ldmd[19568]: RPC buffer sizes for awipsfw.awc: 
> send=87380; recv=87380
> Aug 18 23:13:39 awipsfw[25769]: Connection from awipsfw.awc
> Aug 18 23:13:39 awipsfw[25769]: Exiting
> Aug 18 23:13:39 rpc.ldmd[19568]: SIGCHLD
> Aug 18 23:13:39 rpc.ldmd[19568]: child 25769 exited with status 0
> Aug 18 23:14:39 awipsfw[25913]: Connection from awipsfw.awc
> Aug 18 23:14:39 awipsfw[25913]: Exiting
> Aug 18 23:14:39 rpc.ldmd[19568]: SIGCHLD
> Aug 18 23:14:39 rpc.ldmd[19568]: child 25913 exited with status 0
> Aug 18 23:15:37 awipsfw[26088]: Connection from awipsfw.awc
> Aug 18 23:15:39 awipsfw[26093]: Connection from awipsfw.awc
> Aug 18 23:15:39 awipsfw[26093]: Exiting
> Aug 18 23:15:39 rpc.ldmd[19568]: SIGCHLD
> Aug 18 23:15:39 rpc.ldmd[19568]: child 26093 exited with status 0
> Aug 18 23:16:07 awipsfw[26088]: ldmd.c:708: Client LDM closed connection
> Aug 18 23:16:07 awipsfw[26088]: Exiting
> Aug 18 23:16:07 rpc.ldmd[19568]: SIGCHLD
> Aug 18 23:16:07 rpc.ldmd[19568]: child 26088 exited with status 0
> Aug 18 23:16:27 rpc.ldmd[19568]: RPC buffer sizes for admin.awc: send=87380; 
> recv=87380
> Aug 18 23:16:27 admin[26277]: Connection from admin.awc
> Aug 18 23:16:27 admin[26277]: ldmd.c:708: Client LDM closed connection
> Aug 18 23:16:27 admin[26277]: Exiting
> Aug 18 23:16:27 rpc.ldmd[19568]: SIGCHLD
> Aug 18 23:16:27 rpc.ldmd[19568]: child 26277 exited with status 0
> Aug 18 23:16:39 rpc.ldmd[19568]: RPC buffer sizes for awipsfw.awc: 
> send=87380; recv=87380
> Aug 18 23:16:39 awipsfw[26329]: Connection from awipsfw.awc
> Aug 18 23:16:39 awipsfw[26329]: Exiting
> Aug 18 23:16:39 rpc.ldmd[19568]: SIGCHLD
> Aug 18 23:16:39 rpc.ldmd[19568]: child 26329 exited with status 0
> Aug 18 23:17:29 192.168.31.66[19570]: ERROR: requester6.c:457; 
> ldm_clnt.c:277: Couldn't connect to LDM 6 on 192.168.31.66 using either port 
> 388 or portmapper; ldm_clnt.c:116: : RPC: Remote system error - Connection 
> timed out
> Aug 18 23:17:29 192.168.31.66[19570]: Sleeping 30 seconds before retrying...
> Aug 18 23:17:39 awipsfw[26737]: Connection from awipsfw.awc
> Aug 18 23:17:39 awipsfw[26737]: Exiting
> Aug 18 23:17:39 rpc.ldmd[19568]: SIGCHLD
> Aug 18 23:17:39 rpc.ldmd[19568]: child 26737 exited with status 0
> Aug 18 23:17:59 192.168.31.66[19570]: Desired product class: 
> 20050818221759.051 TS_ENDT {{FNEXRAD,  ".*"}}
> Aug 18 23:18:21 rpc.ldmd[19568]: RPC buffer sizes for localhost.localdomain: 
> send=87380; recv=87888
> Aug 18 23:18:21 localhost[27052]: Connection from localhost.localdomain
> Aug 18 23:18:21 localhost[27052]: ldmprog_4: ldmping from 
> localhost.localdomain
> Aug 18 23:18:21 localhost[27052]: ldmd.c:708: Client LDM closed connection
> Aug 18 23:18:21 localhost[27052]: Exiting
> Aug 18 23:18:21 rpc.ldmd[19568]: SIGCHLD
> Aug 18 23:18:21 rpc.ldmd[19568]: child 27052 exited with status 0
> Aug 18 23:18:39 rpc.ldmd[19568]: RPC buffer sizes for awipsfw.awc: 
> send=87380; recv=87380
> Aug 18 23:18:39 awipsfw[27058]: Connection from awipsfw.awc
> Aug 18 23:18:39 awipsfw[27058]: Exiting
> Aug 18 23:18:39 rpc.ldmd[19568]: SIGCHLD
> Aug 18 23:18:39 rpc.ldmd[19568]: child 27058 exited with status 0
> Aug 18 23:19:39 awipsfw[27200]: Connection from awipsfw.awc
> Aug 18 23:19:39 awipsfw[27200]: Exiting
> Aug 18 23:19:39 rpc.ldmd[19568]: SIGCHLD
> Aug 18 23:19:39 rpc.ldmd[19568]: child 27200 exited with status 0
> 

Regards,
Steve Emmerson