
Re: 20050512: Routing from winderground to idd and rtstats



Jeff,

Steve made a code change that should have resolved this error:

  WARNING: ldm_clnt.c:277: Couldn't connect to LDM 6 on rtstats...

and the idd.unidata "No route to host" looks like some sort of route
flapping.  I searched through logfiles on other systems I have access
to and can't find a single occurrence elsewhere.  I don't know what to
think on that one.  As for the last error, "Couldn't connect to LDM 6 on
idd.unidata -- Connection refused", I may have the answer.  We're
running an LVS cluster (for idd) and using the persistence flag so that
the "Are you alive?" query from a downstream goes back to the same real
server it was originally connected to.  If one LDM cluster node starts
misbehaving, the downstream will disconnect, reconnect, and keep being
redirected back to the same failing node.  We're working on logic to
detect when a node has gone bad and drop it out of the cluster, but
clearly we're not quite there yet.
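For reference, the persistence behavior described above is roughly what an ipvsadm setup like the following produces.  This is only an illustrative sketch, not our actual configuration: the VIP, real-server addresses, and timeout are made up.

```shell
# Hypothetical LVS virtual service for LDM (port 388) with client
# persistence: connections from a given client IP are pinned to the
# same real server for the persistence timeout.
VIP=192.0.2.10          # illustrative virtual IP for idd

# Round-robin scheduling; -p <seconds> enables persistence
ipvsadm -A -t ${VIP}:388 -s rr -p 3600

# Real LDM cluster nodes behind the virtual service
ipvsadm -a -t ${VIP}:388 -r 192.0.2.11:388 -g
ipvsadm -a -t ${VIP}:388 -r 192.0.2.12:388 -g

# The piece we're still working on: a health check that removes a
# node whose LDM stops answering, along the lines of
#   ipvsadm -d -t ${VIP}:388 -r 192.0.2.11:388
```

With persistence on, a downstream that disconnects and reconnects inside the timeout window lands on the same real server, which is exactly why a failing node keeps receiving the same client until it is explicitly dropped from the table.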

Could you forward or make available via ftp/web your ldmd.log for today?

mike

On May 19, 11:35am, Jeff Masters wrote:
> Subject: Re: 20050512: Routing from winderground to idd and rtstats
>
> Hi Mike, do you have any update on what's going on? We only get a complete
> set of GFS data via CONDUIT about one run out of five.
>
> Thanks, Jeff
> ------------------------------------------------------------------------------
>  Jeff Masters (address@hidden)                               (  )
>  Chief Meteorologist                              /\ Home of the       (    )
>  The Weather Underground, Inc.               /\  /  \  /\       /\    (      )
>  300 N. Fifth Ave #240                      /  \/    \/  \ /\  /  \    ------
>  Ann Arbor, MI 48104                 ______/              /  \/    \_   \\\\\
>  734-994-8824                                   Weather Underground      \`\`\
>                                             http://www.wunderground.com
>
> On Thu, 12 May 2005, Jeff Masters wrote:
>
> >
> > Hi Mike, here's the error since the last ldm restart, at 16:14 GMT, up to
> > now, 16:38 GMT:
> >
> > May 12 16:14:04 n8 idd[30688]: Starting Up(6.2.1): idd.unidata.ucar.edu:
> > TS_ZERO TS_ENDT {{CONDUIT,  ".*"}}
> > May 12 16:14:25 n8 idd[30688]: ERROR: requester6.c:457; ldm_clnt.c:277:
> > Couldn't connect to LDM 6 on idd.unidata.ucar.edu using either port 388 or
> > portmapper; ldm_clnt.c:116: : RPC: Remote system error - No route to host
> > May 12 16:24:57 n8 idd[30688]: ERROR: requester6.c:457; ldm_clnt.c:277:
> > Couldn't connect to LDM 6 on idd.unidata.ucar.edu using either port 388 or
> > portmapper; ldm_clnt.c:116: : RPC: Remote system error - Connection
> > refused
> > May 12 16:27:23 n8 rtstats[30679]: WARNING: ldm_clnt.c:277: Couldn't
> > connect to LDM 6 on rtstats.unidata.ucar.edu using either port 388 or
> > portmapper; ldm_clnt.c:116: : RPC: Remote system error - No route to host
> > May 12 16:35:00 n8 idd[30688]: ERROR: requester6.c:457; ldm_clnt.c:277:
> > Couldn't connect to LDM 6 on idd.unidata.ucar.edu using either port 388 or
> > portmapper; ldm_clnt.c:116: : RPC: Remote system error - No route to host
> > May 12 16:35:27 n8 idd[30688]: ERROR: requester6.c:457; ldm_clnt.c:277:
> > Couldn't connect to LDM 6 on idd.unidata.ucar.edu using either port 388 or
> > portmapper; ldm_clnt.c:116: : RPC: Remote system error - No route to host
> > May 12 16:35:33 n8 rtstats[30679]: WARNING: ldm_clnt.c:277: Couldn't
> > connect to LDM 6 on rtstats.unidata.ucar.edu using either port 388 or
> > portmapper; ldm_clnt.c:116: : RPC: Remote system error - No route to host
> > May 12 16:36:01 n8 rtstats[30679]: WARNING: ldm_clnt.c:310: nullproc_6
> > failure to rtstats.unidata.ucar.edu; ldm_clnt.c:145: RPC: Timed out
> >
> > Jeff


NOTE: All email exchanges with Unidata User Support are recorded in the Unidata inquiry tracking system and then made publicly available through the web. If you do not want to have your interactions made available in this way, you must let us know in each email you send to us.