
[LDM #WXT-754815]: Blocked by Jeff Weber



Hi Grant,

re: was there anything unusual going on at DRI recently?

> Over the weekend, Saturday July 21, DRI was having some networking
> connectivity issues. I do not know exactly what the issue was, but we
> received multiple reports that our websites were unreachable.

OK.  We saw the problem with LDM connections from DRI yesterday (Wednesday),
not over the weekend.

re:
> We are
> running a bit of an older LDM version, 6.5.1; are there known issues when
> there are connectivity problems?

No.  That said, it would be a good idea to upgrade to a recent version
of the LDM.  Steve Emmerson (the LDM developer here) is working on a new
release, so if you plan to move to the v6.10.x line, you will want to
wait for that version (v6.10.2) before upgrading.

re:
> Unfortunately our LDM logs only go back about 24 hours, otherwise I would
> make them available to you.

OK, thanks for looking.

re:
> We do not request CONDUIT ".*", but here is what our CONDUIT requests
> looked like until yesterday:
> 
> request        CONDUIT         "(nam.*awip3d*)"           thelma.ucar.edu PRIMARY
> request        CONDUIT         "(nam.*awip12*)"           thelma.ucar.edu PRIMARY
> request        CONDUIT         "(nam.*awphys*)"           thelma.ucar.edu PRIMARY
> request        CONDUIT         "(gfs.*pgrbf0*)"           thelma.ucar.edu PRIMARY
> request        CONDUIT         "(gfs.*pgrbf1*)"           thelma.ucar.edu PRIMARY
> request        CONDUIT         "(gfs.*pgrbf2*)"           thelma.ucar.edu PRIMARY
> request        CONDUIT         "(gfs.*pgrbf3*)"           thelma.ucar.edu PRIMARY
> request        CONDUIT         "(gfs.*pgrbf4*)"           thelma.ucar.edu PRIMARY
> request        CONDUIT         "(gfs.*pgrbf5*)"           thelma.ucar.edu PRIMARY
> request        CONDUIT         "(gfs.*pgrbf6*)"           thelma.ucar.edu PRIMARY
> request        CONDUIT         "(gfs.*pgrbf7*)"           thelma.ucar.edu PRIMARY
> request        CONDUIT         "(gfs.*pgrbf8*)"           thelma.ucar.edu PRIMARY
> request        CONDUIT         "(gfs.*pgrbf9*)"           thelma.ucar.edu PRIMARY
> request        CONDUIT         "(gfs.*pgrb2f0*)"          thelma.ucar.edu PRIMARY
> request        CONDUIT         "(gfs.*pgrb2f1*)"          thelma.ucar.edu PRIMARY
> request        CONDUIT         "(gfs.*pgrb2f2*)"          thelma.ucar.edu PRIMARY
> request        CONDUIT         "(gfs.*pgrb2f3*)"          thelma.ucar.edu PRIMARY
> request        CONDUIT         "(gfs.*pgrb2f4*)"          thelma.ucar.edu PRIMARY
> request        CONDUIT         "(gfs.*pgrb2f5*)"          thelma.ucar.edu PRIMARY
> request        CONDUIT         "(gfs.*pgrb2f6*)"          thelma.ucar.edu PRIMARY
> request        CONDUIT         "(gfs.*pgrb2f7*)"          thelma.ucar.edu PRIMARY
> request        CONDUIT         "(gfs.*pgrb2f8*)"          thelma.ucar.edu PRIMARY
> request        CONDUIT         "(gfs.*pgrb2f9*)"          thelma.ucar.edu PRIMARY
> request        CONDUIT         "(MT.gfs_CY|MT.nam.*#212)" thelma.ucar.edu PRIMARY

Is the bandwidth into your shop so limited that you needed to split
your feed this finely?

re: 
> And as of yesterday, to reduce our number of connections, we have changed
> to:
> 
> request CONDUIT         "(nam.*awip3d*)"           idd.unidata.ucar.edu
> request CONDUIT         "(nam.*awip12*)"           idd.unidata.ucar.edu
> request CONDUIT         "(nam.*awphys*)"           idd.unidata.ucar.edu
> request CONDUIT         "(gfs.*pgrbf[012]*)"       idd.unidata.ucar.edu
> request CONDUIT         "(gfs.*pgrbf[345]*)"       idd.unidata.ucar.edu
> request CONDUIT         "(gfs.*pgrbf[6789]*)"      idd.unidata.ucar.edu
> request CONDUIT         "(gfs.*pgrb2f[01234]*)"    idd.unidata.ucar.edu
> request CONDUIT         "(gfs.*pgrb2f[56789]*)"    idd.unidata.ucar.edu
> request CONDUIT         "(MT.gfs_CY|MT.nam.*#212)" idd.unidata.ucar.edu

This is much better from our point of view.  Each REQUEST line
results in a separate rpc.ldmd invocation (ldmd in LDM v6.10.x and
later).  Each invocation consumes resources on our end, so it is
best when downstream sites (like yours) limit the number of
REQUESTs.
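
As an illustration (a sketch only, reusing patterns you already have):
the three nam REQUESTs above could be combined into one by alternation,
cutting your connection count from nine to seven:

  request CONDUIT "(nam.*(awip3d|awip12|awphys))" idd.unidata.ucar.edu

The trade-off is that the three nam product sets then share a single
connection, so a burst in one can delay the others.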

Question:

- can the pgrbf and pgrb2f requests be further consolidated without
  affecting product receipt latency?  (one possibility is sketched below)
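
One possible consolidation (a sketch only; whether product-receipt
latency stays acceptable depends on the bandwidth into your site):

  request CONDUIT "(gfs.*pgrbf[0-9])"  idd.unidata.ucar.edu
  request CONDUIT "(gfs.*pgrb2f[0-9])" idd.unidata.ucar.edu

These two REQUESTs cover the same forecast hours as the five split
patterns above.  One caveat: LDM patterns are POSIX extended regular
expressions, so a trailing "*" means "zero or more of the preceding
item"; requiring one digit, as in "pgrbf[0-9]", keeps the intent
explicit.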

re:
> I hope this is helpful. Let me know if there's anything more I can do.

Thanks, this is helpful.

Cheers,

Tom
--
****************************************************************************
Unidata User Support                                    UCAR Unidata Program
(303) 497-8642                                                 P.O. Box 3000
address@hidden                                   Boulder, CO 80307
----------------------------------------------------------------------------
Unidata HomePage                       http://www.unidata.ucar.edu
****************************************************************************


Ticket Details
===================
Ticket ID: WXT-754815
Department: Support IDD
Priority: Normal
Status: Open


NOTE: All email exchanges with Unidata User Support are recorded in the Unidata inquiry tracking system and then made publicly available through the web. If you do not want to have your interactions made available in this way, you must let us know in each email you send to us.