
[CONDUIT #OAE-251505]: NCEP CONDUIT feed setup



Hi Carissa,

re:
> Just was wondering if you have been able to connect to the server?
> 
> Probably not since I do still see log entries like -
> Oct 25 11:24:59 vm-lnx-conduit1 rtstats[4084] WARN: Couldn't connect to
> LDM on rtstats.unidata.ucar.edu using either port 388 or portmapper; :
> RPC: Remote system error - Connection refused

Hmm... I just tried this from my home machine, and the connection works
with no problems.  Please try the following and send us the results:

<as 'ldm' on vm-lnx-conduit1>
telnet rtstats.unidata.ucar.edu 388

This should test the ability to connect to rtstats.unidata.ucar.edu
through port 388.
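If telnet isn't installed on the VM, a rough equivalent is below -- a sketch
using bash's built-in /dev/tcp pseudo-device (requires bash and the
coreutils 'timeout'; the helper name is just illustrative):

```shell
# check_port: returns 0 if a TCP connection to host $1, port $2 succeeds.
# Uses bash's /dev/tcp pseudo-device, so no extra tools are needed.
check_port() {
    timeout 5 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null
}

check_port rtstats.unidata.ucar.edu 388 \
    && echo "port 388 reachable" \
    || echo "port 388 blocked or unreachable"
```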

re:
> Let us know if you need anything to assist with getting this going. Thanks,

If the telnet test fails, try it from some other machine in your shop.

FYI: LOTS of .gov machines are reporting real-time stats without problems
to rtstats.unidata.ucar.edu.  Here is one example plot that verifies
this:

http://www.unidata.ucar.edu/cgi-bin/rtstats/iddstats_vol_nc?CONDUIT+conduita.fsl.noaa.gov

> Carissa
> 
> On 10/03/2013 07:56 AM, Carissa Klemmer wrote:
> > The firewall changes have been made on our side. Let me know if you
> > have any issues with any connections. The system is being fed with the
> > same pqact.conf as Boulder, so you should see data right away.
> >
> > Carissa
> >
> > On 09/19/2013 04:37 PM, Unidata CONDUIT Support wrote:
> >> Hi Carissa,
> >>
> >> re:
> >>> I was just wondering if anyone has had a chance to review the email I
> >>> had sent. I know you had a busy week last week, but just want to make
> >>> sure this is on your radar so we can get a CONDUIT backup going.
> >> I apologize for not being able to respond before now.  I was on travel
> >> all of last week and have been battling fallout from the severe
> >> flooding that we have been experiencing here pretty much every day
> >> since (sigh).
> >>
> >> re:
> >>> Well... I feel like we have had this conversation once before :)  But
> >>> since losing our Silver Spring server we are finally ready to get a
> >>> backup going for CONDUIT.  Due to reasons out of our control, this
> >>> will not be through the server that we had set up last fall.
> >> OK, this should pose no problems on our side.
> >>
> >> re:
> >>> We have the hardware and software in place, but since it has been a
> >>> year I'd like to confirm that the ldmd.conf below is still what you
> >>> would like to see.  If that is good I will have them add the IPs to
> >>> the firewall in the next few days.  Once that is done we can start
> >>> testing again.
> >> Comments are mixed in with the ldmd.conf snippet below.
> >>
> >> re:
> >>> And unless we hear negative feedback from you, the system is actually
> >>> going to be load balanced between 3 virtual systems. So you will only
> >>> request from 1 IP.
> >>>
> >>> 140.90.101.42
> >> Excellent!  This is how we configured the UCAR top level IDD relay,
> >> idd.unidata.ucar.edu.  We use LVS for the cluster director.
> >>
> >> re:
> >>> We are still working with the admins on the amount of RAM available.
> >>> Currently it is 16G, so we could probably make the queue size 10G
> >>> without issue.  But I have asked them what it would take to up it.
> >> I agree, a 10 GB queue should work nicely on a 16 GB machine. You might
> >> even be able to get away with a 12 GB queue.
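
If you do bump the queue size, the usual procedure on a stock LDM 6
installation is to change the registry value and recreate the queue.  A
sketch (run as the 'ldm' user; assumes the standard LDM utilities are on
the PATH):

```shell
# Resizing the product queue on a stock LDM 6 install (run as 'ldm').
# The 10G value follows the discussion above; adjust as needed.
ldmadmin stop                # stop the LDM first
regutil -s 10G /queue/size   # record the new size in the LDM registry
ldmadmin delqueue            # remove the old queue file
ldmadmin mkqueue             # build a new queue at the registry size
ldmadmin start               # restart the LDM
```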
> >>
> >> re:
> >>> The only other issue is the request of the NDFD data from tg1 (TOC).
> >>> This is not in place yet.  Could you let us know if it is still
> >>> necessary for the feed?
> >> Unidata users are used to getting these data, so their going away
> >> permanently might cause some to complain.  We will discuss this with
> >> our governing committees (especially the User's Committee) when they
> >> meet here in Boulder at the beginning of October.  I think, however,
> >> that it would be OK to not have the NDFD portion of the feed working
> >> when the new machines go live.  We can explain this to the community
> >> (IF anyone complains).
> >>
> >> re:
> >>> ALLOW   ANY ^((localhost|loopback)|(127\.**0\.0\.1\.?$))
> >> I don't understand the '.**0' in this ALLOW.  We use:
> >>
> >> allow   ANY     ^((localhost|loopback)|(127\.0\.0\.1\.?$)|([a-z].*\.unidata\.ucar\.edu\.?$))
> >>
> >> The differences are:
> >>
> >> - 127\.0\.0\.1\.?$  vs your  127\.**0\.0\.1\.?$
> >>
> >> - our inclusion of access by all unidata.ucar.edu machines by name
> >>
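
These ldmd.conf patterns are POSIX extended regular expressions, so one
quick way to sanity-check the corrected pattern is with 'grep -E'.  The
host names below are made-up examples:

```shell
# Exercise the ldmd.conf ALLOW pattern (an extended RE) with grep -E.
pattern='^((localhost|loopback)|(127\.0\.0\.1\.?$)|([a-z].*\.unidata\.ucar\.edu\.?$))'

echo '127.0.0.1'             | grep -Eq "$pattern" && echo "127.0.0.1: allowed"
echo 'lead.unidata.ucar.edu' | grep -Eq "$pattern" && echo "lead.unidata.ucar.edu: allowed"
echo 'badhost.example.com'   | grep -Eq "$pattern" || echo "badhost.example.com: denied"
```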
> >> re:
> >>> # Give permission to the Unidata Program Center
> >>> ALLOW   ANY     ^[a-z].*\.unidata\.ucar\.edu\.**?$
> >> Again, I don't understand the '.**' here.  See above for what we use.
> >>
> >> re:
> >>> # non-UNIDATA CONDUIT hosts
> >>> allow   ANY     ^atm\.cise-nsf\.gov\.?$ .* ^data2/TIGGE_resend
> >> This entry can be removed since atm.cise-nsf.gov no longer exists (we
> >> were asked to shut down our IDD presence in the NSF machine room in
> >> Vienna, VA because we were using too much bandwidth).
> >>
> >> re:
> >>> allow   ANY ^idd\.aos\.wisc\.edu\.?$        .* ^data2/TIGGE_resend
> >>> allow   ANY     ^flood\.atmos\.uiuc\.edu\.?$    .* ^data2/TIGGE_resend
> >>> allow   ANY     ^(idd-ingest|iddrs3)\.meteo\.**psu\.edu\.?$ .* ^data2/TIGGE_resend
> >> Looks good modulo the '.**'.
> >>
> >> re:
> >>> # NCAR
> >>> ALLOW   ANY     ^dataportal\.ucar\.edu\.?$      .*
> >>> ALLOW   ANY     ^datagrid\.ucar\.edu\.?$        .*
> >>> # ECMWF
> >>> ALLOW   ANY     ^193\.61\.196\.74\.?$   .*
> >>> # Test from vm-lnx-ldmex
> >>> ALLOW   ANY     ^140\.90\.100\.93\.?$   .*
> >>> # test from SSEC operated by Unidata
> >>> ALLOW   ANY     ^unidata2-new\.ssec\.wisc\.**edu\.?$      .*
> >> OK on all of these.
> >>
> >> re:
> >>>> Thanks,
> >> Again, many apologies for the extremely long time it took us to
> >> reply to your email!!
> >>
> >> Cheers,
> >>
> >> Tom
> >
> 
> 

Cheers,

Tom
--
****************************************************************************
Unidata User Support                                    UCAR Unidata Program
(303) 497-8642                                                 P.O. Box 3000
address@hidden                                   Boulder, CO 80307
----------------------------------------------------------------------------
Unidata HomePage                       http://www.unidata.ucar.edu
****************************************************************************


Ticket Details
===================
Ticket ID: OAE-251505
Department: Support CONDUIT
Priority: Normal
Status: Closed