
Re: Upgrading CONDUIT with NGI (fwd) (fwd)




===============================================================================
Robb Kambic                                Unidata Program Center
Software Engineer III                      Univ. Corp for Atmospheric Research
address@hidden             WWW: http://www.unidata.ucar.edu/
===============================================================================

---------- Forwarded message ----------
Date: Mon, 15 May 2000 07:47:29 -0600
From: Linda Miller <address@hidden>
To: address@hidden, address@hidden, address@hidden
Subject: Re: Upgrading CONDUIT with NGI (fwd) (fwd)

Hi all,

I thought you might be interested in this exchange of information.

Linda

---------- Forwarded Message ----------
Date: Friday, May 12, 2000 6:06 PM +0000
From: Guy Almes <address@hidden>
To: Linda Miller <address@hidden>
Subject: Re: Upgrading CONDUIT with NGI (fwd)

Linda,
   This is very unusual, and I'd encourage him to raise the issue with his
campus network engineers.  To the degree that we can shed light on it, we'll
be interested in tracking down the bug.
   I do *not* think that this is due to any congestion on Abilene or any of
the Internet2 gigapops.
   Thus, it's probably something not configured quite right, or possibly a
piece of campus infrastructure that is hosed.

   Looking in detail at Robert's note below, I'm curious about why his routing
is as it is.
   Specifically, if both NSBF and NOAA-Boulder are on NISN, then I'm not sure
why they'd use Abilene.
   Maybe Jerry Janssen of Boulder could advise.

   Regards,
         -- Guy

Linda Miller wrote:
>
> Hi Guy,
>
> I hate to bother you with this, but perhaps you could advise me on how to
> answer a question such as this.  I suppose it could be volume or the size of
> the connection or some such thing, but if you could help me or suggest someone
> else who might be able to explain this, I'd really appreciate it.
>
> Thanks so much!
>
> Linda
>
> ---------- Forwarded Message ----------
> Date: Wednesday, May 10, 2000 10:17 PM +0100
> From: Robert Mullenax <address@hidden>
> To: Linda Miller <address@hidden>
> Subject: Re: Upgrading CONDUIT with NGI
>
> You know, it's interesting: our routing to our feed site
> (cirrus.al.noaa.gov) used to be strictly on the NASA Science
> Internet route.  A typical round-trip time was 80-100 ms.
> Since we are routed through UCAID now, the time is 100-140 ms.
> Some of this could be due to increased traffic at the JPL
> router, but for us, being on the UCAID route has led to a
> decrease in performance.
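>
> For anyone who wants to reproduce the comparison, here is a minimal
> sketch (Python) that approximates round-trip time by timing TCP
> connects; port 80 is only an assumption, and plain ping or traceroute
> against cirrus.al.noaa.gov is the more direct tool:
>
>     import socket, time
>
>     def tcp_rtt_ms(host, port=80, tries=5):
>         """Rough RTT estimate: one TCP handshake takes about one RTT."""
>         samples = []
>         for _ in range(tries):
>             t0 = time.time()
>             try:
>                 socket.create_connection((host, port), timeout=5).close()
>             except socket.error:
>                 continue  # host may not answer on this port
>             samples.append((time.time() - t0) * 1000.0)
>         return samples
>
>     samples = tcp_rtt_ms("cirrus.al.noaa.gov")
>     if samples:
>         print("min %.0f ms, max %.0f ms" % (min(samples), max(samples)))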
>
> Robert Mullenax
> At 03:47 PM 5/10/2000 -0600, you wrote:
> >Hi George,
> >
> >--On Monday, May 8, 2000 11:48 AM -0400 George Huffman
> ><address@hidden> wrote:
> >
> >> Linda et al. - I think the shift to NGI is a really great move.  Just to
> >> make sure we've got our arguments in order, I have a few questions on the
> >> draft list of benefits:
> >>
> >> * Elimination of T1 costs.  True!
> >>   Will CONDUIT be expected to contribute some fraction of the costs of NGI
> >>   connectivity at the point of origin (OSO or NCEP)?
> >
> >I'm not sure of the answer.  As Dave Fulker points out frequently,
> >"there's no free lunch."  The NOAA people should be able to answer this;
> >however, from what I've heard, plans are already underway for the
> >connections to be established.  This would be an additional project used
> >in conjunction with the new capabilities.
> >
> >> * Elimination of T1 delays.  True!
> >>   Has someone tried to estimate how much bandwidth will actually
> >>   be available at the point of origin?
> >
> >The current link is a T1 (1.544 megabits per second), and I'm told that
> >the initial NGI connection at NCEP will operate at OC3 (155 megabits per
> >second), with additional capacity to be added in the future.
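> >
> >To put those two rates in perspective, a back-of-the-envelope comparison
> >(the 100 MB file size is only an illustrative assumption):
> >
> >    T1_BPS  = 1.544e6      # T1 line rate, bits per second
> >    OC3_BPS = 155.52e6     # OC3 line rate, bits per second
> >    size_bits = 100e6 * 8  # a hypothetical 100 MB product file
> >
> >    print("T1:  %.0f s" % (size_bits / T1_BPS))   # ~518 s, about 8.6 minutes
> >    print("OC3: %.0f s" % (size_bits / OC3_BPS))  # ~5 s, roughly 100x faster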
> >
> >>   As well, is there any forecast on the growth of NGI use at the point of
> >>   origin, which might give us insight into future contention problems?
> >
> >That's a question for NOAA people to answer.
> >
> >>   Finally, has thought been given to how many USWRP sites will have direct
> >>   access to NGI?
> >
> >Many of the USWRP sites are already connected to Internet2.  To put it
> >simply, the NGI is the government's form of Internet2.  So, government
> >labs are connecting to NGI at a rapid pace and working with universities
> >that are connected to Internet2.
> >
> >If you review the Internet2 URL provided in the plan, there's a list of
> >sites already using Internet2.  It's at:
> >
> >http://www.internet2.edu/html/universities.html#
> >
> >
> >>   At some point in the IDD tree the data will end up being
> >>   pushed down "regular" internet (T1 or worse), which could require some
> >>   careful load regulation at the nodes where the step-down occurs.
> >
> >Unidata has been advocating for university sites to connect to
> >Internet2.  Sites that are unable to connect to Internet2 can use the
> >FTP service being provided by NCEP and NWS/OSO.
> >
> >Of course, through the use of relay sites, the remaining leaf-node sites
> >can decide for themselves what data they want or need to receive, and
> >what they turn off.  The LDM handles this quite well.
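> >
> >For example, a leaf node's ldmd.conf can request just the feeds it wants
> >from its upstream relay; a sketch (the hostname is hypothetical):
> >
> >    # Request only the CONDUIT feed from the upstream relay; narrowing
> >    # the regular expression trims what this site receives:
> >    REQUEST CONDUIT ".*" relay.upstream.edu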
> >
> >> * Reduction in FTP loading.  Probably true.
> >>   Should we say "at the point of origin" or some such clarification?
> >
> >I guess we could say that.
> >
> >
> >> * Addition of new data sets.  True!
> >>   Is there any thought on how much more?  And is there any idea of how
> >>   much the USWRP sites can handle?
> >
> >Again, I'll refer you to the URL above.
> >
> >>   This is particularly critical for the top level
> >>   sites, which will have to pass through "everything" that's sent, and
> >>   might have to negotiate NGI-speed input and regular-speed output.
> >
> >This will need to be a consideration of the CONDUIT Working Group.
> >Otherwise, people will just want everything--all the time--which of
> >course is impossible.
> >
> >So far, the participants seem to be handling the volume of data.  We will
> >continue to watch the latency stats as new data are added to the system.
> >
> >Perhaps others will chime in....
> >
> >Linda
> >
> >> * Increased participation.  Highly likely.
> >>
> >> Regards,
> >> GJH
> >>
> >> George J. Huffman, Ph.D.  (Voice)  +1 301-614-6308
> >> Sci. Sys. & Appl., Inc.   (FAX)    +1 301-614-5492
> >> NASA/GSFC Code 912        (Email)  address@hidden
> >> Greenbelt, MD 20771 USA   (Office) Bld. 33 Room C410
> >>
> >>
> >
> >Linda Miller - address@hidden
> >External Liaison, Unidata
> >University Corporation for Atmospheric Research
> >P.O. Box 3000
> >Boulder, CO 80307-3000
> >303 497-8646 fax: 303-497-8690
> >URL:  http://www.unidata.ucar.edu/staff/lmiller/un.act.html
> >
>
> ---------- End Forwarded Message ----------
>
> Linda Miller - address@hidden
> External Liaison, Unidata
> University Corporation for Atmospheric Research
> P.O. Box 3000
> Boulder, CO 80307-3000
> 303 497-8646 fax: 303-497-8690
> URL:  http://www.unidata.ucar.edu/staff/lmiller/un.act.html

---------- End Forwarded Message ----------



Linda Miller - address@hidden
External Liaison, Unidata
University Corporation for Atmospheric Research
P.O. Box 3000
Boulder, CO 80307-3000
303 497-8646 fax: 303-497-8690
URL:  http://www.unidata.ucar.edu/staff/lmiller/un.act.html