[IDDBrasil #XPZ-611749]: Re: 20060609: CONDUIT Latency (cont.)



Hi again Waldenio,

re: The preliminary indications were that splitting the CONDUIT requests
helped decrease your latencies.

The indication now is that splitting the feed had little or no effect
on your latencies.  The next step is to increase the number of request
lines from 5 to 10.  This would make the example I sent previously:

request CONDUIT "MT.gfs_CY.(00|12).*[05]$" idd.cise-nsf.gov PRIMARY
request CONDUIT "MT.gfs_CY.(00|12).*[16]$" idd.cise-nsf.gov PRIMARY
request CONDUIT "MT.gfs_CY.(00|12).*[27]$" idd.cise-nsf.gov PRIMARY
request CONDUIT "MT.gfs_CY.(00|12).*[38]$" idd.cise-nsf.gov PRIMARY
request CONDUIT "MT.gfs_CY.(00|12).*[49]$" idd.cise-nsf.gov PRIMARY

change to:

request CONDUIT "MT.gfs_CY.(00|12).*0$" idd.unidata.ucar.edu PRIMARY
request CONDUIT "MT.gfs_CY.(00|12).*1$" idd.unidata.ucar.edu PRIMARY
request CONDUIT "MT.gfs_CY.(00|12).*2$" idd.undiata.ucar.edu PRIMARY
request CONDUIT "MT.gfs_CY.(00|12).*3$" idd.unidata.ucar.edu PRIMARY
request CONDUIT "MT.gfs_CY.(00|12).*4$" idd.unidata.ucar.edu PRIMARY
request CONDUIT "MT.gfs_CY.(00|12).*5$" idd.unidata.ucar.edu PRIMARY
request CONDUIT "MT.gfs_CY.(00|12).*6$" idd.unidata.ucar.edu PRIMARY
request CONDUIT "MT.gfs_CY.(00|12).*7$" idd.unidata.ucar.edu PRIMARY
request CONDUIT "MT.gfs_CY.(00|12).*8$" idd.unidata.ucar.edu PRIMARY
request CONDUIT "MT.gfs_CY.(00|12).*9$" idd.unidata.ucar.edu PRIMARY

Since you are requesting data every 6 hours, you will, of course, need
to adjust the cycle portion of the patterns to cover all four GFS
cycles (00, 06, 12, and 18).
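For example (a sketch only, assuming you keep the same 10-way split),
the first two request lines would become:

request CONDUIT "MT.gfs_CY.(00|06|12|18).*0$" idd.unidata.ucar.edu PRIMARY
request CONDUIT "MT.gfs_CY.(00|06|12|18).*1$" idd.unidata.ucar.edu PRIMARY

with the remaining eight lines following the same pattern through "9$".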

Please note:
I am also asking you to feed from idd.unidata.ucar.edu instead
of idd.cise-nsf.gov.  The reason for this change is that we are seeing
bad packet loss on idd.cise-nsf.gov at the moment.  We believe the
problem is due to a bus conflict between the Gigabit Ethernet
adapter and the Fibre Channel adapter for idd.cise-nsf.gov's RAID.
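
If you would like to confirm the packet loss from your end (assuming
ICMP is not blocked along the path), a quick check is something like:

  ping -c 100 idd.cise-nsf.gov
  ping -c 100 idd.unidata.ucar.edu

and comparing the percentage of lost packets reported for the two hosts.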

I must reiterate a comment I have made several times in the past (including
in my last email):

  We currently believe that the culprit in your data transfer problems
  is some sort of artificial limiting somewhere in the link from the US to
  Brazil.  We need to identify where this bottleneck is and work to resolve/fix
  it.

Steve Emmerson and I looked at the CONDUIT latency numbers on moingobe
this morning in comparison to the latencies seen in a variety of
other feeds (e.g., HDS, IDS|DDPLUS, NNEXRAD, etc.).  The fact that
you can get these other streams with virtually no latency tells us that
there _is_ some sort of artificial throttling of the CONDUIT feed.  If
the problem were a lack of network bandwidth, then all feeds should show
increased latencies as the CONDUIT volume increases.  Since this is _not_
seen, it most likely means that you have enough bandwidth to do what you
want to do, but someone has put in place some filtering.  Let's find out
who or what is limiting your feeds!
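
One comparison you can run yourself on moingobe (a sketch; adjust the
feed types and offsets as you see fit) is to monitor the same upstream
host with the LDM 'notifyme' utility for a low-volume feed and for
CONDUIT:

  notifyme -vl- -h idd.unidata.ucar.edu -f "IDS|DDPLUS" -o 3600
  notifyme -vl- -h idd.unidata.ucar.edu -f CONDUIT -o 3600

If the low-volume feed arrives with essentially no latency while CONDUIT
lags badly during the large GFS runs, that points to feed-specific
throttling rather than a raw shortage of bandwidth.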

Cheers,

Tom
****************************************************************************
Unidata User Support                                    UCAR Unidata Program
(303) 497-8642                                                 P.O. Box 3000
address@hidden                                   Boulder, CO 80307
----------------------------------------------------------------------------
Unidata HomePage                       http://www.unidata.ucar.edu
****************************************************************************


Ticket Details
===================
Ticket ID: XPZ-611749
Department: Support IDD Brasil
Priority: Normal
Status: Closed