
[IDD #DDH-304651]: latency problems between idd.unidata.ucar.edu and freshair.atmos.washington.edu



Hi Harry,

re: high latencies for WMO feedtypes on freshair
> Moving to SSEC did not fix the problem.  I tried goofing around with the TCP
> parameters under Linux.  I think the problem is the new congestion control 
> for TCP.

We were scratching our heads over this one as well.  Do you think that the new
congestion control for TCP adequately explains why the latencies for high-volume
feeds like CONDUIT, NEXRAD2, and NIMAGE remain low while the latencies for the
more modest IDS|DDPLUS|HDS (aka WMO) feed are excessively high?

> If you read the file Documentation/networking/tcp.txt in the Linux kernel
> source, they state that the congestion control was changed starting with 
> 2.6.13.
> The default on my system under 2.6.15.4 ended up being the "BIC" mechanism.
> Things got somewhat better when I switched to "reno".  However, I had the
> best success going back to a 2.6.11.12 kernel.

What (UTC) time did you switch to the 2.6.11.12 kernel?  The latency plot for
IDS|DDPLUS:

http://www.unidata.ucar.edu/cgi-bin/rtstats/iddstats_nc?IDS|DDPLUS+freshair.atmos.washington.edu

dropped to zero at more or less 20 UTC.
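
In case it helps when you are comparing kernels, one quick way to see which
congestion control algorithm a running kernel has actually selected is to read
it back out of /proc.  The snippet below is just a minimal sketch and assumes
the standard Linux proc entries that exist on 2.6.13 and later:

  # Minimal sketch: report the TCP congestion control algorithm the running
  # kernel has selected (these /proc entries exist on Linux 2.6.13 and later).
  def show_congestion_control():
      with open("/proc/sys/net/ipv4/tcp_congestion_control") as f:
          print("active:", f.read().strip())         # e.g. "bic" or "reno"
      try:
          with open("/proc/sys/net/ipv4/tcp_available_congestion_control") as f:
              print("available:", f.read().strip())  # algorithms the kernel can switch to
      except IOError:
          print("available list not exposed by this kernel")

  show_congestion_control()

The same value can also be set persistently via net/ipv4/tcp_congestion_control
in /etc/sysctl.conf, alongside the buffer parameters you list below.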

> I don't have time to work on this further before vacation this week.  There is
> an interesting document I just found on TCP tuning:
> 
> http://www-didc.lbl.gov/TCP-tuning/linux.html
> 
> It may be interesting to change some of the parameters mentioned.  At the
> current time I have the following tuning parameters set in /etc/sysctl.conf:
> 
> net/core/wmem_max=2500000
> net/core/rmem_max=2500000
> net/ipv4/tcp_rmem="4096 5000000 5000000"
> net/ipv4/tcp_wmem="4096 65536 5000000"

I know that Mike will be interested in this since we found that we needed to
tune the TCP parameters in order to get good performance in our data relay tests
for TIGGE.  Just in case you hadn't heard, we demonstrated the ability to move
approx. 17 GB/hr from ECMWF to NCAR with very low latencies over a multi-day
period last month.  This was only possible after the NCAR machine's TCP stack
was tuned, however.  Before the tuning, the latencies hovered at 1 hour and data
was being continuously lost.
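
As a rough sanity check on maximums like the 5000000-byte values in your
sysctl.conf: the socket buffers need to cover at least the bandwidth-delay
product of the path.  The arithmetic below is illustrative only; the 150 ms
round-trip time is an assumption, not a measurement of the ECMWF-NCAR path:

  # Illustrative bandwidth-delay product check for the TIGGE-style transfer.
  # The 150 ms round-trip time is an assumption, not a measured value.
  rate_bytes_per_sec = 17 * 1024**3 / 3600.0   # ~17 GB/hr expressed in bytes/s
  rtt_sec = 0.150                              # assumed ECMWF <-> NCAR round trip
  bdp = rate_bytes_per_sec * rtt_sec           # bytes "in flight" the window must cover
  print("BDP ~= %.0f KB" % (bdp / 1024.0))     # roughly 740 KB

Even for the 17 GB/hr case the product comes out well under the 5000000-byte
maximums quoted above.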

Cheers,

Tom
****************************************************************************
Unidata User Support                                    UCAR Unidata Program
(303) 497-8642                                                 P.O. Box 3000
address@hidden                                   Boulder, CO 80307
----------------------------------------------------------------------------
Unidata HomePage                       http://www.unidata.ucar.edu
****************************************************************************


Ticket Details
===================
Ticket ID: DDH-304651
Department: Support IDD
Priority: Normal
Status: Closed