
[IDD #DDW-625612]: iperf server at Unidata?



Art,

> > The iperf server exited for some reason.  I tightened up security a bit
> > and restricted access to the PSU hosts in this list (ones that have
> > already hit the iperf server) and have restarted the server.
> >
> > 128.118.41.83
> > 128.118.46.104
> > 128.118.52.83
> > 128.118.99.87
> > 128.118.146.152
> 
> Thanks.  Can you add two more systems?
> 
> ldm.meteo.psu.edu   128.118.28.12
> test1.meteo.psu.edu 128.118.28.245

I've added these as well.  Can you confirm the other five are part of your test?
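
If it helps, a test from test1 toward the iperf server could look something
like the lines below.  This is just a sketch assuming the classic iperf
(version 2) listening on its default port 5001; the window size, duration,
and the <iperf-server-host> placeholder are illustrative, not the actual
values for our setup:

  # From test1.meteo.psu.edu: request a large TCP window, run 30 seconds,
  # report every 5 seconds
  iperf -c <iperf-server-host> -w 4M -t 30 -i 5

  # Add parallel streams to see whether a single TCP connection is the
  # limiting factor
  iperf -c <iperf-server-host> -w 4M -t 30 -P 4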

> The test1 system is actually one of the new systems that will replace our
> existing (old) ldm system and is the one I want to do most of our testing
> from.
> 
> On a different note, Jeff Wolfe (our college network expert) asked if
> idd.unidata.ucar.edu has been tuned for network performance.  He's pushing
> our ISP to find out why our bandwidth to ucar.edu currently appears to be
> limited and would like to make sure the machines involved in the problem
> are configured according to:
> 
> http://www.psc.edu/networking/projects/tcptune/#Linux
>
> In particular, he recommended I configure my systems with:
> 
> echo "4096 9000000 9000000" > /proc/sys/net/ipv4/tcp_rmem
> echo "4096 9000000 9000000" > /proc/sys/net/ipv4/tcp_wmem
> echo 9000000 > /proc/sys/net/core/wmem_max
> echo 9000000 > /proc/sys/net/core/rmem_max
> 
> Do you know if a similar kind of tuning has been done with the
> idd.unidata.ucar.edu systems and, if not, whether that could be
> considered?

Different nodes in the IDD cluster have been tuned to differing extents,
and we're willing to make changes.  We're familiar with the website Jeff
mentions and have made similar changes on yakov and the IDD cluster -- we've
also found that values that are too large can hurt performance.  I'm
concerned about the iperf throughput you're getting to yakov, since it's a
lightly loaded, very fast, well-tuned machine connected via jumbo-frame
gigabit networking.  This is our default Linux tuning setup:

echo 2500000 > /proc/sys/net/core/wmem_max
echo 2500000 > /proc/sys/net/core/rmem_max
echo "4096 5000000 5000000" > /proc/sys/net/ipv4/tcp_rmem
echo "4096 65536 5000000" > /proc/sys/net/ipv4/tcp_wmem

mike

Ticket Details
===================
Ticket ID: DDW-625612
Department: Support IDD
Priority: Normal
Status: Closed