[TIGGE #BGR-338705]: A problem when performing LDM tests

YangXin,

> After contacting CSTNET, we performed LDM tests in two phases. In phase I,
> we tested data exchange between two servers on either side of the CMA
> firewall; in phase II, we went to CSTNET to do similar tests. The data used
> for the tests were real TIGGE data.

Excellent!

> In phase I, one server was a real server with one dual-core 64-bit XEON CPU
> running RHEL4U4; the other was a virtual machine (VM) on my laptop computer,
> as I described in my previous mail. When the real server acted as the
> upstream and the VM acted as the downstream, the data transfer rate was
> about 50-60 Mbit/s, and I sometimes saw 70 Mbit/s (the tool I used to
> monitor the network performance is "iptraf", which is part of Red Hat
> Linux). In the reverse direction, with the VM acting as the upstream, the
> transfer rate averaged 10-15 Mbit/s and never exceeded 20 Mbit/s. It seems
> that the upstream server consumes more resources than the downstream (maybe
> because the TIGGE script invokes many pqinsert processes).

If the downstream LDM did no processing of the data by the
"pqact" utility, then it would execute fewer processes than
the upstream LDM.
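
For reference, downstream processing is what the "pqact" utility does: the
LDM server starts it via an EXEC line in "ldmd.conf", and entries in
"pqact.conf" tell it what to do with each received product.  A minimal
sketch, with a hypothetical feedtype, pattern, and output path (fields are
tab-separated):

    # ldmd.conf: start pqact so received products are processed
    EXEC	"pqact"

    # pqact.conf: write each matching TIGGE product to a file
    # (feedtype EXP, the pattern, and the path are examples only)
    EXP	^tigge_(.*)	FILE	-close	/data/tigge/\1

Without such entries (or without the EXEC line), the downstream LDM only
receives products into its queue, which is why it runs fewer processes.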

> Since the performance within CMA seemed good, I went on to the phase II
> tests. I tested only one direction: the real server in CMA acted as the
> upstream and my laptop VM acted as the downstream. In my VM, I saw a
> transfer rate of about 20 Mbit/s, with peaks over 30 Mbit/s. Since my VM
> has only 512 MB of RAM, I set the PQ size to 500 MB; therefore, I usually
> transferred no more than 500 MB of data in each test,

Since new data received will overwrite old data in the LDM queue, and since
you are not trying to do anything with the data after it is received in this
test, there is no reason to limit yourself to sending only 500 MB.  Also, it
is most likely that your VM session cannot memory-map a 500 MB queue, so
the VM LDM is probably memory-mapping one product in the queue at a time.
This would result in lower throughput to your VM LDM.
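
If you want the VM's LDM to map the whole queue at once, a smaller queue may
help.  A hedged sketch using "pqcreate" (stop the LDM first; the size and
path below are example values, not recommendations for your installation):

    # recreate the product queue small enough to be fully memory-mapped
    # inside a 512 MB VM
    pqcreate -c -s 400M /usr/local/ldm/var/queues/ldm.pq

You can then watch queue usage with "pqmon" while a test is running.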

> in these cases, almost all of the data reached the
> downstream.

Are you saying that some data did _not_ make it to the downstream?

> May I conclude that there are also no problems with the network
> between CSTNET and CMA?

The phase II test results are consistent with no problems
with the network between CMA and the location within CSTNET
where the test was made (see below, however).

> However, CuiYueming, one of the CSTNET technicians who helped me with the
> tests, told me that they do have restrictions on P2P applications (the
> limit is 1 Mbyte/s). I'm not sure whether or not the LDM acts like a P2P
> application?

A gateway or router cannot distinguish between a P2P
application and the LDM unless it has an internal database
of what port-usage constitutes a P2P application (which
is unlikely).  Both P2P applications and the LDM use TCP
connections, so an LDM connection is likely to look
exactly like a P2P connection.  
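
If it helps in the discussion with CSTNET, you can show that an active LDM
feed is just an ordinary established TCP connection on the LDM port by
listing the sockets on either host while a feed is running (a sketch; Linux
syntax):

    # show established TCP connections involving port 388
    netstat -tn | grep ':388'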

A limit on P2P connections of 1 *megabit* per second
per well-known port (e.g., port 388) is completely consistent
with the observed throughput of your LDM system before
using 'balance' to send the traffic on port 8080.
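
For the record, the 'balance' workaround is just a TCP relay: it accepts
connections on one port and forwards them to another host and port, so the
wide-area hop travels on a port that is not rate-limited.  A minimal sketch
with a hypothetical host name (the real topology depends on which side runs
the relay):

    # on a relay near the upstream LDM: accept connections on port 8080
    # and forward them to the LDM server listening on port 388
    balance 8080 upstream-ldm.example.org:388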

At this time, you should work with CuiYueming at CSTNET to
remove any restrictions or limitations on port 388 so that
meteorological data can be exchanged using that port.  I would
point out to CuiYueming that the LDM is _not_ a
P2P application: it moves meteorological data, not music, pictures,
video, etc.

Cheers,

Tom
****************************************************************************
Unidata User Support                                    UCAR Unidata Program
(303) 497-8642                                                 P.O. Box 3000
address@hidden                                   Boulder, CO 80307
----------------------------------------------------------------------------
Unidata HomePage                       http://www.unidata.ucar.edu
****************************************************************************


Ticket Details
===================
Ticket ID: BGR-338705
Department: Support IDD TIGGE
Priority: Normal
Status: Closed