An LDM cluster is a configuration of multiple computers running multiple LDM instances that appear to be a single computer on which a single LDM is running. Because multiple computers are involved, the effective performance of the cluster (i.e., the number of connections that can be satisfied) can be much greater than that of a single computer (provided that network bandwidth is not a limiting factor). The reliability of a cluster can also be greater than that of a single computer.
The structure and operation of an LDM cluster depend upon the operating systems involved as well as the software package that's used to cluster the computers. Therefore, this webpage only explains the experimental LDM cluster in use at the UPC. Hopefully, this description will help you configure your own cluster.
The architecture of the cluster involves a "director" computer and multiple "backend" computers. The director computer runs Linux and the IP Virtual Server (IPVS) software. Each backend computer runs a default LDM installation on an arbitrary operating system and doesn't know that it's part of a cluster.
There is a single IP address by which the cluster is known to external hosts. This is the "cluster IP address". All IP packets destined for the cluster IP address go to the director computer.
The IPVS software on the director computer is responsible for forwarding incoming packets received on the LDM port to the backend computers. If a downstream host has an open TCP connection to the cluster IP address or has had one within the last minute, then the software forwards all incoming TCP packets from that host to the same backend computer. Otherwise, an incoming TCP packet is forwarded to the least-loaded backend computer.
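The forwarding behavior described above could be expressed with the standard `ipvsadm` utility roughly as follows. This is a sketch, not the UPC's actual configuration: the cluster and backend IP addresses are hypothetical, and the forwarding method shown (direct routing) is one of several that IPVS supports. 388 is the registered LDM port.

```shell
# Create a virtual service on the cluster IP at the LDM port.
# -s lc : "least-connection" scheduling, i.e. pick the least-loaded backend
# -p 60 : keep a client on the same backend for 60 seconds after its
#         last packet, matching the one-minute persistence described above
ipvsadm -A -t 192.168.0.10:388 -s lc -p 60

# Register two hypothetical backend computers.
# -g selects direct routing ("gatewaying"), in which the director forwards
# packets unmodified and the backends reply to clients directly.
ipvsadm -a -t 192.168.0.10:388 -r 192.168.0.21:388 -g
ipvsadm -a -t 192.168.0.10:388 -r 192.168.0.22:388 -g
```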
The IPVS software on the director computer maintains a pool of backend computers by monitoring their availability. Should one go offline, then it is removed from the pool and packets will no longer be forwarded to it. This allows for easy maintenance of the backend computers. Conversely, should a new backend computer go online, then it is added to the pool of available backend computers.
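In practice the availability monitoring described above is usually performed not by the kernel IPVS code itself but by a companion daemon on the director, such as keepalived or ldirectord, which probes each backend and adds or removes it from the IPVS pool. A hypothetical keepalived fragment for this scheme (addresses are placeholders) might look like:

```
virtual_server 192.168.0.10 388 {
    delay_loop 6               # probe backends every 6 seconds
    lb_algo lc                 # least-connection scheduling
    lb_kind DR                 # direct routing
    persistence_timeout 60     # one-minute client persistence
    protocol TCP

    real_server 192.168.0.21 388 {
        TCP_CHECK {            # remove backend if its LDM port stops accepting
            connect_timeout 3
        }
    }
}
```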
Each backend computer has two IP addresses corresponding to two network interfaces. One of the IP addresses is specific to that computer; the other IP address is the cluster IP address. The backend computer's cluster IP address is not broadcast via the Address Resolution Protocol (ARP), so it is not known by any routers. The LDM on each backend computer is told to use the computer-specific IP address for requesting data. The network interfaces are either real (implemented in hardware) or virtual (implemented in software).
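On a Linux backend, the non-ARPing cluster address described above could be set up along these lines. This is a sketch under the assumption of a Linux backend using a loopback alias as the virtual interface; the address is a placeholder.

```shell
# Put the cluster IP on the loopback interface with a /32 mask, so the
# backend accepts packets addressed to it but never routes or advertises it.
ip addr add 192.168.0.10/32 dev lo

# Suppress ARP for the cluster IP so that routers learn only the
# director's hardware address for it:
# arp_ignore=1   : answer ARP queries only for addresses configured on
#                  the interface the query arrived on
# arp_announce=2 : always use the best matching local address as the
#                  source in outgoing ARP requests
sysctl -w net.ipv4.conf.all.arp_ignore=1
sysctl -w net.ipv4.conf.all.arp_announce=2
```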
The LDM systems on the backend computers each have their own product-queue and are individually responsible for requesting and receiving data-products.
A newsletter article on the LDM cluster at the Unidata Program Center can be found at http://www.unidata.ucar.edu/newsletter/2005june/clusterpiece.htm.