
[LDM #MIW-275261]: some questions on hardware configs for our LDM server



Greg,

> Question.  You refer to a switch port facing us vs titan.  The graphs I
> sent are munin host based graphs.  They are of titan itself.

Are you saying that the plot is for a network interface card on Titan? If so, 
then that would make sense.

I'm used to seeing plots for switches and not hosts.

The bandwidth utilization indicated in the graphs *should* be sufficient to 
handle all your requested feeds *on average*. The high latencies you see in 
some of the feeds are likely due to surges in volume that cause products to be 
deleted from the product-queue before they can be used to reject duplicates.

This latency -- and the concomitant increase in bandwidth usage due to the 
non-rejection of duplicates -- can be reduced by 1) increasing the size of the 
queue (something for which more memory is needed); or 2) decreasing the maximum 
allowable latency parameter in the LDM registry (which is 3600 seconds by 
default). Titan's maximum hourly volume is about 60 GB/h and its queue is about 
12 GB, so the maximum latency parameter should be no greater than (12 GB)/(60 
GB/h) = 0.2 h = 720 s. Call it 600 s to allow for sub-hour surges and overhead. 
The value for this parameter can be fine-tuned via the "ldmadmin plotmetrics" 
command and examination of the time-series plot "Age of the Oldest Product".
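As a sanity check, the sizing arithmetic above can be sketched in a few lines (the 12 GB queue size and 60 GB/h peak volume are the figures quoted for Titan; substitute your own measurements):

```python
# Sketch: derive a safe max-latency value from queue size and peak volume.
# The queue must retain products at least as long as the maximum latency,
# or late duplicates can no longer be rejected.

QUEUE_SIZE_GB = 12.0         # Titan's product-queue size
PEAK_VOLUME_GB_PER_H = 60.0  # Titan's maximum hourly data volume

# Time for a full queue to turn over at the peak rate, in seconds.
max_latency_s = QUEUE_SIZE_GB / PEAK_VOLUME_GB_PER_H * 3600

print(max_latency_s)  # 720.0 seconds; round down to ~600 s for headroom
```

Rounding down to 600 s leaves margin for sub-hour surges and queue overhead, per the reasoning above.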

The maximum latency parameter can be adjusted by editing the file 
~ldm/etc/registry.xml and restarting the LDM.
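As a rough sketch of that procedure (the exact element name for the maximum-latency parameter in registry.xml varies by LDM version, so verify it against your installation before editing):

```shell
# Run as the LDM user. Stop the server before editing the registry,
# then restart so the new value takes effect.

ldmadmin stop                # stop the LDM
vi ~ldm/etc/registry.xml     # set the maximum-latency value to 600 (seconds)
ldmadmin start               # restart the LDM
```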

Please let us know if and when you do this.

Regards,
Steve Emmerson

Ticket Details
===================
Ticket ID: MIW-275261
Department: Support LDM
Priority: Normal
Status: Closed
===================
NOTE: All email exchanges with Unidata User Support are recorded in the Unidata 
inquiry tracking system and then made publicly available through the web.  If 
you do not want to have your interactions made available in this way, you must 
let us know in each email you send to us.