
[LDM #MIW-275261]: some questions on hardware configs for our LDM server



Hi Greg (with CC to Sen),

Two of us (Steve Emmerson and I) have been working with Sen to get
a better understanding of the LDM data feeds and processing on
titan.met.sjsu.edu, and to do some tuning to help titan function better.
A lot has been learned over the past week, and Sen asked that we update
you on what has been found and changed, and the effect it has had on
titan's performance.

First, what has been done:

- increase the size of the LDM queue from 500 MB (which is the default)
  to 12 GB

  The first thing that was found on titan was that the LDM queue was
  not sized properly to handle the volume of data that titan
  was/is REQUESTing.

  Our "rule of thumb" is to try and size one's LDM queue so that it
  will hold about 1 hour of received data.  The principle reason for
  this is that the LDM can reject newly received products if they
  have the same MD5 signature as ones already in the LDM queue.
  Rejection of duplicate products decreases the number/volume of
  products that get processed by LDM pattern-action file actions.

  A quick look at the real time stats that titan has been sending back
  to us (links are included below) showed that the volume of
  data being written into titan's LDM queue was peaking at over 100 GB
  per hour.  This anomalous volume was a BIG red flag for us since the
  feeds being REQUESTed do not contain this much data.  The cause of
  the anomalous volume of data being inserted into titan's LDM queue
  was that duplicate products were not being rejected as the product
  residency time in the LDM queue was on the order of a few seconds,
  not the recommended one hour.  The extremely low residency time meant
  that some products that had been received were never processed at all,
  and that a LOT of products were received and processed more than once.
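
  For reference, resizing the queue is typically done by changing the
  /queue/size parameter in the LDM registry and then remaking the queue.
  A minimal sketch follows (the exact steps used on titan may have
  differed slightly, and size/suffix handling can vary between LDM
  versions):

    # as user 'ldm'
    ldmadmin stop
    regutil -s 12G /queue/size   # record the new queue size in the registry
    ldmadmin delqueue            # delete the old, too-small queue
    ldmadmin mkqueue             # create a new queue at the new size
    ldmadmin start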

- we find it most useful for sites running the LDM to monitor system
  performance by running the LDM utility 'ldmadmin addmetrics' once per
  minute from a cron job (a sample crontab sketch is given at the end
  of this item)

  Since this was not being done on titan, it was added.  The log file
  used for performance monitoring is ~ldm/logs/metrics.log.  This
  file will be "rotated" (renamed to metrics.log.1, etc.) periodically
  by another action in 'ldm's crontab.

  One of the things that the metrics.log file contains is the age of
  the oldest product in the LDM queue.  It was by turning on generation
  of the metrics.log file that the extremely short residency time in
  the LDM queue was found.  Again, when metrics monitoring was first
  turned on, the residency time (which is indicated by the age of the
  oldest product in the queue) was found to be only a few seconds, not
  the recommended one hour.

  Steve's hunch and mine was that the anomalously high volume of data
  flowing into titan's LDM queue was a result of products being received
  and inserted into the LDM queue multiple times.  This can easily happen
  when there are redundant REQUESTs for feeds.

  In order to increase product residency times, titan's LDM queue
  was increased from 500 MB to 12 GB.  It was felt that making the queue
  any larger would interfere with the processing that is being done on
  titan since titan only has 24 GB of RAM.

  After increasing the LDM queue size, we saw a significant decrease
  in the volume of received data being reported in the real time stats
  that titan is sending to us.  We interpret this as an indication that
  rejection of products received more than once is now working as designed.
  The decrease in data being received has also resulted in a decrease in
  the amount of processing (decoding) that titan is doing, since it is no
  longer running the same actions on duplicate data.
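
  For reference, the crontab entries involved look roughly like the
  following (run 'crontab -e' as user 'ldm' to view or edit them; the
  paths assume the standard ~ldm layout, so adjust as needed):

    # append one line of system/LDM metrics to ~ldm/logs/metrics.log
    # every minute
    * * * * * bin/ldmadmin addmetrics

    # rotate the metrics files periodically (here, monthly)
    0 0 1 * * bin/ldmadmin newmetrics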

- eliminate duplicate LDM feed REQUESTs while making sure that all
  of the data that was being REQUESTed is still being REQUESTed

  It turns out that there was quite a bit of duplication of REQUESTs
  for a number of the data feeds that titan was asking for.  The duplicated
  feed REQUESTs were eliminated, and this resulted in a further decrease
  not only in the volume of data being received by titan, but also in the
  amount of LDM-related processing that titan is doing (e.g., decoding
  data into GEMPAK-compatible formats, etc.).
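
  To illustrate the kind of change involved (the feed and upstream host
  names below are placeholders, not titan's actual entries), duplicate
  REQUESTs in ~ldm/etc/ldmd.conf look like this:

    # before: the same feed/pattern REQUESTed from two upstreams;
    # with a too-small queue, the second copy of each product was
    # not rejected as a duplicate and so was processed twice
    REQUEST NGRID ".*" idd.upstream-a.example.edu
    REQUEST NGRID ".*" idd.upstream-b.example.edu

    # after: a single REQUEST per feed
    REQUEST NGRID ".*" idd.upstream-a.example.edu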

- consolidated feed REQUESTs

  After letting the LDM run for a while with the larger queue and the
  de-duplicated feed REQUESTs, and while monitoring the reported
  reception latencies for the various feeds, efforts turned to
  consolidating feed REQUESTs where consolidation made sense.  This was
  done to decrease the number of LDM processes that run continuously.

  Keeping down the number of feed REQUESTs that a system makes will
  help to decrease the LDM-related processing load as there will be
  fewer REQUESTing processes.  At the same time, splitting feed
  REQUESTs for high volume feeds, like what is being done for CONDUIT,
  has the beneficial effect of minimizing reception latencies.
  Exactly how one should consolidate some feed REQUESTs while splitting
  others depends entirely on the number of products and the volumes
  of the feeds being REQUESTed.  Smart splitting of feed REQUESTs
  also depends on the LDM/IDD Product IDs for the products in
  a feed.  Each CONDUIT product, for instance, has a sequence number
  as the last value in the Product ID.  This sequence number makes it
  easy to split the feed in a way that all data is still REQUESTed
  (see the sketch at the end of this item).  The HRRR products in the
  FSL2 feed that originates at NOAA/GSD, on the other hand, are not
  easily split into multiple, disjoint REQUESTs.  The other feeds that
  suffer from the same problem as FSL2 are NGRID, FNMOC and HDS, and
  this is especially a problem for NGRID and FNMOC as they are both
  high volume feeds with lots of products.  NEXRAD2, which contains
  NEXRAD Level 2 volume scan chunks, is a bit easier to split, as is
  NEXRAD3, which contains NEXRAD Level 3 products.  NEXRAD2 is one of
  the higher volume feeds in the IDD, and NEXRAD3 is the feed with the
  largest number of products per hour.
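
  As an illustration of the sequence-number split that works well for
  CONDUIT (the upstream host name is a placeholder; the actual entries
  on titan differ), a single REQUEST such as

    REQUEST CONDUIT ".*" idd.upstream.example.edu

  can be replaced by five disjoint REQUESTs keyed on the last digit of
  the sequence number, each serviced by its own downstream LDM process:

    REQUEST CONDUIT "[09]$" idd.upstream.example.edu
    REQUEST CONDUIT "[18]$" idd.upstream.example.edu
    REQUEST CONDUIT "[27]$" idd.upstream.example.edu
    REQUEST CONDUIT "[36]$" idd.upstream.example.edu
    REQUEST CONDUIT "[45]$" idd.upstream.example.edu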

- installed the latest LDM release, ldm-6.13.6

  The latest release of the LDM was installed to make sure that none
  of what is being seen is a result of an older and possibly less
  efficient LDM.

  It was not expected that this would result in significant
  improvements in terms of getting and processing data, and it did not.
  Nonetheless, it is always a good idea to be running the latest
  version of the LDM since newer releases typically are more
  efficient and often have useful new features.
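
  For reference, installing the LDM generally follows the procedure in
  the LDM installation instructions on our website; roughly (site-specific
  configure options omitted):

    # as user 'ldm', with the source tarball downloaded to ~ldm
    gunzip -c ldm-6.13.6.tar.gz | pax -r '-s:/:/src/:'
    cd ldm-6.13.6/src
    ./configure && make install
    sudo make root-actions    # performs the steps that require root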

Where titan stands now:

- titan is now processing a LOT more of the data that it is
  REQUESTing than it was before the changes outlined above

  Also, the processing impact on the file system as measured
  by I/O wait has decreased substantially.  This is a very good thing
  since the repeated processing of data that had already been
  received was likely causing the performance problems (very high
  I/O waits) that you reported in a previous email.

- splitting high volume feeds to minimize their product latencies

  An attempt was made to split the single FSL2 HRRR feed REQUEST into
  thirds with the hope that this would help reduce the receipt latency
  for those HRRR products.  The latencies for the FSL2 HRRR products have
  decreased a bit, but not as substantially as the latencies for the
  CONDUIT feed.  The reason for this is that the Product IDs of the FSL2
  HRRR products do not lend themselves to an easy split of the feed
  REQUEST(s).  In practice this means that one needs to have
  sufficient network bandwidth to receive the high volume FSL2
  feed in a single feed REQUEST.
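
  For anyone wanting to experiment with splits, a convenient way to
  examine the Product IDs in a feed without actually transferring the
  data is the LDM 'notifyme' utility (the upstream host name below is
  a placeholder):

    # list the FSL2 Product IDs the upstream is willing to send,
    # starting one hour back in its queue
    notifyme -vl- -f FSL2 -h idd.upstream.example.edu -o 3600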

  The latencies for NGRID products remain high enough that not
  all of the NGRID products are actually being received: their
  residency time in the upstream LDM queue(s) is shorter than the
  time it takes for the products to be delivered to titan, so some
  products age out of the upstream queue before they can be sent.
  The problem with the NGRID feed is exactly analogous to that
  with the FSL2 feed.

  FNMOC latencies also remain high, but it appears that all of the
  products are received and processed for most hours.

  NEXRAD2 latencies also remain very high. In fact, a lot of the
  time NEXRAD2 products are not being received since the latencies
  exceed their residency time in the upstream LDM's queue. This
  is a problem that may be mitigated (but possibly not solved)
  by smart splitting of the NEXRAD2 feed REQUEST into multiple,
  disjoint REQUESTs.
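
  A hypothetical sketch of such a split, assuming the usual
  "L2-BZIP2/<station>/..." form of NEXRAD2 Product IDs (the upstream
  host name is a placeholder, and the character classes would need to
  be chosen so that the patterns are disjoint and together match every
  station being REQUESTed):

    REQUEST NEXRAD2 "/[KPT][A-F]" idd.upstream.example.edu
    REQUEST NEXRAD2 "/[KPT][G-Q]" idd.upstream.example.edu
    REQUEST NEXRAD2 "/[KPT][R-Z]" idd.upstream.example.edu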

Observations:

- After the experimentation with consolidating and splitting feed
  REQUESTs, it is our opinion that the principal cause of the inability
  to receive all of the data being REQUESTed is the network bandwidth
  available to titan

  We say this after reviewing a couple of the plots that 'ldmadmin
  plotmetrics' produces from the metrics being gathered.  We encourage
  you to run 'ldmadmin plotmetrics' as user 'ldm' to get an idea of the
  variety of metrics that are now being gathered once per minute.

  In particular, we encourage you to look at:

  - the plot that shows the time series of CPU modes and I/O wait

    This plot shows that titan is idle much of the time, so the
    machine is not the cause of data not being processed.

  - the plot that shows the time series of CPU load

    NB: titan has 24 CPUs, so load averages of up to 10 should not be
    interpreted as being overly high.  Our interpretation of this plot
    is that titan does get busy, but not cripplingly so.

  Our view is also based on a careful review of the real time stats
  latencies being reported by titan back to us. You can generate current
  plots of latencies for all feeds being requested online at:

  Unidata HomePage
  http://www.unidata.ucar.edu

    Data -> IDD Operational Status
    http://rtstats.unidata.ucar.edu/rtstats/

      Statistics by Host
      http://rtstats.unidata.ucar.edu/cgi-bin/rtstats/siteindex

        titan.met.sjsu.edu [6.13.6]
        http://rtstats.unidata.ucar.edu/cgi-bin/rtstats/siteindex?titan.met.sjsu.edu

- further increasing the size of the LDM queue could result in even
  fewer products being redundantly received (i.e., more duplicates
  being rejected), and this would, in turn, reduce the processing
  load even more

  NB: The only way that the LDM queue could/should be increased to
  something like the maximum amount of data being received in an
  hour is by increasing the amount of RAM installed in titan.

  The following cumulative volume summary listing from titan shows what
  is currently being received by titan:

http://rtstats.unidata.ucar.edu/cgi-bin/rtstats/rtstats_summary_volume?titan.met.sjsu.edu

Data Volume Summary for titan.met.sjsu.edu

Maximum hourly volume  60026.065 M bytes/hour
Average hourly volume  34232.870 M bytes/hour

Average products per hour     387841 prods/hour

Feed                    Average  [% of total]      Maximum     Products
                  (M byte/hour)              (M byte/hour)  (number/hour)
FSL2                  11177.067    [ 32.650%]    17054.068    15580.956
CONDUIT                7957.826    [ 23.246%]    21930.253    89129.044
NGRID                  4936.321    [ 14.420%]    12914.648    32253.000
NEXRAD2                3070.162    [  8.968%]    11036.379    33935.333
NEXRAD3                2916.793    [  8.520%]     3480.750   122637.156
FNMOC                  1968.504    [  5.750%]    10018.393     3435.956
HDS                    1190.873    [  3.479%]     1848.008    43404.244
FSL3                    608.333    [  1.777%]      644.139       52.911
FNEXRAD                 130.943    [  0.383%]      155.958      104.156
NIMAGE                   98.102    [  0.287%]      146.058      124.044
UNIWISC                  96.527    [  0.282%]      143.653       49.422
IDS|DDPLUS               76.936    [  0.225%]       95.393    47044.200
EXP                       4.067    [  0.012%]       17.561       31.400
LIGHTNING                 0.416    [  0.001%]        0.917       59.111

  If there were enough memory (e.g., 96 GB or more), we would recommend
  that the LDM queue size be increased to about 60 GB.  If it is only
  possible to increase titan's RAM to 64 GB, the LDM queue size could be
  increased to 35 GB.  Either of these would likely have beneficial
  effects.  Our motto is "the more RAM the better" :-)

- it was verified that the Ethernet interface (em1) on titan is running at
  1 Gbps (1000 Mbps) (ethtool em1 | grep Speed)

  This was checked since we have seen some institutions running their
  Ethernet interface at 100 Mbps, and that is not fast enough to get
  all of the data desired.  The fact that the interface is already
  running at 1 Gbps tells us that it is not the bottleneck: the
  inability to get all of the FSL2 HRRR data in a more timely manner
  in a single REQUEST would not be solved by installing a higher speed
  (e.g., 10 Gbps) Ethernet interface.  The limiting factor is more
  likely the network bandwidth available to titan on the path between
  it and its upstream hosts.
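
  A back-of-the-envelope check using the maximum hourly volume from the
  table above shows why a faster interface would not help:

    # 60026 Mbyte/hour expressed as an average bit rate
    echo "scale=1; 60026 * 8 / 3600" | bc
    # -> 133.3 (Mbit/s), i.e., well under the 1 Gbps at which em1 runs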

Our recommendations:

- install more RAM in titan and then

- further increase the LDM queue size

  Exactly how much will depend on how much more RAM can be added to
  titan.

- meet with SJSU network folks to see if there is any way more network
  resources could be made available to titan

  If the available network bandwidth is limited, then we recommend:

  - reducing the set of data being REQUESTed on titan

    Things to consider are:

    Are all of the products in the FSL2 HRRR feed being used?

    Are all of the products in the CONDUIT feed being used?

    Are all of the products in the NEXRAD2 and NEXRAD3 feeds being used?

    Are all of the products in the NGRID, FNMOC and HDS feeds being
    used?

  The volume of data in the other feeds being REQUESTed is relatively
  small, so there is not much to be gained by restricting what is
  REQUESTed in those feeds.
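
  As a purely hypothetical illustration of how a REQUEST can be narrowed
  with a Product ID pattern rather than dropped entirely (the upstream
  host name, product types and radar sites below are placeholders, and
  the pattern assumes the usual "/p<product><3-character site>"
  convention in NEXRAD3 Product IDs):

    # everything in NEXRAD3:
    #REQUEST NEXRAD3 ".*" idd.upstream.example.edu

    # only a few Level 3 product types for a few nearby radars:
    REQUEST NEXRAD3 "/p(N0Q|N0U|NCR)(MUX|DAX|HNX)" idd.upstream.example.edu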

Final comments:

- we realize that the information contained above is pretty dense and
  so may be hard to wrap one's mind around all at once

  Please send any/all questions that occur to you, and we will try to
  be more clear.

- titan is a decent machine

  titan can be used effectively by itself or in combination with new
  equipment to be purchased under the Unidata equipment grant that SJSU
  was awarded.

- no "heroic" effort is needed to configure any new machine to be able
  to handle the data that is desired

  By "heroic", we mean that switching to use of SSDs is probably not needed.

- one of the best expenditures of money when purchasing new or upgrading
  old equipment is to buy more RAM

  More RAM in titan would allow for the LDM queue size to be increased,
  and this should have the beneficial effect of decreasing the volume
  of data to be processed (by LDM rejection of duplicate products), and
  this, in turn, will lower the impact (e.g., I/O wait) on the file system.

  As a comparison, the machines we are running here in Unidata that receive
  and process ALL of the data available in the IDD are configured with a
  minimum of 192 GB of RAM.  Those machines are also running the ZFS
  file system, but we believe that current implementations of XFS should
  also work very well.


Cheers,

Tom
--
****************************************************************************
Unidata User Support                                    UCAR Unidata Program
(303) 497-8642                                                 P.O. Box 3000
address@hidden                                   Boulder, CO 80307
----------------------------------------------------------------------------
Unidata HomePage                       http://www.unidata.ucar.edu
****************************************************************************


Ticket Details
===================
Ticket ID: MIW-275261
Department: Support LDM
Priority: Normal
Status: Closed
===================
NOTE: All email exchanges with Unidata User Support are recorded in the Unidata 
inquiry tracking system and then made publicly available through the web.  If 
you do not want to have your interactions made available in this way, you must 
let us know in each email you send to us.