
[LDM #AFT-567406]: GOES-17 data filing question



Hi Mike,

Along one of the (many) other lines of investigation:

Can you send us the output of:

<as 'ldm' on vulcan>
ldmadmin config

Another thing:

Are you gathering system metrics using 'ldmadmin addmetrics' run out
of cron?  If not, here is how we gather metrics on our systems:

#
# LDM metrics gathering
#
* * * * * bin/ldmadmin addmetrics > /dev/null
0 0 1 * * bin/ldmadmin newmetrics > /dev/null

When 'gnuplot' is installed on the local system, and when metrics
are being gathered, one can produce a set of plots that detail how
the local LDM and system are doing.  What we would be looking
for are the plots that show the age of the oldest product,
system load, and CPU context switching.  The plot of the age of
the oldest product in the queue is very useful in judging whether
the local LDM queue is large enough.
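
Once metrics have been accumulating, the plots can be generated
with 'ldmadmin plotmetrics' (this is a sketch; check the ldmadmin
man page on your LDM version for the exact options):

# As the 'ldm' user, plot the gathered metrics
# (requires gnuplot on the local system)
ldmadmin plotmetrics

# The -b and -e options can restrict the plotted time range,
# if supported by your LDM release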

Another thing that we ran into on one of our public-facing
data servers: we found that we needed to move the LDM queue
to a RAM disk.  Before the move, the reported latencies had
gotten to be some of the worst we had seen even though the
relays from which it was REQUESTing data were local; after
the move, the latencies dropped to zero.  We made this move
after running 'iotop' (as root) and noticing that at least one
of the LDM server instances (ldmd) was showing over 99% I/O
values.  After moving the queue to the RAM disk, the I/O values
dropped back to the very low levels that had been the norm.
While we can't say that we completely understand why the move
was needed, our speculation has centered on either some
unexpected problem with the disk (we use ZFS, so we don't think
this was the case) or on an OS upgrade that included firmware
to patch Intel CPU bugs.  Our canvassing of comments by others
showed that performance could be degraded by up to 30%, and
this roughly fit the degradation that we observed.
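
For reference, the queue move itself went roughly like the
following.  This is a sketch only: the tmpfs mount point, queue
size, and queue file name here are examples, not what we actually
used, so adjust them for your system.

# Stop the LDM first (as 'ldm'):
ldmadmin stop

# Create a RAM disk large enough to hold the queue (as root):
mkdir -p /ramdisk
mount -t tmpfs -o size=4G tmpfs /ramdisk

# Point the LDM registry at the new queue location (as 'ldm'):
regutil -s /ramdisk/ldm.pq /queue/path

# Recreate the queue in its new location and restart:
ldmadmin mkqueue
ldmadmin start

Note that a tmpfs file system is emptied on reboot, so the queue
has to be recreated (ldmadmin mkqueue) at boot time before the
LDM is started.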

As I commented during our phone conversation, there are _so_
many things that could be causing what you are seeing :-(

That's it for this morning...

Cheers,

Tom
--
****************************************************************************
Unidata User Support                                    UCAR Unidata Program
(303) 497-8642                                                 P.O. Box 3000
address@hidden                                   Boulder, CO 80307
----------------------------------------------------------------------------
Unidata HomePage                       http://www.unidata.ucar.edu
****************************************************************************


Ticket Details
===================
Ticket ID: AFT-567406
Department: Support LDM
Priority: Normal
Status: Closed
===================
NOTE: All email exchanges with Unidata User Support are recorded in the Unidata 
inquiry tracking system and then made publicly available through the web.  If 
you do not want to have your interactions made available in this way, you must 
let us know in each email you send to us.