Re: [ldm-users] high memory

We've considered ReiserFS off and on, but the lack of continued development has slowed our adoption. The last time we did an exhaustive evaluation, ext3 was new and ext4 wasn't even a pipe-dream. XFS we've lived with, and we've benefited from some of its evolution. As for fscking an ext2 partition... I'd almost consider buying a new disk (I mean, really, the technology does change that fast now) and reformatting, so I agree with you.

We do not see the unlinking problem because our datasets are not that volatile... We are saving a LOT of data, so we don't unlink that often.

I do agree with your first statement completely. It's in the list of things we don't really talk about on the 'Net, along with religion and politics. Oh! 'scuse me! Favorite file systems combine religion and politics!

gerry

Stonie R. Cooper wrote:
A favorite file system is like a favorite beer or color. It is really a matter of taste.

Kevin had asked what was wrong with XFS . . . and for volatile datasets, like met data, it can have performance issues in the low-level unlink step. Some of the links below show that.
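
If you want to see that cost on your own hardware, here is a crude timing sketch (the test path is a placeholder; put the directory on the filesystem you actually care about):

    mkdir /data/unlinktest && cd /data/unlinktest
    # create a pile of small files, like a met-data directory
    for i in $(seq 1 100000); do echo x > f$i; done
    cd ..
    # time the unlink-heavy phase
    time rm -rf unlinktest

Numbers vary wildly with hardware, mount options, and kernel version, so treat it as a rough comparison between filesystems, not an absolute measure.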

Tyler's ext2, also as illustrated in the benchmarks, is relatively fast . . . until you have to fsck a 1 TB partition. But as he alluded to, it may be just as well to reformat as to fsck.
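
For anyone weighing that trade-off, it comes down to a forced check versus a fresh mkfs plus restore (the device name below is a placeholder, substitute your own):

    # forced, read-only check of an ext2 partition; can take hours on 1 TB
    time fsck.ext2 -f -n /dev/sdb1

    # versus reformat: mkfs itself finishes in minutes; restoring the data is the real cost
    mkfs.ext2 /dev/sdb1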

And I know the dude killed his wife, but ReiserFS is still really hard to beat. Like ext2, it is relatively simple in design (albeit a completely different design), with the benefits of journaling. The problem for most on this list is that it is just not normally part of Fedora/CentOS/RHEL. I go out of my way to use it everywhere except the boot partition . . . for trouble-free, speedy disk I/O.

The other thing to consider is the extra layer of obfuscation added by LVM. RHEL/CentOS like to set up logical volume groups via LVM during an automated install. Unless you are loading a server with hot-swap bays and growable RAID . . . it is an additional waste of resources and an added administrative burden.
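
If you are not sure whether an automated install put LVM under your filesystems, it is easy to check:

    # show the block-device stack; LVM layers show up with TYPE 'lvm'
    lsblk
    # or ask LVM directly
    pvs; vgs; lvs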

Anyway, some benchmarks for those who like that sort of thing:

http://www.t2-project.org/zine/1/

and its follow-up:

http://www.t2-project.org/zine/4/

And another, in case you haven't hit information overload yet:

http://www.phoronix.com/scan.php?page=article&item=ext4_btrfs_nilfs2&num=1

Stonie

On 10/25/2010 09:13 PM, daryl herzmann wrote:
On Mon, 25 Oct 2010, Jeff Lake - Admin wrote:

What I noticed first with this new system was a horrible lag in
writing the NEXRAD3 and LIGHTNING files to disk. Watching the incoming
feed (ldmadmin watch -f NEXRAD3), the file time versus my receive time
showed the expected 45 sec to 90 sec delay, but the write to disk was at
least 4 minutes, sometimes 6+, later...

How are you dividing up the write tasks with LDM? Each pqact only has
32 pipes, so if you are attempting to push too many products through
those available pipes, there will be lags.
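
One common way to spread the load (sketched here against a stock LDM layout, with made-up per-feed config file names) is to run several pqact instances from ldmd.conf, each restricted to one feedtype with -f:

    # in etc/ldmd.conf: one pqact per busy feed instead of a single catch-all
    EXEC "pqact -f NEXRAD3 etc/pqact.nexrad3.conf"
    EXEC "pqact -f LIGHTNING etc/pqact.lightning.conf"

Each instance then gets its own 32 pipes, and the patterns for each feed move into that feed's own pqact.conf file.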

Install system monitoring tools like 'dstat' and 'sysstat', and monitor
the disk I/O rates before you do something drastic like reformatting or
changing filesystems.
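
For example, once those packages are installed, something like this will show whether the disks are actually saturated (the intervals are arbitrary):

    # disk throughput, refreshed every 5 seconds
    dstat -d 5
    # extended per-device stats from sysstat (look at %util and await)
    iostat -dxk 5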

When I have seen problems like this before, it is almost always an LDM
configuration issue: not enough pqacts running.

daryl

_______________________________________________
ldm-users mailing list
ldm-users@xxxxxxxxxxxxxxxx
For list information or to unsubscribe, visit: http://www.unidata.ucar.edu/mailing_lists/

--
Gerry Creager -- gerry.creager@xxxxxxxx
Texas Mesonet -- AATLT, Texas A&M University
Cell: 979.229.5301 Office: 979.458.4020 FAX: 979.862.3983
Office: 1700 Research Parkway Ste 160, TAMU, College Station, TX 77843


