Art, Pete, et al.,
With the advent of OpenSolaris, Solaris, and therefore ZFS, is no longer
proprietary (that doesn't cover CDE and a few other components). I have zero
interest (and zero capability) in building or tweaking my own OS, so I just
install Solaris 10, but you can get the complete source code to Solaris,
including ZFS, and roll your own.
ZFS is intended for use with JBOD; it is really intended to replace hardware
RAID, though you can use it on top of hardware RAID. Tests have shown that, in
general, ZFS outperforms HW RAID. It is RAM hungry and works best on 64-bit
machines. Currently you cannot put a boot disk on ZFS; it's only for data
disks. That capability is under testing right now and might be in the latest
OpenSolaris.
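Just to illustrate how little setup the JBOD approach takes, here's a rough
sketch (the pool name and device names are hypothetical examples; check the
output of `format` for your own controllers):

```shell
# Create a raidz (single-parity, RAID-5-like) pool from four bare disks.
# "datapool" and the c1tXd0 device names are made-up examples.
zpool create datapool raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0

# Create a filesystem in the pool; ZFS mounts it automatically
# (by default at /datapool/archive) with no mkfs, fstab, or fsck steps.
zfs create datapool/archive

# Check pool health and capacity.
zpool status datapool
zpool list datapool
```

That's the whole thing: no volume manager layer, no separate newfs step.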
I use ZFS here at CSBF on all our machines. I have stopped trying to be any
sort of Solaris advocate, but if you really need this thing to be reliable then
Solaris+ZFS is the only way to go, IMHO. I frankly have never understood the
business model of paying engineers to develop something (ZFS) only to give it
away, but I suppose ZFS should be running in some form on Linux now, although I
would not imagine it's ready for prime time. If it were me, I'd just download
and install Solaris 10 Update 4 for free and buy a $240-per-year support
contract in case you ever had an issue; otherwise just download the free driver
and security patches and have a go. Just my two cents... no more Solaris talk
from me.
From: ldm-users-bounces@xxxxxxxxxxxxxxxx on behalf of Arthur A. Person
Sent: Mon 10/8/2007 12:10 PM
To: Pete Pokrandt
Cc: ldm-users@xxxxxxxxxxxxxxxx; support@xxxxxxxxxxxxxxxx
Subject: Re: [ldm-users] Best linux file system for data on large raid5 array?
I've read good things about Sun's ZFS... it doesn't ever have to be fsck'd,
which, to me, is the scariest thing about n-TB systems. I'm getting ready
to try one of these in real life, so I can't say anything about it from
experience. Its downsides might be that it's proprietary (you have to run
Solaris) and that it seems to want to run its own software RAID... I don't know
whether it would make sense to run it on top of a hardware RAID system or not.
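For what it's worth, the closest ZFS gets to a periodic fsck is an online
scrub, which verifies every block's checksum while the pool stays mounted.
A minimal sketch (the pool name "datapool" is a hypothetical example):

```shell
# Walk every block in the pool and verify its checksum while the
# filesystems remain online; errors are repaired from redundancy
# (mirror or raidz) where possible.
zpool scrub datapool

# Scrub progress and any errors found show up in the status report.
zpool status datapool
```

So there's no offline repair pass to schedule, and no multi-hour fsck after
a crash.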
On Mon, 8 Oct 2007, Pete Pokrandt wrote:
> What filesystem type are people using for data storage on linux?
> I have a 5+ TB archive that's sitting on a hardware RAID5, using
> reiserfs (reiserfsprogs-3.6.19 on CentOS), and just recently I started
> getting hard machine crashes when trying to write to that file system. I
> did a reiserfsck --rebuild-tree on it (since a --check reported that I
> needed to) and now about 1/5 of the data that was on it is either gone
> or in the lost+found directory, named by inode numbers.
> This is the second time now that I've had a reiserfs file system go
> kablooey on me.
> I'm considering toasting the whole thing and rebuilding with a different
> file system type, but I'm not sure what is most reliable/best
> performance for this kind of usage. It's a combination of lots of large
> files (i.e. GRIB/GRIB2 model data files and gempak of the same) and also
> lots of smaller files, i.e. nexrad level 3, lots of small files in a
> bunch of directories.
> I've read that ext3 (linux default) is extremely stable but can be slow.
> Other choices would be jfs, xfs, others??
> Any suggestions or experiences would be appreciated.
> ^ Pete Pokrandt V 1447 AOSS Bldg 1225 W Dayton St^
> ^ Systems Programmer V Madison, WI 53706 ^
> ^ V poker@xxxxxxxxxxxx ^
> ^ Dept of Atmos & Oceanic Sciences V (608) 262-3086 (Phone/voicemail) ^
> ^ University of Wisconsin-Madison V (608) 262-0166 (Fax) ^
> ldm-users mailing list
> For list information or to unsubscribe, visit:
Arthur A. Person
Research Assistant, System Administrator
Penn State Department of Meteorology
email: person@xxxxxxxxxxxxx, phone: 814-863-1563