Re: [ldm-users] raid / Re: Fedora 7 redux: BUSTED!

A colleague here has had some problems with the Areca controllers when they were exposed, during operation, to temperatures in excess of 140 deg F. Be forewarned that if your data center gets that hot, your Areca card may die and take your hard disks with it, or the other way around. Yeah, that happened. When temps got back under control and he went in to survey the damage, the Areca card went into an auto-rebuild mode and, as with most good home renovations, performed demolition first.

RAID 6 is the way we're going, simply because we want to survive the two-disk failures we've seen in the past (twice that I can think of).

gerry

Rob Cermak wrote:
RAID 1?  I like Gerry's idea of moving from RAID 5 to 6.

One hardware RAID driver that recently moved into the mainline kernel is the Areca
driver.  We have this card in a 16-drive SATA array, currently broken
up into 4 RAID volumes.

We had a 750 GB disk die.  The affected volume went into degraded mode.
We swapped the disk out and the volume regenerated automatically.  It took
4.5 hours, but it regenerated.  During that 4.5-hour rebuild, two things
can still go wrong: (1) you can lose another disk and your whole volume is
toast, or (2) you can find out the new disk you swapped in is bad.  Anyway,
this whole procedure did not require any downtime.  The OS (Mandriva 2007)
didn't notice a thing.  Granted, I/O was a bit slower during the degraded
period and rebuild.

http://www.areca.com.tw/

The RAID card requires a PCI-X slot.  It is hardware RAID, and I think
it is serviceable from the Linux side, but we use the NIC port on the card
to talk to its built-in web server.  The NIC port can also be configured
to send SNMP traps or email when the RAID changes state.  The card
also emits a loud beep (much like a UPS overload) that you can
acknowledge and turn off.  That takes care of the case where email
notification fails.

Another group referred us to this card and to a company that sells the
card pre-installed in enclosures; if you want that referral, I will email it
to you off-list.  Or you can roll your own, since the card can be found on
its own.

Rob

# dmesg | grep -i areca

ARECA RAID ADAPTER0: FIRMWARE VERSION V1.39 2006-2-9
scsi0 : Areca SATA Host Adapter RAID Controller( RAID6 capable)

At RAID 5, a 4 x 500 GB SATA volume will net you about 1.4 TB of space.

Filesystem            Size  Used Avail Use% Mounted on
/dev/sdd1             1.4T  472G  834G  37% /arsc
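The 1.4T that df reports is consistent with the parity overhead; here's a quick back-of-the-envelope sketch (decimal TB, ignoring filesystem overhead and the decimal-vs-binary unit difference):

```python
# Rough usable capacity for common RAID levels: RAID 1 mirrors (one
# disk's worth of data), RAID 5 spends one disk on parity, RAID 6 two.
def usable_tb(n_disks, disk_gb, level):
    data_disks = {"raid1": 1, "raid5": n_disks - 1, "raid6": n_disks - 2}[level]
    return data_disks * disk_gb / 1000.0  # decimal TB

print(usable_tb(4, 500, "raid5"))  # 1.5 decimal TB; df shows ~1.4T
print(usable_tb(4, 500, "raid6"))  # 1.0 -- the price of two-disk tolerance
```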

# lspci | grep -i areca
02:0e.0 RAID bus controller: Areca Technology Corp. ARC-1160 16-Port
PCI-X to SATA RAID Controller

# uname -a
Linux some.host.edu 2.6.22.1 #1 SMP Thu Jul 12 16:44:37 AKDT 2007 i686
Dual Core AMD Opteron(tm) Processor 285 GNU/Linux

On Mon, October 15, 2007 9:44 pm, Gilbert Sebenste wrote:
On Mon, 15 Oct 2007, Michael Dross wrote:

Hope this is not too far off topic...

SAS is SCSI: "Serial Attached SCSI."  It's just a newer/better version,
so to speak.  SAS drives are to SATA what SCSI was to IDE, in terms of
performance, if that makes any sense.  Overly simplified, but hopefully
it draws a connection.

Now the confusing part is that a SAS-designed backplane and controller
will work with SATA II drives.  But most folks who have paid the premium
for SAS RAID controllers need the performance and usually install SAS
drives, despite their higher cost.

Cool. Well, here's where I ask another question.

Starting in February, as UNIDATA points out oh so well, for those of us
who love the Level 2 radar data...we're going to love it a lot more. To
the tune of 2.3 times more, in terms of file size. Only the lower tilts
will have the "super resolution", but let's face it: those file sizes
aren't going to be small.

So I am thinking this. I am on a pretty tight budget, and yet I want the
Level 2 data...from every site...

I buy a RAID 1 array. This means I have two 750 GB SATA drives, running
SATA 1 until either the kernel, the OS, or the hardware firmware gets
straightened out. That gives me 1.5 Gb/sec of interface bandwidth on each
drive. If one hard drive blows up, everything is still cool and things
keep chugging along. And I hope things can be rebuilt onto the replacement
drive automagically.

So my questions are:

1. Is this going to be fast enough to handle Level 2 data starting next
spring?

2. How do you set this up?

3. What specific hardware is needed? (Yes, I've never done this before.)

4. Or do I tell my boss that I REALLY need SCSI or SAS drives to do a
RAID 1 array with what I am going to do?
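On question 2, if you end up with two plain SATA drives and no hardware controller, Linux software RAID (md) is one common way to build the mirror. A minimal sketch, assuming hypothetical partitions /dev/sda1 and /dev/sdb1 and a mount point /data; do not run this against disks that hold data:

```shell
# Create a two-disk RAID 1 mirror from the two partitions.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1

# Put a filesystem on the mirror and mount it.
mkfs.ext3 /dev/md0
mount /dev/md0 /data

# Watch the initial sync (and any later rebuild) progress.
cat /proc/mdstat

# After swapping a failed drive, re-add the replacement and md
# rebuilds the mirror in the background, no downtime needed.
mdadm /dev/md0 --add /dev/sdb1
```

With a hardware card like the Areca, the controller does all of this itself and the OS never sees the individual disks.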

*******************************************************************************
Gilbert Sebenste                                          (My opinions only!)
Staff Meteorologist, Northern Illinois University
E-mail: sebenste@xxxxxxxxxxxxxxxxxxxxx
web: http://weather.admin.niu.edu
*******************************************************************************
_______________________________________________
ldm-users mailing list
ldm-users@xxxxxxxxxxxxxxxx
For list information or to unsubscribe,  visit:
http://www.unidata.ucar.edu/mailing_lists/




--
Gerry Creager -- gerry.creager@xxxxxxxx
Texas Mesonet -- AATLT, Texas A&M University
Cell: 979.229.5301 Office: 979.862.3982 FAX: 979.862.3983
Office: 1700 Research Parkway Ste 160, TAMU, College Station, TX 77843


