
Raid 10

Started by makin, October 01, 2010, 11:39:12 AM




Hi Guys,

Just wanted to confirm something: am I right in thinking that you require a 4-drive minimum for RAID 10? I've recently seen conflicting information on this subject.



Absolute minimum of 4. Since a RAID 10 array is a stripe of mirrors (RAID 0 overseeing multiple RAID 1 arrays), the number of physical disks clearly has to be an even number (not including online spares).

That said, I have never set up such a small RAID 10 array.  Typically, the arrays I set up for customers are 16 spindles.

EDIT: Ahh, I think I see what you're getting at - as spookily enough I have just logged onto an HP Smart Array BIOS and it clearly says RAID 1+0 on a 2 disk set.  To me, that's just simple mirroring...

Mad Penguin

I'm afraid RAID10 can be set up on any number of disks, so long as you have at least two. I currently have 2, 3 and 5 drive sets in operation.

For best performance, use this command to set up:

mdadm --create /dev/md<x> --chunk <y> -p f2 -n <z> {partitions}

Where <x> is a free device, say "0"
Where <y> is your chunk size, currently I'm [typically] using 512 in workstations and 1024 in servers
Where <z> is the number of devices in the RAID
and {partitions} is a space separated list of devices to include (where the number of devices == <z>)
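Filling in those placeholders, a two-disk far-2 array might be created like this (the device names /dev/sdc2 and /dev/sde2 and the 512K chunk size are illustrative only; the echo is just so you can inspect the assembled command before running it as root):

```shell
# Hypothetical values: free device md0, 512K chunks, 2 members
x=0; y=512; z=2
cmd="mdadm --create /dev/md$x --chunk $y -p f2 -n $z /dev/sdc2 /dev/sde2"
echo "$cmd"    # review, then run as root against real partitions
```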

Note, RAID10 and RAID1+0 are two different things .. RAID10 on two disks is essentially RAID1 with one critical difference: it will allow you to stripe reads (see the -p f2 above) whereas RAID1 will not.
(i.e. RAID10 tries to double your read performance)

Sample of RAID10 in action;

[email protected]:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid10 sdc2[0] sde2[1]
      31461376 blocks 1024K chunks 2 far-copies [2/2] [UU]
md2 : active raid10 sdc5[0] sda5[3] sdb5[4] sde5[2] sdd5[1]
      720916480 blocks 1024K chunks 2 far-copies [5/5] [UUUUU]
md3 : active raid10 sde6[1] sdc6[0]
      113321984 blocks 256K chunks 2 far-copies [2/2] [UU]
md1 : active raid10 sde3[1] sdc3[0]
      6297088 blocks 256K chunks 2 far-copies [2/2] [UU]


Yeah, the kernel-level RAID10 is a bizarre implementation of RAID10.  I have no clue how this works in practice with an odd number of spindles...

Mad Penguin

Works tho', see disk performance here;

http://linuxforums.org.uk/general-discussion/the-linux-pc-quick-review/msg26562/?topicseen#new
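For what it's worth, the way md RAID10 copes with odd spindle counts is that -p f2 just means "keep 2 copies of every chunk, far apart on the disk set" — the copies rotate across all members rather than being tied to fixed mirror pairs, so usable space is simply raw space divided by the copy count. A quick back-of-envelope sketch (the disk sizes are illustrative):

```shell
# Usable capacity of md RAID10 with N copies (near or far layout):
# total raw space divided by the number of copies - odd disk counts are fine.
raid10_usable_gb() {
  disks=$1; size_gb=$2; copies=$3
  echo $(( disks * size_gb / copies ))
}
raid10_usable_gb 5 300 2    # 5 x 300GB, 2 far-copies -> 750
```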




You know, those stats aren't bad at all - especially as those are cheap & nasty drives.

I sneakily ran the same command on an HP DL580 G5, on a hardware RAID10 array (Smart Array P400 controller) consisting of 8 x 300GB SAS disks @ 15,000rpm:

[[email protected]<censored> ~]# hdparm -tT /dev/mapper/vgDataInternal-lvDataInternal

Timing cached reads:   8804 MB in  2.00 seconds = 4406.48 MB/sec
Timing buffered disk reads:  566 MB in  3.01 seconds = 188.35 MB/sec

EDIT: Same machine, second controller (also P400 Smart Array), this time hardware RAID5 (8 x 75GB SAS @ 15,000rpm):


Timing cached reads:   8764 MB in  2.00 seconds = 4385.79 MB/sec
Timing buffered disk reads:  140 MB in  3.03 seconds =  46.25 MB/sec

I ran the RAID5 test several times, and the highest buffered disk read was 50.4 MB/sec.
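If anyone wants to repeat that kind of best-of-N test without eyeballing the output, the MB/sec figure can be pulled out of hdparm's "buffered disk reads" line with awk. The loop is commented out because hdparm needs root and a real block device (/dev/md2 here is just an example); the echo demonstrates the parse on a captured line:

```shell
# Pull the throughput figure from hdparm -tT's "buffered disk reads" line.
parse_hdparm() {
  awk '/buffered disk reads/ { print $(NF - 1) }'
}

# Best of five runs (needs root and a real block device):
# for i in 1 2 3 4 5; do hdparm -tT /dev/md2 | parse_hdparm; done | sort -n | tail -1

# Demonstration on a captured hdparm output line:
echo " Timing buffered disk reads:  140 MB in  3.03 seconds =  46.25 MB/sec" | parse_hdparm
# prints 46.25
```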


Appreciate the input on that guys.

Mad Penguin

Wow! That really is slow given the kit spec!

This is "the beast" running 9 busy(ish) virtuals under KVM .. (one of which is mail.linux.co.uk ... )

[email protected]:~# hdparm -tT /dev/md2

Timing cached reads:   6724 MB in  2.00 seconds = 3362.74 MB/sec
Timing buffered disk reads:  1120 MB in  3.02 seconds = 370.88 MB/sec

Just to clarify, this is 5 x SATA II on a RAID10 .. (!)
http://linux.co.uk/2010/08/building-a-beast/

[email protected]:~# virsh list
Id Name                 State
  2 dns2                 running
  3 gluster2             running
  4 linuxmail            running
  5 nx1                  running
  6 pabx_sax             running
  7 pera2                running
  8 wp1                  running
  9 <redact>            running
10 <redact>           running