Extremely poor performance with LVM vs. raw disk

I am setting up a large RAID array for a variety of functions that aren’t very performance-critical. I’m trying to use LVM on top of the array so I can carve up the space for different functions, with the flexibility to adjust things later if necessary.

However, I’m finding that the performance of the LVM-based filesystem is many times slower than the raw partition. I’d expect some overhead, but not 10x.

Specifics of my test configuration:
Dell PowerEdge 1950 with 12 GB RAM
Dell PERC 6/E RAID controller
Dell PowerVault MD1000 disk enclosure
8 x Seagate 2TB SATA drives in RAID-6 config (otherwise using default settings in OpenManage)
Operating System is CentOS 6.3

Total space available is 12TB and partitioned using parted as follows:
Disk /dev/sdb: 12.0TB
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number  Start   End     Size     File system  Name   Flags
 1      17.4kB  10.0TB  10000GB               data1  lvm
 2      10.0TB  12.0TB  1999GB   ext4         data3

I then created a 2TB ext4 filesystem on the /dev/sdb2 physical partition, mounted on /data2, and created a 2TB logical volume from the LVM partition, also formatted with ext4 and mounted on /data2.
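Roughly, the LVM side was set up like this (the VG/LV names here are just placeholders, not necessarily what I used):

pvcreate /dev/sdb1
vgcreate vg_data /dev/sdb1
lvcreate -L 2T -n lv_data vg_data    # 2TB logical volume
mkfs.ext4 /dev/vg_data/lv_data
mount /dev/vg_data/lv_data /data2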

Then, when I compare basic read/write performance, the difference is shocking. Using:
time sh -c "dd if=/dev/zero of=ddfile bs=8k count=2000000 && sync"
and
time dd if=ddfile of=/dev/null bs=8k
Below are the averages from several attempts.

Raw partition:
Read: 48.3 secs
Write: 70.2 secs

LVM partition:
Read: 288.3 secs
Write: 701.6 secs

I’m stumped as to why it’s so much slower.

Hi, your problem is almost certainly your RAID setup.

In general hardware RAID should be avoided, Dell implementations especially. Configure each drive as a single-drive RAID-0 or JBOD device (depending on your RAID BIOS) and then run Linux RAID10 or ZFS RAID-Z2 on top. I would expect the setup you describe to deliver ~350 MB/sec, whereas with a software RAID implementation you should see closer to 1000 MB/sec on that hardware.
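As a rough sketch with mdadm (device names are placeholders for however the individual disks appear once they’re exposed as JBOD):

# 8-disk Linux software RAID10, then ext4 directly on top
mdadm --create /dev/md0 --level=10 --raid-devices=8 /dev/sd[b-i]
mkfs.ext4 /dev/md0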

There are many (subtle) reasons for the slowdown you’re seeing, and you can tune the performance up, but it simply isn’t worth it given the underlying overhead your hardware RAID is already imposing. (You can check whether your RAID driver supports the merge_bvec_fn() function; I’m guessing not …)
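If you do want to rule out the usual LVM suspects first, check that the PV data area lines up with your RAID stripe and that the LV read-ahead is sane, e.g.:

pvs -o +pe_start /dev/sdb1     # offset of the PV data area; should align with the RAID stripe
blockdev --getra /dev/VG/LV    # read-ahead in 512-byte sectors; substitute your own VG/LV names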

I would strongly recommend using ZFS with RAID-Z2, which is effectively RAID-6, but it will stripe reads across all devices, so you should see a full 900 MB/sec+, AND you can allocate block devices directly from ZFS pools without using LVM.
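Building the pool itself is a one-liner, along these lines (pool and device names are just examples, and the PERC would need to present the disks individually first):

# 8 drives in a double-parity RAID-Z2 vdev
zpool create pool raidz2 sdb sdc sdd sde sdf sdg sdh sdi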

Then, for example:

zfs create -s -b 4096 -V 12G pool/images/block1

will create a block device:

/dev/pool/images/block1

This device is a sparse device, so it will only eat space as you actually use it, AND you can turn on transparent compression, AND you can turn on automatic deduplication. Makes LVM look a little old / sad … :slight_smile:
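Compression and dedup are just properties you flip on per volume, e.g. (lz4 assumes a reasonably recent ZFS release, and dedup wants plenty of RAM):

zfs set compression=lz4 pool/images/block1
zfs set dedup=on pool/images/block1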

There is a HOWTO on getting ZFS going here: Linux.UK - Articles, News and Events for all things Linux