I am setting up a large RAID array for a variety of functions that aren't very performance critical. I'm planning to use LVM on top of the array so I can carve up the space for different functions, with the flexibility to adjust things later if necessary.
However, I'm finding that a filesystem on LVM is many times slower than one on the raw partition. I expected some overhead, but not 10x.
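For context, the flexibility I'm after looks roughly like this (vg_data and lv_data2 are placeholder names, not my actual layout):

# Grow a logical volume and its ext4 filesystem later, without repartitioning
lvextend -L +1T /dev/vg_data/lv_data2
resize2fs /dev/vg_data/lv_data2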
Specifics of my test configuration:
Dell PowerEdge 1950 with 12G RAM
Dell PERC 6/E RAID controller
Dell MD1000 disk enclosure
8 x Seagate 2TB SATA drives in RAID-6 config (otherwise using default settings in OpenManage)
Operating System is CentOS 6.3
Total space available is 12TB, partitioned using parted as follows:
Disk /dev/sdb: 12.0TB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Number  Start   End     Size     File system  Name   Flags
 1      17.4kB  10.0TB  10000GB               data1  lvm
 2      10.0TB  12.0TB  1999GB   ext4         data3
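A parted sequence roughly like this produces that layout (sizes approximate):

parted /dev/sdb mklabel gpt
parted /dev/sdb mkpart data1 0% 10TB          # partition 1, to be used by LVM
parted /dev/sdb set 1 lvm on
parted /dev/sdb mkpart data3 ext4 10TB 100%   # partition 2, plain ext4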
I then created a 2TB ext4 filesystem directly on the raw /dev/sdb2 partition and mounted it on /data3, and created a 2TB logical volume out of the LVM partition, also formatted with ext4 and mounted on /data2.
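The setup steps amount to something like this (again, vg_data and lv_data2 stand in for my actual names):

# Raw-partition filesystem
mkfs.ext4 /dev/sdb2
mkdir -p /data3
mount /dev/sdb2 /data3

# LVM-backed filesystem
pvcreate /dev/sdb1
vgcreate vg_data /dev/sdb1
lvcreate -L 2T -n lv_data2 vg_data
mkfs.ext4 /dev/vg_data/lv_data2
mkdir -p /data2
mount /dev/vg_data/lv_data2 /data2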
When I compare basic read/write performance between the two, the difference is shocking. Using:
time sh -c "dd if=/dev/zero of=ddfile bs=8k count=2000000 && sync"
and
time dd if=ddfile of=/dev/null bs=8k
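To keep the page cache from flattering the read numbers, the cache can be dropped between the write and the read, e.g.:

sync
echo 3 > /proc/sys/vm/drop_caches   # flush page cache, dentries and inodes (as root)
time dd if=ddfile of=/dev/null bs=8k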
Below are the averages of several runs:
Raw partition:
Read: 48.3 secs
Write: 70.2 secs
LVM partition:
Read: 288.3 secs
Write: 701.6 secs
I'm stumped as to why the LVM-backed filesystem is so much slower.
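Happy to post more detail if it helps; for example, the extent layout can be dumped with:

pvs -o +pe_start    # where the LVM data area starts on the partition
lvs -o +devices     # which PV segments each logical volume maps to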